Core Principles of Prompt Engineering
Adopt guiding principles—clarity, specificity, grounding, and iteration—to consistently steer models toward desired outcomes.
Example-Driven Guidance for Prompt Engineering (Core Principles Continued)
You already know LLMs are moody, literal, and easily distracted. Now let's teach them to behave like useful interns instead of chaotic fortune-tellers.
Hook: A tiny experiment you can do in 30 seconds
Ask an LLM: "Summarize the article about sustainable urban gardening."
Then ask: "Summarize the article about sustainable urban gardening for a 10-year-old who loves video games, in 3 bullets. Include one practical tip and one common myth. Keep it friendly and cite any claims."
Same task. Wildly different results. That's the power of prompt engineering, and example-driven prompts are the cheat codes.
What this section is about
This builds on what you learned about context and grounding, audience and tone control, and the prior module on LLM behavior (sensitivity to phrasing, non-determinism, alignment). Here we dive into example-driven guidance: how to craft prompts that use concrete examples, demonstrations, and iterative refinement so the model reliably produces the result you want.
Think of example-driven prompting as teaching by showing, not just telling. Humans learn faster with examples. So do LLMs.
Why examples beat vague instructions
- Reduces ambiguity. Instead of relying on the model to guess your preferred structure, you give it a target to imitate.
- Anchors style and format. Demonstrations lock tone, length, and structure more tightly than adjectives like 'concise' or 'funny'.
- Makes evaluation clearer. When you provide a gold-standard example, you can compare outputs programmatically.
Example-driven prompts are like giving the model a tiny template plus a role model. It's the difference between "make me a sandwich" and "make me a grilled cheese like this picture".
Patterns and templates that work (with examples)
1) Example + instruction + input (the imitate pattern)
Pattern:
- Provide a short example of desired output for a similar input
- Give the new input and ask the model to produce the same style
Example:
Example output (for input about composting):
- 2-sentence intro
- 3 numbered steps, each 1 sentence
- one myth to debunk at the end
Now do the same for: 'sustainable urban gardening' (article link: [provide link]).
Why it works: the model now has a concrete target to copy: structure, brevity, and the myth-debunk slot.
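The imitate pattern is easy to automate. Below is a minimal sketch that assembles an "example + instruction + input" prompt from a gold-standard example; the example text and function name are invented for illustration, not part of any real API.

```python
# A hand-written gold-standard output for a similar topic (composting),
# following the structure described above: intro, 3 steps, myth to debunk.
EXAMPLE_OUTPUT = """\
Composting turns kitchen scraps into rich soil. It is easier than most people think.
1. Collect fruit and vegetable scraps in a sealed bin.
2. Layer food scraps with dry leaves or shredded paper.
3. Turn the pile weekly and keep it slightly damp.
Myth to debunk: a well-managed compost pile does not have to smell bad."""

def build_imitate_prompt(example_output: str, new_topic: str) -> str:
    """Combine a gold-standard example with a new input for the model to imitate."""
    return (
        "Example output (for an article about composting):\n"
        f"{example_output}\n\n"
        f"Now produce output with exactly the same structure for: {new_topic}"
    )

prompt = build_imitate_prompt(EXAMPLE_OUTPUT, "sustainable urban gardening")
print(prompt)
```

The same template function can be reused for any topic, which keeps the structure of your prompts consistent across a whole batch of tasks.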
2) Few-shot demonstration for format and tone
Pattern: show 2-3 labeled examples with varied tones and then request a new output.
Example:
Input: 'Article A' -> Output (for policymakers): concise, formal
Input: 'Article B' -> Output (for teenagers): playful, 3 bullets with emoji
Now: Input: 'Article C' -> Output: like the teenagers example
This is especially powerful for audience control because you're showing exactly how tone maps to structure.
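Few-shot demonstrations are just concatenated input/output pairs, so they are straightforward to build programmatically. Here is a minimal sketch, with invented placeholder articles and outputs standing in for real examples:

```python
# Each shot: (input label, target audience, example output).
# The contents are placeholders for illustration only.
shots = [
    ("Article A", "policymakers", "Concise, formal summary of the key findings."),
    ("Article B", "teenagers", "3 playful bullets with emoji 🎮"),
]

def build_few_shot_prompt(shots, new_input: str, target_audience: str) -> str:
    """Render labeled demonstrations, then the new input awaiting completion."""
    lines = [
        f"Input: {inp} -> Output (for {audience}): {out}"
        for inp, audience, out in shots
    ]
    lines.append(f"Now: Input: {new_input} -> Output (for {target_audience}):")
    return "\n".join(lines)

result = build_few_shot_prompt(shots, "Article C", "teenagers")
print(result)
```

Because the audience label appears in every demonstration, the model sees the tone-to-audience mapping explicitly rather than having to infer it from adjectives.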
3) Error-correction example (show bad then good)
Pattern: show a bad example + corrected good example, then ask to improve a new draft.
Example:
Bad summary: too long, vague, no source
Good summary: 50 words, 2 facts with short citations
Now improve this draft: '...'
Why it works: the model learns the transformation, not just the output style.
Iterative refinement workflow (practical steps)
- Define success criteria: format, length, audience, factuality threshold.
- Write a first prompt using one of the patterns above.
- Run the model at a few settings (low temperature for more deterministic output; higher for creative variation).
- Compare outputs to example(s). Note consistent errors.
- Add a corrective example or constraint and rerun. Repeat until your success criteria are met.
Questions to ask while iterating:
- Is it hallucinating facts? Add grounding and ask for citations.
- Is the tone off? Drop in a more specific example of tone.
- Too verbose? Provide a length-limited example.
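The workflow above is a simple generate-check-correct loop. Here is a minimal sketch of that loop; `generate` is a stand-in for a real model call (stubbed here so the example runs offline), and the criteria and corrective text are invented for illustration:

```python
def generate(prompt: str) -> str:
    # Stub standing in for an LLM call; a real version would hit a model API.
    return "Urban gardening reuses small city spaces to grow food locally."

def meets_criteria(output: str, max_words: int, required_keyword: str) -> bool:
    """Success criteria defined up front: length limit plus a required keyword."""
    return len(output.split()) <= max_words and required_keyword in output.lower()

prompt = "Summarize the article about sustainable urban gardening."
output = generate(prompt)
for attempt in range(3):
    if meets_criteria(output, max_words=50, required_keyword="gardening"):
        break
    # A consistent failure earns a corrective constraint, then a rerun.
    prompt += " Keep it under 50 words and mention gardening explicitly."
    output = generate(prompt)

print(output)
```

In practice you would log each attempt's failures, since recurring errors tell you exactly which corrective example or constraint to add next.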
Quick reference table: bad prompt vs example-driven prompt
| Problem | Bad prompt | Example-driven fix |
|---|---|---|
| Vague format | 'Summarize article' | Provide example summary, ask to match style and length |
| Wrong audience | 'Explain this' | Give an example for the target audience and ask to emulate |
| Hallucinations | 'List facts' | Give an example item with citation format and ask to cite sources |
Concrete iterative example: converting a research abstract into a press release
- Bad prompt:
Write a press release for this abstract.
Result: generic, mismatched tone.
- Example-driven prompt:
Example press release for study X:
- 1-sentence hook
- 2 short paragraphs for findings
- quote from lead author
Now, using the same format and tone, write a press release for this abstract: [paste abstract]. Limit to 200 words. Include one simplified statistic and one quote attributed to the first author.
Result: predictable structure, correct tone, and easier evaluation.
Tips, traps, and pro moves
- Use counter-examples: show both what you want and what you don't want.
- Anchor with grounding: paste facts, data, or URLs in the prompt to reduce hallucination.
- Control randomness: set temperature low for reproducibility; sample at different temps for variety when exploring.
- Keep few-shot examples short and focused; too many examples can confuse the model.
- Programmatic testing: generate 50 outputs and compute simple metrics like average length, keyword presence, and citation format.
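Those simple metrics need nothing beyond the standard library. Here is a minimal sketch over a small batch of placeholder outputs (invented for illustration); a real run would collect the 50 generations first:

```python
import re
from statistics import mean

# Placeholder model outputs; in practice, collect these from your generations.
outputs = [
    "Composting cuts landfill waste by up to 30% [1].",
    "Urban gardens improve air quality and community ties [2].",
    "Gardening is fun.",
]

# Average length in words.
avg_length = mean(len(o.split()) for o in outputs)
# Fraction of outputs mentioning the required keyword.
keyword_rate = sum("garden" in o.lower() for o in outputs) / len(outputs)
# Fraction of outputs containing a bracketed numeric citation like [1].
citation_rate = sum(bool(re.search(r"\[\d+\]", o)) for o in outputs) / len(outputs)

print(avg_length, keyword_rate, citation_rate)
```

Even these crude numbers make regressions visible: if adding a new few-shot example drops the citation rate, you find out immediately instead of by eyeballing outputs.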
Pro tip: For alignment-sensitive tasks, include an example where the model refuses politely when asked to do something unsafe, then ask it to follow that refusal behavior.
Closing: how this ties back to earlier lessons
You already learned that LLMs are sensitive to phrasing, non-deterministic, and need grounding. Example-driven prompting takes those problems and turns them into tools: specificity reduces sensitivity, examples reduce non-determinism, and grounding examples reduce hallucination.
Key takeaways:
- Examples are the fastest way to teach a model your preferences.
- Combine examples with grounding, audience control, and iterative testing for reliable outputs.
- Measure, iterate, and be explicit: models are obedient mimics, not mind-readers.
Go try it: pick a mundane task you do every week and create a one-example prompt that makes the model do it right. If it still messes up, add a corrective example and try again. Repeat until your virtual intern behaves.
Version note: this is the continuation of core principles; for more on grounding and audience templates revisit positions 5 and 4 respectively.