Examples: Zero-, One-, and Few-Shot
Use demonstrations to steer behavior, balancing exemplar quality, order effects, and when to skip examples entirely.
One-Shot Demonstrations — The Mic-Drop Demo for Prompts
You already fed the model solid context and learned how to pin sources — now give it one clean example and watch it generalize. Like teaching someone to dance by showing one perfect move.
What is a one-shot demonstration (and why it's the sweet spot)
A one-shot demonstration is when you give the model exactly one worked example of the input→output mapping you want, then ask it to do the same for a new input. It's the middle child between zero-shot (no examples) and few-shot (many examples). One-shot is lean, directive, and often surprisingly powerful.
Use one-shot when:
- You have a clear, repeatable format to teach.
- You want stronger guidance than zero-shot but don't want to bloat the prompt with lots of examples.
- You're testing how well the model generalizes from a single exemplar.
Why pick one-shot over the others? Short answer: efficiency + specificity. Long answer: models are pattern-matchers; one good pattern often nudges behavior in predictable ways without overwhelming context windows.
Anatomy of a clean one-shot prompt (builds on your grounding practices)
You already learned about structured context blocks, delimiters, and source pinning. Great — now we combine those with a single demonstration.
Key parts:
- System or instruction block — highest-level goals (tone, constraints).
- Grounding / Context block — facts, pinned sources, timestamps (if needed).
- Delimiter — separate the example from other context to prevent leakage.
- One-shot example — one input and its expected output, clearly labeled.
- New task — the fresh input the model should apply the pattern to.
A few rules of thumb:
- Always label the example as Example / Demonstration. Models like explicit signage.
- Use delimiters (e.g., `===CONTEXT===`, `===EXAMPLE===`) to avoid context bleeding.
- Tell the model not to repeat internal context in final outputs unless requested.
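The key parts and rules of thumb above can be sketched as a small prompt builder. This is a minimal sketch, not a required API: the function name, block labels, and delimiter strings are illustrative conventions of this article, and the resulting string would be sent to whatever model client you use.

```python
def build_one_shot_prompt(system: str, context: str,
                          example_input: str, example_output: str,
                          new_input: str) -> str:
    """Assemble the five parts: instruction, grounding, delimited
    one-shot demonstration, and the new task."""
    return "\n".join([
        f"SYSTEM: {system}",
        "===CONTEXT===",
        context,
        "===END CONTEXT===",
        "===DEMONSTRATION===",          # explicit signage, per the rules above
        f"Input: {example_input}",
        f"Desired Output: {example_output}",
        "===END DEMONSTRATION===",
        f"NEW INPUT: {new_input}",
        "Task: Follow the demonstration format. "
        "Do not repeat internal context in your output.",
    ])

prompt = build_one_shot_prompt(
    system="Summarize in 2-3 plain-English bullets.",
    context="Source: Master_Service_Agreement_v3.pdf (pinned)",
    example_input="The Provider shall indemnify and hold harmless...",
    example_output="- Provider pays for third-party claims caused by its negligence.",
    new_input="If either party delays delivery beyond 30 days...",
)
print(prompt.splitlines()[0])  # SYSTEM: Summarize in 2-3 plain-English bullets.
```

Keeping assembly in one place makes it easy to swap the demonstration or toggle delimiters without retyping the whole prompt.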
Example: Legal clause → Plain-English bullets (with pinned source)
Imagine you want the model to convert dense legal clauses into 2–3 plain-English bullet points. Here's a pragmatic, safe one-shot prompt that builds on your previous grounding work.
SYSTEM: You are a concise legal-summaries assistant. Do not invent facts. If unsure, say "Insufficient information." Use 2-3 bullets, each <= 20 words.
===PINNED SOURCE===
Source: Master_Service_Agreement_v3.pdf (pinned)
Last-updated: 2026-02-01
===END PINNED SOURCE===
===DEMONSTRATION===
Input Clause:
"The Provider shall indemnify and hold harmless the Client from any third-party claims resulting from Provider's negligence, excluding claims arising from Client's gross negligence or willful misconduct."
Desired Output:
- Provider pays for third-party claims caused by Provider negligence.
- Client not covered for claims due to its own gross negligence or willful misconduct.
===END DEMONSTRATION===
NEW INPUT:
"If either party delays delivery beyond 30 days due to force majeure, the other party may suspend performance without termination rights, unless delay exceeds 120 days."
Task: Provide a 2-3 bullet plain-English summary, following the demonstration format. Do not include the pinned source text in your output.
Why this works: the pinned source gives legal context (prevent stale/conflicting facts), delimiters prevent leakage, and the one-shot shows the exact style and brevity you want.
When one-shot fails (and how to fix it)
Common pitfalls:
- Overfitting to the example: The model parrots the structure but misses nuance. Fix: pick an example that surfaces the edge cases you care about.
- Ambiguous instruction: If the example doesn't expose a rule, the model guesses. Fix: annotate the example with short comments or constraints.
- Stale example: If the example relies on out-of-date facts, update it or include a timestamp in the pinned source.
Pro tips:
- If you see the model repeating example-specific words too literally, add: "Generalize—do not reuse specific example wording unless present in new input."
- If you need stylistic variety, include a label: "Tone: Formal / Friendly" in the system block.
Quick comparison: Zero-, One-, Few-Shot (cheat-sheet)
| Mode | When to use | Pros | Cons |
|---|---|---|---|
| Zero-shot | When task is high-level or the model already knows the domain | Fast; minimal prompt | Less predictable; needs strong instruction |
| One-shot | When you need a clear mapping but small prompt | Efficient guidance; consistent style | May under-specify edge cases |
| Few-shot | When you need robust coverage of variations | High reliability across edge cases | Larger prompt; costlier; longer to craft |
Exercises: Try these prompts and notice the difference
- Swap the demonstration to an intentionally poor example and see how output degrades. What changed?
- Add a second demonstration and compare results — did it improve reliability? Where did it help most?
- Remove the delimiters and test: do you get context leakage (the model echoing internal notes)?
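The exercises above are easier to run rigorously if each variant differs in exactly one variable. Here is a minimal sketch of such a harness; the function and its parameters are hypothetical, and you would pass each generated string to your own model client for comparison.

```python
def build_variant(system: str, demos: list[tuple[str, str]],
                  new_input: str, use_delimiters: bool = True) -> str:
    """Build a prompt variant: vary the number of demonstrations,
    or drop the delimiters, while holding everything else fixed."""
    parts = [f"SYSTEM: {system}"]
    for demo_input, demo_output in demos:
        if use_delimiters:
            parts.append("===DEMONSTRATION===")
        parts += [f"Input: {demo_input}", f"Desired Output: {demo_output}"]
        if use_delimiters:
            parts.append("===END DEMONSTRATION===")
    parts += [f"NEW INPUT: {new_input}",
              "Task: Follow the demonstration format."]
    return "\n".join(parts)

demos = [("clause A...", "- bullet A"), ("clause B...", "- bullet B")]
one_shot = build_variant("Summarize.", demos[:1], "clause C...")
two_shot = build_variant("Summarize.", demos, "clause C...")
no_delims = build_variant("Summarize.", demos[:1], "clause C...",
                          use_delimiters=False)
```

Comparing `one_shot` vs `two_shot` answers the second exercise; comparing `one_shot` vs `no_delims` answers the third.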
Ask yourself: Is the model learning a rule or just copying phrasing? That detective habit pays off.
Closing: Key takeaways (aka the mic-drop)
- One-shot is your low-friction teacher. Give one clear example and the model will often replicate the mapping cleanly.
- Marry one-shot to grounding. Use pinned sources and delimiters to keep facts fresh and prevent leakage — you already know this from "Supplying Context and Grounding."
- Watch for overfitting. If the model is too literal, tweak the example or add a tiny generalization note.
Remember: the best prompts are experiments. Change one variable (example, delimiter, instruction) at a time and measure. Your next breakthrough is one tiny tweak away — probably the one that makes the model stop sounding like a robot and start sounding like an expert who actually cares.
Final challenge: create a one-shot prompt that teaches the model to turn an email into a 3-part response: summary, action items, tone score (1–5). Pin a relevant policy, include one demo, and see how it performs. Report back with receipts (and a meme).
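One possible starting point for the final challenge, sketched as a Python template. Everything here is illustrative: the pinned policy name, the demonstration email, and the section labels are assumptions of this sketch, not a prescribed format.

```python
# Hypothetical pinned policy and demo email; replace with your own.
EMAIL_TRIAGE_PROMPT = """\
SYSTEM: You triage emails. Output three labeled sections: Summary, \
Action Items, Tone Score (1-5). Do not invent facts.
===PINNED SOURCE===
Source: Support_Tone_Policy_v1.pdf (pinned)
===END PINNED SOURCE===
===DEMONSTRATION===
Input Email:
"Hi team, the March invoice is still unpaid and the client is asking \
twice a day. Can someone confirm by Friday?"
Desired Output:
Summary: March invoice unpaid; client following up frequently.
Action Items: Confirm payment status by Friday; reply to client.
Tone Score: 2 (frustrated but professional)
===END DEMONSTRATION===
NEW INPUT:
{email}
Task: Follow the demonstration format exactly."""

filled = EMAIL_TRIAGE_PROMPT.format(
    email="Thanks for the quick fix yesterday! All good on our end.")
print(filled.count("Tone Score"))  # appears in the instruction and the demo
```

Swap in a real email, send `filled` to your model, and check whether the three sections come back in order.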