
Generative AI: Prompt Engineering Basics

Examples: Zero-, One-, and Few-Shot

Use demonstrations to steer behavior, balancing exemplar quality, order effects, and when to skip examples entirely.


One-Shot Demonstrations — The Mic-Drop Demo for Prompts

You already fed the model solid context and learned how to pin sources — now give it one clean example and watch it generalize. Like teaching someone to dance by showing one perfect move.


What is a one-shot demonstration (and why it's the sweet spot)

A one-shot demonstration is when you give the model exactly one worked example of the input→output mapping you want, then ask it to do the same for a new input. It's the middle child between zero-shot (no examples) and few-shot (many examples). One-shot is lean, directive, and often surprisingly powerful.

Use one-shot when:

  • You have a clear, repeatable format to teach.
  • You want stronger guidance than zero-shot but don't want to bloat the prompt with lots of examples.
  • You're testing how well the model generalizes from a single exemplar.

Why pick one-shot over the others? Short answer: efficiency + specificity. Long answer: models are pattern-matchers; one good pattern often nudges behavior in predictable ways without overwhelming context windows.


Anatomy of a clean one-shot prompt (builds on your grounding practices)

You already learned about structured context blocks, delimiters, and source pinning. Great — now we combine those with a single demonstration.

Key parts:

  1. System or instruction block — highest-level goals (tone, constraints).
  2. Grounding / Context block — facts, pinned sources, timestamps (if needed).
  3. Delimiter — separate the example from other context to prevent leakage.
  4. One-shot example — one input and its expected output, clearly labeled.
  5. New task — the fresh input the model should apply the pattern to.

A few rules of thumb:

  • Always label the example as Example / Demonstration. Models like explicit signage.
  • Use delimiters (e.g., ===CONTEXT===, ===DEMONSTRATION===) to avoid context bleeding.
  • Tell the model not to repeat internal context in final outputs unless requested.
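The five parts and the rules of thumb above can be assembled with a small helper so every prompt gets the same labeled, delimited structure. A minimal sketch in Python; the function name, delimiters, and sample strings are illustrative, not a required convention:

```python
def build_one_shot_prompt(system, context, example_input, example_output, new_input):
    """Assemble the five blocks of a one-shot prompt with explicit delimiters."""
    return "\n".join([
        f"SYSTEM: {system}",
        "",
        "===CONTEXT===",           # grounding / pinned-source block
        context,
        "===END CONTEXT===",
        "",
        "===DEMONSTRATION===",     # the single worked example, clearly labeled
        f"Input:\n{example_input}",
        "",
        f"Desired Output:\n{example_output}",
        "===END DEMONSTRATION===",
        "",
        f"NEW INPUT:\n{new_input}",
        "",
        "Task: Follow the demonstration format for the new input. "
        "Do not repeat the context block in your output.",
    ])

prompt = build_one_shot_prompt(
    system="You are a concise summarizer. Use 2-3 bullets.",
    context="Source: policy_v2.pdf (pinned). Last updated: 2026-01-15.",
    example_input="Employees may carry over up to 5 unused PTO days per year.",
    example_output="- Up to 5 unused PTO days roll over each year.",
    new_input="Remote workers must be reachable during core hours, 10:00-15:00.",
)
```

Because the delimiters are emitted by one function, they stay consistent across every prompt you build, which makes later debugging (and automated checks) far easier.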

Example: Legal clause → Plain-English bullets (with pinned source)

Imagine you want the model to convert dense legal clauses into 2–3 plain-English bullet points. Here's a pragmatic, safe one-shot prompt that builds on your previous grounding work.

SYSTEM: You are a concise legal-summaries assistant. Do not invent facts. If unsure, say "Insufficient information." Use 2-3 bullets, each <= 20 words.

===PINNED SOURCE===
Source: Master_Service_Agreement_v3.pdf (pinned)
Last-updated: 2026-02-01
===END PINNED SOURCE===

===DEMONSTRATION===
Input Clause:
"The Provider shall indemnify and hold harmless the Client from any third-party claims resulting from Provider's negligence, excluding claims arising from Client's gross negligence or willful misconduct."

Desired Output:
- Provider pays for third-party claims caused by Provider negligence.
- Client not covered for claims due to its own gross negligence or willful misconduct.
===END DEMONSTRATION===

NEW INPUT:
"If either party delays delivery beyond 30 days due to force majeure, the other party may suspend performance without termination rights, unless delay exceeds 120 days."

Task: Provide a 2-3 bullet plain-English summary, following the demonstration format. Do not include the pinned source text in your output.

Why this works: the pinned source supplies legal context (preventing stale or conflicting facts), the delimiters prevent leakage, and the single demonstration shows the exact style and brevity you want.
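Constraints like "2-3 bullets, each <= 20 words" are also cheap to verify mechanically before you accept a model's answer. A hypothetical post-check (the function name and thresholds are illustrative):

```python
def check_summary_format(text, min_bullets=2, max_bullets=3, max_words=20):
    """Verify the output is min-to-max bullet lines, each within the word limit."""
    bullets = [line for line in text.strip().splitlines() if line.startswith("- ")]
    if not (min_bullets <= len(bullets) <= max_bullets):
        return False
    # Strip the "- " prefix, then count whitespace-separated words per bullet.
    return all(len(b[2:].split()) <= max_words for b in bullets)

good = ("- Either party may suspend performance if force majeure delays exceed 30 days.\n"
        "- Suspension carries no termination rights unless the delay passes 120 days.")
print(check_summary_format(good))                                   # True
print(check_summary_format("One long paragraph with no bullets."))  # False
```

A check like this turns a soft style instruction into a hard acceptance test, so a non-conforming response can be automatically retried instead of silently passed along.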


When one-shot fails (and how to fix it)

Common pitfalls:

  • Overfitting to the example: The model parrots the structure but misses nuance. Fix: pick an example that covers the relevant edge cases.
  • Ambiguous instruction: If the example doesn't expose the rule, the model guesses. Fix: annotate the example with short comments or constraints.
  • Stale example: If the example relies on out-of-date facts, update it or include a timestamp in the pinned source.

Pro tips:

  • If you see the model repeating example-specific words too literally, add: "Generalize—do not reuse specific example wording unless present in new input."
  • If you need stylistic variety, include a label: "Tone: Formal / Friendly" in the system block.

Quick comparison: Zero-, One-, Few-Shot (cheat-sheet)

  • Zero-shot: use when the task is high-level or the model already knows the domain. Pros: fast; minimal prompt. Cons: less predictable; needs strong instructions.
  • One-shot: use when you need a clear mapping but a small prompt. Pros: efficient guidance; consistent style. Cons: may under-specify edge cases.
  • Few-shot: use when you need robust coverage of variations. Pros: high reliability across edge cases. Cons: larger prompt; costlier; longer to craft.
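The three modes differ only in how many demonstrations you splice in, so one helper can cover the whole cheat-sheet. A sketch, with illustrative labels and sample data:

```python
def build_prompt(instruction, examples, new_input):
    """Zero-, one-, or few-shot depending on how many (input, output) pairs you pass."""
    parts = [instruction]
    for i, (inp, out) in enumerate(examples, start=1):
        parts.append(f"===EXAMPLE {i}===\nInput: {inp}\nOutput: {out}\n===END EXAMPLE {i}===")
    parts.append(f"NEW INPUT: {new_input}")
    return "\n\n".join(parts)

instruction = "Classify sentiment as positive or negative."
zero = build_prompt(instruction, [], "Great service!")
one  = build_prompt(instruction, [("Loved it", "positive")], "Great service!")
few  = build_prompt(instruction, [("Loved it", "positive"),
                                  ("Awful", "negative")], "Great service!")
```

This makes the trade-off concrete: moving from `zero` to `few` is just appending pairs, and the prompt grows linearly with each demonstration you add.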

Exercises: Try these prompts and notice the difference

  1. Swap the demonstration to an intentionally poor example and see how output degrades. What changed?
  2. Add a second demonstration and compare results — did it improve reliability? Where did it help most?
  3. Remove the delimiters and test: do you get context leakage (the model echoing internal notes)?
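For exercise-style A/B tests like these, it helps to generate the variants mechanically so exactly one variable changes per run. A sketch; the variant names, wrapper, and sample data are all made up for illustration:

```python
def make_variants(good_example, poor_example, new_input):
    """Build prompt variants that each differ from the baseline in one variable."""
    def wrap(example, delimited=True):
        demo = f"Input: {example[0]}\nOutput: {example[1]}"
        if delimited:
            demo = f"===DEMONSTRATION===\n{demo}\n===END DEMONSTRATION==="
        return f"Summarize the input in one sentence.\n\n{demo}\n\nNEW INPUT: {new_input}"

    return {
        "baseline": wrap(good_example),
        "poor_example": wrap(poor_example),                     # exercise 1
        "no_delimiters": wrap(good_example, delimited=False),   # exercise 3
    }

variants = make_variants(
    ("Q3 revenue rose 12% on strong cloud demand.", "Revenue grew 12% in Q3."),
    ("stuff happened", "things"),
    "Headcount grew 8% while attrition fell to 4%.",
)
```

Running each variant against the model and diffing the outputs shows you exactly which change caused which behavior, which is the "change one variable at a time" habit this lesson keeps pushing.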

Ask yourself: Is the model learning a rule or just copying phrasing? That detective habit pays off.


Closing: Key takeaways (aka the mic-drop)

  • One-shot is your low-friction teacher. Give one clear example and the model will often replicate the mapping cleanly.
  • Marry one-shot to grounding. Use pinned sources and delimiters to keep facts fresh and prevent leakage — you already know this from "Supplying Context and Grounding."
  • Watch for overfitting. If the model is too literal, tweak the example or add a tiny generalization note.

Remember: the best prompts are experiments. Change one variable (example, delimiter, instruction) at a time and measure. Your next breakthrough is one tiny tweak away — probably the one that makes the model stop sounding like a robot and start sounding like an expert who actually cares.

Final challenge: create a one-shot prompt that teaches the model to turn an email into a 3-part response: summary, action items, tone score (1–5). Pin a relevant policy, include one demo, and see how it performs. Report back with receipts (and a meme).
