
Generative AI: Prompt Engineering Basics

Examples: Zero-, One-, and Few-Shot


Use demonstrations to steer behavior, balancing exemplar quality, order effects, and when to skip examples entirely.


Few-Shot Prompt Patterns — The Gigantic Handshake You Give the Model

"If zero-shot is shouting instructions across the room and one-shot is handing a sticky note, few-shot is inviting the model to sit down, have tea, and watch you do the thing three times." — Your overly caffeinated TA

You already know when zero-shot is neat (quick, general tasks) and how one-shot demonstrations work (a single exemplar to nudge behavior). You also learned how to supply context and ground facts using structured blocks and delimiters. Few-shot is the next move: you give the model multiple carefully chosen demonstrations so it generalizes the pattern better — especially when the task is nuanced or the output format is strict.


TL;DR (but with style)

  • Few-shot = multiple exemplars (2–10 usually) to teach the model a pattern. Think of it as showing, not just telling.
  • Use clear I/O pairs, consistent formatting, and delimiters (we covered that in Supplying Context and Grounding) to make the model copy the pattern.
  • Patterns are recipes: pick the one that matches your task's complexity, example availability, and failure mode.

Why use few-shot? When one-shot stumbles

Imagine you taught someone to fold a complex origami crane with only one demo. Great if they’re close to your skill level. But if the steps are subtle, or you want them to generalize to different paper sizes, show a few variants. That's what few-shot does for models:

  • Reduces ambiguity in instruction interpretation
  • Encodes edge cases and formatting rules
  • Nudges the model toward a consistent style and the failure behavior you prefer

Ask yourself: "Is my task brittle to a single example? Do I care about output format? Are there multiple subcases?" If yes, go few-shot.


Few-Shot Prompt Patterns (with spicy examples)

Below are reliable patterns you can paste into your prompt cookbook.

1) Input–Output Pairing (The Classic)

Show several direct examples of input and expected output. Use identical delimiters.

Example (3-shot sentiment normalization):

Context: Convert user text to one of: Positive, Negative, Neutral.

### Example 1
Input: "Loved the movie, but the ending was meh." 
Output: Positive

### Example 2
Input: "It was terrible — I walked out." 
Output: Negative

### Example 3
Input: "It’s okay, some parts were cool." 
Output: Neutral

### Now you:
Input: "The acting was great but the plot was confusing."
Output:

Why it works: explicit pairs, limited noise, consistent labels.
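The classic pattern is also easy to generate programmatically, which keeps delimiters and labels identical by construction. A minimal sketch (the `Example` dataclass and `build_io_prompt` helper are illustrative names, not a library API):

```python
from dataclasses import dataclass

@dataclass
class Example:
    text: str   # raw user input
    label: str  # expected output label

def build_io_prompt(instruction: str, examples: list[Example], query: str) -> str:
    """Assemble an input-output few-shot prompt with identical '###' delimiters."""
    parts = [instruction, ""]
    for i, ex in enumerate(examples, start=1):
        parts += [f"### Example {i}", f'Input: "{ex.text}"', f"Output: {ex.label}", ""]
    parts += ["### Now you:", f'Input: "{query}"', "Output:"]
    return "\n".join(parts)

prompt = build_io_prompt(
    "Context: Convert user text to one of: Positive, Negative, Neutral.",
    [
        Example("Loved the movie, but the ending was meh.", "Positive"),
        Example("It was terrible - I walked out.", "Negative"),
        Example("It's okay, some parts were cool.", "Neutral"),
    ],
    "The acting was great but the plot was confusing.",
)
```

Because every exemplar passes through the same formatter, there is no chance of one example drifting to a different label style or delimiter.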


2) Role-Based Demonstrations (Acting Class)

Assign roles ("Act like an editor") and show examples in that persona.

Example snippet:

You are an "Economy Tweet Rewriter" — concise, witty, <280 chars.

Example 1
Original: "The GDP dropped slightly this quarter due to supply chain issues."
Rewrite: "GDP dipped — blame the supply chain kinks."

Example 2
Original: "Unemployment eased as jobs were added in services."
Rewrite: "Unemployment softened as service jobs trickled in."

Now rewrite: ...

Use when style + task both matter.
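With chat-style APIs, the persona and demonstrations are often split across message roles rather than concatenated into one string. A sketch of that layout (the function name and message shape follow the common system/user/assistant convention; nothing here is a specific vendor API):

```python
def role_prompt(persona: str, demos: list[tuple[str, str]], new_input: str) -> list[dict]:
    """Render a persona plus rewrite demonstrations as a chat-style message list."""
    messages = [{"role": "system", "content": persona}]
    for original, rewrite in demos:
        # Each demo becomes a user turn (original) answered by an assistant turn (rewrite).
        messages.append({"role": "user", "content": f"Original: {original}"})
        messages.append({"role": "assistant", "content": f"Rewrite: {rewrite}"})
    messages.append({"role": "user", "content": f"Original: {new_input}"})
    return messages

msgs = role_prompt(
    'You are an "Economy Tweet Rewriter" - concise, witty, under 280 characters.',
    [("The GDP dropped slightly this quarter due to supply chain issues.",
      "GDP dipped - blame the supply chain kinks.")],
    "Unemployment eased as jobs were added in services.",
)
```

Putting the demonstrations in assistant turns tells the model "this is what you said before," which tends to anchor voice more strongly than examples buried in a single user message.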


3) Chain-of-Thought (CoT) Exemplars

Show step-by-step reasoning before the final answer to teach the model to break down problems.

Use sparingly (and only if allowed by your policy/usage). Example for arithmetic logic:

Q: If A = 3x and B = A + 5, find B when x = 2.
Thought: A = 3*2 = 6. B = 6 + 5 = 11.
Answer: 11

Q: ...

Why: Helps on multi-step tasks but increases token cost.
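A CoT exemplar is just a question, a reasoning trace, and an answer in a fixed order, so it can be formatted mechanically. A small sketch (`cot_shot` is an assumed helper name):

```python
def cot_shot(question: str, steps: list[str], answer: str) -> str:
    """Format one chain-of-thought exemplar: question, reasoning steps, final answer."""
    return "\n".join([f"Q: {question}", "Thought: " + " ".join(steps), f"Answer: {answer}"])

shot = cot_shot(
    "If A = 3x and B = A + 5, find B when x = 2.",
    ["A = 3*2 = 6.", "B = 6 + 5 = 11."],
    "11",
)
```

Keeping the `Q:`/`Thought:`/`Answer:` labels identical across shots matters here for the same reason as everywhere else: the model copies the scaffold, not just the math.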


4) Contrastive Examples (Show Good vs Bad)

Present a positive exemplar and a negative exemplar with corrections.

Example:

Bad: "Summary: good movie."
Why bad: Too vague, missing specifics.
Correct: "Summary: A tight thriller that balances suspense with an emotional climax."

This is great when common mistakes exist and you want to steer the model away.


5) Template + Variables (The Fill-in-the-Blank)

Create a template with placeholders and show filled templates.

Example:

Template: "{Product} is {adjective} because {reason}."
Example: "The app is intuitive because the onboarding is short." 
Example: "The jacket is warm because of its insulated lining." 
Now: "Headphones are..."

Use when outputs must fit a rigid syntactic mold.
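The same fill-in-the-blank idea in code, using Python's built-in `str.format` (the field names are illustrative):

```python
TEMPLATE = "{product} is {adjective} because {reason}."

def fill(product: str, adjective: str, reason: str) -> str:
    """Instantiate the rigid template with concrete values."""
    return TEMPLATE.format(product=product, adjective=adjective, reason=reason)

sentence = fill("The app", "intuitive", "the onboarding is short")
# -> "The app is intuitive because the onboarding is short."
```

Generating your exemplars from one template guarantees they all fit the syntactic mold you want the model to imitate.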


6) Progressive (Curriculum) Shots

Start with simple examples, then progressively harder ones. Models generalize better if they see a skill ladder.

7) Error-First / Correction Shots

Show the model a common wrong attempt and the corrected version. Useful for editing and debugging tasks.


Quick Recipe: Crafting a Few-Shot Prompt (5 steps)

  1. Pick 3–7 high-quality exemplars; aim for diversity without contradiction.
  2. Keep formatting identical: delimiters, labels, spacing. Remember: the model loves consistency.
  3. Add a short instruction and any grounding/context block (use the same structured context techniques from "Supplying Context and Grounding").
  4. If format matters, include an explicit output schema or example output only.
  5. Test, iterate, and swap in contrastive examples for stubborn errors.
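Steps 1 and 2 of the recipe (quality exemplars, consistent labels) lend themselves to a quick automated check before anything reaches the prompt. A sketch (the function name and the specific checks are assumptions):

```python
def check_exemplars(examples: list[tuple[str, str]], allowed_labels: set[str]) -> list[str]:
    """Flag off-schema labels and duplicate inputs in a candidate exemplar set."""
    problems, seen = [], set()
    for i, (text, label) in enumerate(examples, start=1):
        if label not in allowed_labels:
            problems.append(f"example {i}: label {label!r} not in schema")
        if text in seen:
            problems.append(f"example {i}: duplicate input")
        seen.add(text)
    return problems

issues = check_exemplars(
    [("great film", "Positive"), ("great film", "Pos")],
    {"Positive", "Negative", "Neutral"},
)
```

Running a check like this every time you swap exemplars in step 5 catches the silent drift (a "Pos" where you meant "Positive") that otherwise surfaces as mysterious model errors.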

Pattern Comparison (cheat-sheet)

Pattern          | Best for                 | Pro / Con
I/O Pairs        | Classification, formatting | + Simple, reliable / - Needs good examples
Role-based       | Style-sensitive tasks    | + Aligns voice / - Fragile if role is ambiguous
Chain-of-Thought | Multi-step reasoning     | + Improves logic / - Verbose, costlier
Contrastive      | Avoiding specific errors | + Teaches avoidance / - Needs curated mistakes
Template         | Rigid formats            | + Ensures structure / - Less creative

Example: Grounded Few-Shot for Extraction (combining patterns)

We want to extract "company, revenue" from text and enforce JSON output.

[CONTEXT: Use only the article’s data. If missing, return null. Source: article_2026_03_01]

Example 1
Text: "Acme Corp reported $5M revenue in Q4."
Output: {"company":"Acme Corp","revenue":"$5M"}

Example 2
Text: "Beta LLC saw revenues of $12 million last year." 
Output: {"company":"Beta LLC","revenue":"$12 million"}

Example 3
Text: "No revenue figures were disclosed by Gamma Inc."
Output: {"company":"Gamma Inc","revenue":null}

Now extract from:
Text: "Delta Co announced quarterly revenue of $3,200,000 and expects growth."
Output:

Note: we used a context block at top (like you learned earlier) and explicit JSON outputs to lock the format.
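On the consuming side, it pays to verify that the model's reply actually matches the locked format. A minimal sketch using the standard `json` module (the validation rules simply mirror the schema shown in the exemplars above):

```python
import json

def parse_extraction(raw: str) -> dict:
    """Parse the model's reply and enforce the {company, revenue} schema."""
    data = json.loads(raw)
    if set(data) != {"company", "revenue"}:
        raise ValueError(f"unexpected keys: {sorted(data)}")
    if not isinstance(data["company"], str):
        raise ValueError("company must be a string")
    if data["revenue"] is not None and not isinstance(data["revenue"], str):
        raise ValueError("revenue must be a string or null")
    return data

record = parse_extraction('{"company":"Gamma Inc","revenue":null}')
```

If parsing fails, you have a concrete failure case to feed back into the prompt, often as a contrastive example.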


Final Notes (aka the mic drop)

Few-shot is a power move: it gives the model practice examples so it imitates your desired behavior. Use consistent delimiters and formatting (remember: feed the model the right facts at the right time). When deciding between zero-, one-, and few-shot, ask: "How many examples does this pattern need before it stops tripping over edge cases?" If more than one — few-shot.

"Teach, don't yell. Few-shot is sitting down and demonstrating like a responsible mentor — with coffee and sticky notes."

Key takeaways:

  • Use patterns (I/O, role-based, CoT, contrastive) purposefully.
  • Keep examples tight, consistent, and grounded.
  • Iterate: swap in new exemplars when you see systematic errors.

Go forth and prompt like you’re giving a masterclass. The model won’t learn origami, but it can learn consistency.
