© 2026 jypi. All rights reserved.

Generative AI: Prompt Engineering Basics
Core Principles of Prompt Engineering


Adopt guiding principles—clarity, specificity, grounding, and iteration—to consistently steer models toward desired outcomes.


Example-Driven Guidance for Prompt Engineering (Core Principles Continued)

You already know LLMs are moody, literal, and easily distracted. Now let's teach them to behave like useful interns instead of chaotic fortune-tellers.


Hook: A tiny experiment you can do in 30 seconds

Ask an LLM: "Summarize the article about sustainable urban gardening."

Then ask: "Summarize the article about sustainable urban gardening for a 10-year-old who loves video games, in 3 bullets. Include one practical tip and one common myth. Keep it friendly and cite any claims."

Same task. Wildly different results. That's the power of prompt engineering, and example-driven prompts are the cheat codes.
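The gap between those two prompts is just layered constraints. A minimal sketch of building the enriched version programmatically (the `enrich_prompt` helper and its parameters are hypothetical, not any library's API):

```python
def enrich_prompt(task, audience=None, format_spec=None, extras=()):
    """Layer optional constraints onto a bare task description.

    Each added constraint narrows the space of outputs the model
    can produce for the same underlying task.
    """
    parts = [task]
    if audience:
        parts.append(f"Write it for {audience}.")
    if format_spec:
        parts.append(f"Format: {format_spec}.")
    parts.extend(extras)
    return " ".join(parts)

# The bare prompt from the experiment above
bare = enrich_prompt("Summarize the article about sustainable urban gardening.")

# The enriched version: same task, stacked constraints
rich = enrich_prompt(
    "Summarize the article about sustainable urban gardening.",
    audience="a 10-year-old who loves video games",
    format_spec="3 bullets",
    extras=["Include one practical tip and one common myth.",
            "Keep it friendly and cite any claims."],
)
print(rich)
```

Each keyword argument you fill in removes one dimension of guesswork from the model.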


What this section is about

This builds on what you learned about context and grounding, audience and tone control, and the prior module on LLM behavior (sensitivity to phrasing, non-determinism, alignment). Here we dive into example-driven guidance: how to craft prompts that use concrete examples, demonstrations, and iterative refinements so the model reliably produces the result you want.

Think of example-driven prompting as teaching by showing, not just telling. Humans learn faster with examples. So do LLMs.


Why examples beat vague instructions

  • Reduces ambiguity. Instead of relying on the model to guess your preferred structure, you give it a target to imitate.
  • Anchors style and format. Demonstrations lock tone, length, and structure more tightly than adjectives like 'concise' or 'funny'.
  • Makes evaluation clearer. When you provide a gold-standard example, you can compare outputs programmatically.

Example-driven prompts are like giving the model a tiny template plus a role model. It's the difference between "make me a sandwich" and "make me a grilled cheese like this picture".


Patterns and templates that work (with examples)

1) Example + instruction + input (the imitate pattern)

  • Pattern:

    1. Provide a short example of desired output for a similar input
    2. Give the new input and ask the model to produce output in the same style
  • Example:

Example output (for input about composting):
- 2-sentence intro
- 3 numbered steps, each 1 sentence
- one myth to debunk at the end

Now do the same for: 'sustainable urban gardening' (article link: [provide link]).

Why it works: the model now has a concrete target to copy: structure, brevity, and the myth-debunk slot.
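The imitate pattern is mostly string assembly. A small sketch, with a hypothetical `imitate_prompt` helper:

```python
def imitate_prompt(example_output, new_input):
    """Pair a gold-standard example with a new input (the imitate pattern).

    The example fixes structure and length; the final line points
    the model at the new input to transform the same way.
    """
    return (
        "Example output (for a similar input):\n"
        f"{example_output}\n\n"
        f"Now do the same for: {new_input}"
    )

# The composting example from above as the gold standard
example = (
    "- 2-sentence intro\n"
    "- 3 numbered steps, each 1 sentence\n"
    "- one myth to debunk at the end"
)
prompt = imitate_prompt(example, "'sustainable urban gardening'")
print(prompt)
```

Keeping the example and the request in one template makes it easy to swap in new inputs without rewriting the instructions.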


2) Few-shot demonstration for format and tone

  • Pattern: show 2-3 labeled examples with varied tones and then request a new output.

  • Example:

Input: 'Article A' -> Output (for policymakers): concise, formal
Input: 'Article B' -> Output (for teenagers): playful, 3 bullets with emoji
Now: Input: 'Article C' -> Output: like the teenagers example

This is especially powerful for audience control because you're showing exactly how tone maps to structure.
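Few-shot demonstrations can be assembled the same way. A sketch assuming a hypothetical `few_shot_prompt` helper over (input, label, output) tuples:

```python
def few_shot_prompt(shots, new_input, match_label):
    """Assemble labeled demonstrations, then ask for output like one label.

    `shots` is a list of (input, audience_label, output_description)
    tuples; the closing line names which demonstration to emulate.
    """
    lines = [
        f"Input: {inp} -> Output (for {label}): {out}"
        for inp, label, out in shots
    ]
    lines.append(
        f"Now: Input: {new_input} -> Output: like the {match_label} example"
    )
    return "\n".join(lines)

# The two demonstrations from the pattern above
shots = [
    ("'Article A'", "policymakers", "concise, formal"),
    ("'Article B'", "teenagers", "playful, 3 bullets with emoji"),
]
prompt = few_shot_prompt(shots, "'Article C'", "teenagers")
print(prompt)
```

Because the demonstrations are data, you can vary which label the final line points at without touching the examples themselves.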


3) Error-correction example (show bad then good)

  • Pattern: show a bad example + corrected good example, then ask to improve a new draft.

  • Example:

Bad summary: too long, vague, no source
Good summary: 50 words, 2 facts with short citations
Now improve this draft: '...'

Why it works: the model learns the transformation, not just the output style.


Iterative refinement workflow (practical steps)

  1. Define success criteria: format, length, audience, factuality threshold.
  2. Write a first prompt using one of the patterns above.
  3. Run the model at a few settings (temperature low for deterministic; higher for creative).
  4. Compare outputs to example(s). Note consistent errors.
  5. Add a corrective example or constraint and rerun. Repeat until the metrics are satisfied.

Questions to ask while iterating:

  • Is it hallucinating facts? Add grounding and ask for citations.
  • Is the tone off? Drop in a more specific example of tone.
  • Too verbose? Provide a length-limited example.
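The workflow above can be sketched as a loop. Here `call_llm` is a placeholder stub (a real version would call your model provider's API), and `meets_criteria` encodes illustrative success criteria: a word limit and '[1]'-style citations.

```python
import re

def meets_criteria(text, max_words=60, require_citation=True):
    """Check a draft against simple success criteria from step 1."""
    if len(text.split()) > max_words:
        return False
    if require_citation and not re.search(r"\[\d+\]", text):
        return False
    return True

def call_llm(prompt, temperature):
    """Placeholder stub standing in for a real model call."""
    return "Urban gardens cut food miles and boost local biodiversity [1]."

def refine(prompt, temperatures=(0.0, 0.3, 0.7), max_rounds=3):
    """Run at a few settings; keep the first output meeting the criteria."""
    for _ in range(max_rounds):
        for temp in temperatures:
            draft = call_llm(prompt, temp)
            if meets_criteria(draft):
                return draft
        # No draft passed: add a corrective constraint and rerun.
        prompt += "\nKeep it under 60 words and cite sources like [1]."
    return None

result = refine("Summarize the article about sustainable urban gardening.")
```

The checker, not the loop, is the part worth keeping: it is what turns "compare outputs to examples" into something you can automate.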

Quick reference table: bad prompt vs example-driven prompt

  • Vague format. Bad prompt: 'Summarize article'. Example-driven fix: provide an example summary and ask the model to match its style and length.
  • Wrong audience. Bad prompt: 'Explain this'. Example-driven fix: give an example written for the target audience and ask the model to emulate it.
  • Hallucinations. Bad prompt: 'List facts'. Example-driven fix: give an example item with a citation format and ask the model to cite sources.

Concrete iterative example: converting a research abstract into a press release

  1. Bad prompt:
Write a press release for this abstract.

Result: generic, mismatched tone.

  2. Example-driven prompt:
Example press release for study X:
- 1-sentence hook
- 2 short paragraphs for findings
- quote from lead author
Now, using the same format and tone, write a press release for this abstract: [paste abstract]. Limit to 200 words. Include one simplified statistic and one quote attributed to the first author.

Result: predictable structure, correct tone, and easier evaluation.


Tips, traps, and pro moves

  • Use counter-examples: show both what you want and what you don't want.
  • Anchor with grounding: paste facts, data, or URLs in the prompt to reduce hallucination.
  • Control randomness: set temperature low for reproducibility; sample at different temps for variety when exploring.
  • Keep few-shot examples short and focused; too many examples can confuse the model.
  • Programmatic testing: generate 50 outputs and compute simple metrics like average length, keyword presence, and citation format.
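The programmatic-testing tip can be made concrete in a few lines; the metrics below (average word count, keyword hit rate, citation-format rate) are illustrative choices, not a standard:

```python
import re

def batch_metrics(outputs, keywords=("compost",), citation_pattern=r"\[\d+\]"):
    """Compute simple aggregate metrics over a batch of model outputs."""
    n = len(outputs)
    avg_words = sum(len(o.split()) for o in outputs) / n
    # Fraction of outputs mentioning at least one required keyword
    keyword_rate = sum(
        any(k.lower() in o.lower() for k in keywords) for o in outputs
    ) / n
    # Fraction of outputs containing a '[1]'-style citation
    citation_rate = sum(
        bool(re.search(citation_pattern, o)) for o in outputs
    ) / n
    return {"avg_words": avg_words,
            "keyword_rate": keyword_rate,
            "citation_rate": citation_rate}

# Two sample outputs standing in for a batch of 50
sample = [
    "Compost enriches soil and cuts waste [1].",
    "Urban gardens lower food miles.",
]
metrics = batch_metrics(sample)
print(metrics)
```

Even crude metrics like these let you compare prompt variants quantitatively instead of eyeballing outputs one at a time.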

Pro tip: For alignment-sensitive tasks, include an example where the model refuses politely when asked to do something unsafe, then ask it to follow that refusal behavior.


Closing: how this ties back to earlier lessons

You already learned that LLMs are sensitive to phrasing, non-deterministic, and need grounding. Example-driven prompting takes those problems and turns them into tools: specificity reduces sensitivity, examples reduce non-determinism, and grounding examples reduce hallucination.

Key takeaways:

  • Examples are the fastest way to teach a model your preferences.
  • Combine examples with grounding, audience control, and iterative testing for reliable outputs.
  • Measure, iterate, and be explicit: models are obedient mimics, not mind-readers.

Go try it: pick a mundane task you do every week and create a one-example prompt that makes the model do it right. If it still messes up, add a corrective example and try again. Repeat until your virtual intern behaves.


Version note: this is the continuation of the core principles chapter; for more on grounding and audience templates, revisit positions 5 and 4 respectively.
