
© 2026 jypi. All rights reserved.

Generative AI: Prompt Engineering Basics

Writing Clear, Actionable Instructions


Craft precise directives with scope, constraints, and acceptance criteria that remove ambiguity and reduce rework.


State Acceptance Criteria — because "close enough" is not a grade

"If you can't say how to fail, you can't say how to succeed." — a brutally honest prompt engineer (probably me)

You're already working with the good stuff: strong action verbs (we're telling the model to DO, not vaguely "help") and scope & boundaries (the playground fences are up). Now we need the referee: Acceptance Criteria — the explicit, testable rules that define what "done" looks like.

Why this matters:

  • It turns fuzzy intentions into measurable outcomes.
  • It saves time by preventing endless revisions.
  • It makes iteration precise: you can say "this output failed criterion 2" instead of "it’s not quite right."

This builds directly on the Core Principles — clarity, specificity, grounding, and iteration — by converting intent into objective checks you can validate quickly.


What are Acceptance Criteria (and no, they’re not optional)

Acceptance criteria are short, explicit statements that an output must satisfy to be considered correct. They are the pass/fail tests for your prompt.

  • Clarity: Each criterion should be unambiguous.
  • Specificity: Prefer measurable or verifiable conditions over impressions.
  • Grounding: Use concrete examples, formats, or constraints the model can follow.
  • Iterative-friendly: Easy to tweak and version when an outcome misses the mark.

Think of them as the checklist a QA tester uses — but for outputs from a neural network.


How to write acceptance criteria (step-by-step)

  1. Start from the goal (what you want the user to get).
  2. Translate goals into observable and measurable statements.
  3. Use formatting constraints (JSON, bullet list, 3 bullets, ≤ 150 words) whenever possible.
  4. Include content constraints (must mention X, avoid Y, cite sources, use present tense).
  5. Add quality constraints (tone: neutral, reading level: grade 8, accuracy: cite sources for facts).
  6. Prioritize: list must-pass items first; nice-to-have items last.

Quick template

Output must: (1) [format]; (2) [content requirements]; (3) [quality/tone]; (4) [length/time constraints].

Example: "Output must be a 3-bullet summary (format); include the three main causes (content); be neutral in tone and cite sources (quality); ≤120 words (length)."
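The template above can be sketched as a small helper that assembles criteria into a prompt string. A minimal illustration — the function and parameter names are my own, not a standard API:

```python
def build_prompt(task, fmt, content, quality, length):
    """Assemble a prompt from the four-slot acceptance-criteria template:
    (1) format; (2) content; (3) quality/tone; (4) length."""
    criteria = [
        f"(1) Format: {fmt}",
        f"(2) Content: {content}",
        f"(3) Quality/tone: {quality}",
        f"(4) Length: {length}",
    ]
    return task + "\n\nOutput must satisfy:\n" + "\n".join(criteria)

prompt = build_prompt(
    task="Summarize the provided article on climate policy.",
    fmt="a 3-bullet summary",
    content="include the three main causes",
    quality="neutral tone; cite sources",
    length="at most 120 words",
)
```

Keeping the four slots explicit makes it easy to version one criterion at a time when an output misses the mark.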


Examples: From vague to actionable

Bad prompt fragment (no acceptance criteria)

Summarize the article on climate policy.

Result: The model might produce 1 paragraph, 5 paragraphs, a tweet, or a rant about weather.

Good prompt + acceptance criteria

Summarize the provided article on climate policy.

Acceptance criteria:

  • Output must be exactly 3 bullets.
  • Each bullet ≤ 20 words.
  • Must explicitly state the main policy proposed, the target population, and the expected timeline.
  • No opinions or additional commentary.
  • Include up to 2 inline citations in parentheses (e.g., (Smith, 2020)).

Result: You get a standardized, verifiable output every time.
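Criteria this concrete can be checked mechanically. A rough sketch of such checks, assuming bullets start with "-" or "•" and citations look like "(Smith, 2020)" — both conventions are assumptions you should adapt:

```python
import re

def check_summary(output: str) -> dict:
    """Return a pass/fail map for the acceptance criteria above."""
    bullets = [ln for ln in output.splitlines()
               if ln.lstrip().startswith(("-", "•"))]
    # Citation format assumed: (Author, Year), e.g. (Smith, 2020).
    citations = re.findall(r"\([A-Z][A-Za-z]+, \d{4}\)", output)
    return {
        "exactly_3_bullets": len(bullets) == 3,
        "each_bullet_le_20_words": all(len(b.split()) <= 20 for b in bullets),
        "at_most_2_citations": len(citations) <= 2,
    }

sample = (
    "- Main policy: a carbon tax phased in by 2030 (Smith, 2020)\n"
    "- Target population: heavy industrial emitters\n"
    "- Expected timeline: full enforcement within five years"
)
results = check_summary(sample)
```

A failing key tells you exactly which criterion to cite when you iterate ("this output failed exactly_3_bullets"), instead of "it's not quite right."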


Vague vs. proper acceptance criteria

  • "Be concise" → "≤ 120 words, max 3 sentences per section"
  • "Make it friendly" → "Tone: conversational; use 1st or 2nd person; avoid slang"
  • "Cite sources" → "Cite at least 2 sources with URLs or (Author, Year) format"
  • "Give options" → "Provide 4 distinct options, each with 1-sentence pros/cons"

Mini-checklist: Types of acceptance criteria (use at least 2–3)

  • Format: (JSON, markdown, bullet list, table)
  • Structure: (intro, 3 items, conclusion)
  • Content inclusion: (must mention X, must not mention Y)
  • Style/tone: (formal, neutral, humorous)
  • Length limits: (word count, sentence count)
  • Accuracy: (provide sources, date ranges, no hallucinated facts)
  • Safety: (avoid medical/legal advice; include disclaimers)
  • Verifiability: (use numbers/dates/references)
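The checklist above can also be carried around as structured data rather than loose prose. One possible sketch — the field names mirror the checklist's types but are my own naming, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class AcceptanceCriteria:
    """Group criteria by type so they can be rendered into any prompt."""
    fmt: str = ""                 # Format: JSON, markdown, bullet list, table
    structure: str = ""           # Structure: intro, 3 items, conclusion
    must_mention: list = field(default_factory=list)   # Content inclusion
    must_avoid: list = field(default_factory=list)     # Content exclusion
    tone: str = ""                # Style/tone
    max_words: int = 0            # Length limit (0 = unlimited)

    def to_prompt_lines(self):
        lines = []
        if self.fmt:
            lines.append(f"Format: {self.fmt}")
        if self.structure:
            lines.append(f"Structure: {self.structure}")
        lines += [f"Must mention: {x}" for x in self.must_mention]
        lines += [f"Must not mention: {x}" for x in self.must_avoid]
        if self.tone:
            lines.append(f"Tone: {self.tone}")
        if self.max_words:
            lines.append(f"Length: no more than {self.max_words} words")
        return lines

criteria = AcceptanceCriteria(fmt="markdown bullet list", tone="neutral",
                              max_words=120, must_mention=["policy name"])
```

Because each criterion lives in a named field, versioning and A/B-testing individual criteria becomes a one-line change.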

Example: Full prompt that uses all our learned moves

You are a concise explainer. Summarize the attached policy memo.

Scope: Focus only on the policy recommendations (do not summarize background statistics).

Action: Provide a 4-bullet list (strong action verbs at work: Provide, List, State).

Acceptance criteria:

  • Output must be a Markdown bullet list of 4 items.
  • Each item must start with an action verb and be ≤ 18 words.
  • Each item must include the policy name and the intended target group.
  • Do not include analysis, evaluation, or implementation steps.
  • Total output length must be ≤ 80 words.

See how scope, verbs, and acceptance criteria compose a crisp, testable instruction? That’s prompt engineering synergy.
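Putting it together: here is one hedged sketch of how that composed prompt might be packaged for a chat-style model. The system/user message split follows a common convention; no specific provider or API is assumed:

```python
# Illustrative packaging only — the role names follow the widespread
# system/user chat convention, not any particular vendor's API.
PROMPT = """You are a concise explainer. Summarize the attached policy memo.

Scope: Focus only on the policy recommendations.

Acceptance criteria:
- Markdown bullet list of exactly 4 items.
- Each item starts with an action verb and is <= 18 words.
- Each item names the policy and its target group.
- No analysis, evaluation, or implementation steps.
- Total length <= 80 words.
"""

messages = [
    {"role": "system", "content": "You follow acceptance criteria exactly."},
    {"role": "user", "content": PROMPT},
]
```

Keeping the criteria inside the user message (rather than scattered across turns) keeps the pass/fail contract in one reviewable place.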


Automating checks (a tiny validator you can copy)

Here’s a runnable Python check you can use to automatically flag failures (the action-verb list and the keyword test for policy/target are illustrative stand-ins — adapt them to your task):

import re

ACTION_VERBS = {"provide", "list", "state", "summarize", "describe"}

def validate(output, policy_terms, target_terms):
    items = [ln for ln in output.splitlines() if ln.strip()]
    tests = [len(output.split()) <= 80, len(items) == 4]
    for line in items:
        words = re.findall(r"[A-Za-z']+", line)
        # Each bullet must open with an action verb and stay within 18 words.
        tests.append(bool(words) and words[0].lower() in ACTION_VERBS)
        tests.append(len(words) <= 18)
        # Keyword proxy for "names the policy and its target group".
        tests.append(any(t.lower() in line.lower() for t in policy_terms))
        tests.append(any(t.lower() in line.lower() for t in target_terms))
    return all(tests)

This is the literal difference between asking for "a summary" and getting reliably repeatable summaries.


Common pitfalls & how to avoid them

  • Pitfall: Acceptance criteria that are subjective ("make it compelling").
    • Fix: Replace with measurable proxies ("include 1 statistic and 1 quote").
  • Pitfall: Too many criteria — paralyzing the model.
    • Fix: Identify top 3 must-haves; tuck extras into a "nice-to-have" list.
  • Pitfall: Conflicting criteria (e.g., "be thorough" + "≤50 words").
    • Fix: Resolve the conflict or turn one into a priority with rankings.

Closing: Tiny ritual to adopt when crafting prompts

Before you hit send, ask yourself three quick questions:

  1. Can I check this automatically? (If not, add measurable criteria.)
  2. Which 3 things MUST be true for me to accept this output?
  3. Which 1-2 things are "nice-to-have" and can be optional?

Do this every time and watch your prompt revision count drop like it finally found a comfortable chair.

Final thought: Acceptance criteria are honesty in action. They force you to translate fuzzy hopes into concrete instructions the model — and your teammates — can actually work with.


Summary — your shorthand:

  • State explicit, measurable acceptance criteria.
  • Use format + content + quality constraints.
  • Automate checks where possible.
  • Keep it concise: prioritize must-haves, separate nice-to-haves.

Go forth and make "done" mean something real.
