Writing Clear, Actionable Instructions
Craft precise directives with scope, constraints, and acceptance criteria that remove ambiguity and reduce rework.
State Acceptance Criteria — because "close enough" is not a grade
"If you can't say how to fail, you can't say how to succeed." — a brutally honest prompt engineer (probably me)
You're already working with the good stuff: strong action verbs (we're telling the model to DO, not vaguely 'help') and scope & boundaries (we've put the playground fences up). Now, we need the referee: Acceptance Criteria — the explicit, testable rules that define what 'done' looks like.
Why this matters:
- It turns fuzzy intentions into measurable outcomes.
- It saves time by preventing endless revisions.
- It makes iteration precise: you can say "this output failed criterion 2" instead of "it’s not quite right."
This builds directly on the Core Principles — clarity, specificity, grounding, and iteration — by converting intent into objective checks you can validate quickly.
What are Acceptance Criteria (and no, they’re not optional)
Acceptance criteria are short, explicit statements that an output must satisfy to be considered correct. They are the pass/fail tests for your prompt.
- Clarity: Each criterion should be unambiguous.
- Specificity: Prefer measurable or verifiable conditions over impressions.
- Grounding: Use concrete examples, formats, or constraints the model can follow.
- Iterative-friendly: Easy to tweak and version when an outcome misses the mark.
Think of them as the checklist a QA tester uses — but for outputs from a neural network.
How to write acceptance criteria (step-by-step)
- Start from the goal (what you want the user to get).
- Translate goals into observable and measurable statements.
- Use formatting constraints (JSON, bullet list, 3 bullets, ≤ 150 words) whenever possible.
- Include content constraints (must mention X, avoid Y, cite sources, use present tense).
- Add quality constraints (tone: neutral, reading level: grade 8, accuracy: cite sources for facts).
- Prioritize: list must-pass items first; nice-to-have items last.
Quick template
Output must: (1) [format]; (2) [content requirements]; (3) [quality/tone]; (4) [length/time constraints].
Example: "Output must be a 3-bullet summary (format); include the three main causes (content); be neutral in tone and cite sources (quality); ≤120 words (length)."
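Two of the template's four slots — format and length — can be checked mechanically. A minimal sketch (the bullet-detection heuristic and the default caps are assumptions you'd tune to your own template):

```python
def meets_template(output, n_bullets=3, max_words=120):
    """Check the mechanically verifiable slots of the template:
    format (bullet count) and length (word cap). The content and
    quality slots still need a human or model reviewer."""
    # Treat any line starting with "-" or "*" as a bullet (a naive heuristic)
    bullets = [l for l in output.splitlines() if l.strip().startswith(("-", "*"))]
    return len(bullets) == n_bullets and len(output.split()) <= max_words
```

A paragraph with no bullets fails the format slot immediately, before anyone reads a word of it.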
Examples: From vague to actionable
Bad prompt fragment (no acceptance criteria)
Summarize the article on climate policy.
Result: The model might produce 1 paragraph, 5 paragraphs, a tweet, or a rant about weather.
Good prompt + acceptance criteria
Summarize the provided article on climate policy.
Acceptance criteria:
- Output must be exactly 3 bullets.
- Each bullet ≤ 20 words.
- Must explicitly state the main policy proposed, the target population, and the expected timeline.
- No opinions or additional commentary.
- Include up to 2 inline citations in parentheses (e.g., (Smith, 2020)).
Result: You get a standardized, verifiable output every time.
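Because each criterion above is explicit, it maps directly to a check. Here is a sketch that reports pass/fail per criterion, so you can say "criterion 2 failed" instead of "it's not quite right" (the citation regex assumes the (Author, Year) convention from the example):

```python
import re

def check_summary(output):
    """Validate the climate-policy summary criteria above (a sketch)."""
    # Treat each non-empty line as one bullet, stripping the list marker
    bullets = [l.lstrip("-* ").strip() for l in output.splitlines() if l.strip()]
    return {
        "exactly 3 bullets": len(bullets) == 3,
        "each bullet <= 20 words": all(len(b.split()) <= 20 for b in bullets),
        # (Author, Year) style citations; the pattern is an assumed convention
        "at most 2 citations": len(re.findall(r"\([A-Z][a-zA-Z]+, \d{4}\)", output)) <= 2,
    }
```

The "main policy / target population / timeline" criterion is harder to automate and usually falls back to keyword checks or a reviewer.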
A table: Vague vs. Proper acceptance criteria
| Vague expectation | Good acceptance criterion |
|---|---|
| "Be concise" | "≤ 120 words, max 3 sentences per section" |
| "Make it friendly" | "Tone: conversational; use 1st or 2nd person; avoid slang" |
| "Cite sources" | "Cite at least 2 sources with URLs or (Author, Year) format" |
| "Give options" | "Provide 4 distinct options, each with 1-sentence pros/cons" |
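The point of the right-hand column is that each entry is executable. Taking the "Be concise" row as an example, a minimal proxy check might look like this (the sentence splitter is deliberately naive, an assumption good enough for a quick gate):

```python
import re

def is_concise(section_text, max_words=120, max_sentences=3):
    """Mechanical proxy for "be concise": the word and sentence caps from the table."""
    words = len(section_text.split())
    # Naive sentence split on ., !, ? (good enough for a quick gate)
    sentences = [s for s in re.split(r"[.!?]+", section_text) if s.strip()]
    return words <= max_words and len(sentences) <= max_sentences
```

"Be concise" can be argued about; `is_concise` returning False cannot.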
Mini-checklist: Types of acceptance criteria (use at least 2–3)
- Format: (JSON, markdown, bullet list, table)
- Structure: (intro, 3 items, conclusion)
- Content inclusion: (must mention X, must not mention Y)
- Style/tone: (formal, neutral, humorous)
- Length limits: (word count, sentence count)
- Accuracy: (provide sources, date ranges, no hallucinated facts)
- Safety: (avoid medical/legal advice; include disclaimers)
- Verifiability: (use numbers/dates/references)
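Several of these types reduce to one-line checks, so you can keep a small library of them and mix 2–3 per prompt. A sketch under stated assumptions: the "climate" and "guarantee" keywords are placeholder examples of content inclusion/exclusion, not fixed rules:

```python
import json
import re

def _is_json(out):
    """Format check: does the output parse as JSON?"""
    try:
        json.loads(out)
        return True
    except ValueError:
        return False

# One mechanical check per criterion type; keys mirror the checklist above.
CHECKS = {
    "format_json":   _is_json,
    "length":        lambda out: len(out.split()) <= 120,
    "inclusion":     lambda out: "climate" in out.lower(),        # must mention X (placeholder)
    "exclusion":     lambda out: "guarantee" not in out.lower(),  # must not mention Y (placeholder)
    "verifiability": lambda out: re.search(r"\b\d{4}\b", out) is not None,  # contains a year
}

def run_checks(output, names):
    return {name: CHECKS[name](output) for name in names}
```

Safety and accuracy checks are the exception: they usually need a second model pass or a human review rather than a regex.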
Example: Full prompt that uses all our learned moves
You are a concise explainer. Summarize the attached policy memo.
Scope: Focus only on the policy recommendations (do not summarize background statistics).
Action: Provide a 4-bullet list (strong action verbs at work: Provide, List, State).
Acceptance criteria:
- Output must be a Markdown bullet list of 4 items.
- Each item must start with an action verb and be ≤ 18 words.
- Each item must include the policy name and the intended target group.
- Do not include analysis, evaluation, or implementation steps.
- Total output length must be ≤ 80 words.
See how scope, verbs, and acceptance criteria compose a crisp, testable instruction? That’s prompt engineering synergy.
Automating checks (a tiny validator you can copy)
Here's a small Python check you can run on outputs to automatically flag failures. The verb list and the `policy`/`target` strings are placeholders; adapt them to your prompt:

```python
ACTION_VERBS = {"Provide", "List", "State", "Summarize", "Identify"}  # placeholder set

def validate(output, policy, target):
    # Treat each non-empty line as one bullet, stripping the list marker
    bullets = [l.lstrip("-* ").strip() for l in output.splitlines() if l.strip()]
    tests = [len(output.split()) <= 80,  # total length <= 80 words
             len(bullets) == 4]          # exactly 4 items
    for line in bullets:
        words = line.split()
        tests.append(words[0] in ACTION_VERBS)           # starts with an action verb
        tests.append(len(words) <= 18)                   # <= 18 words per item
        tests.append(policy in line and target in line)  # names policy and target group
    return all(tests)
```
This is the literal difference between asking for "a summary" and getting reliably repeatable summaries.
Common pitfalls & how to avoid them
- Pitfall: Acceptance criteria that are subjective ("make it compelling").
- Fix: Replace with measurable proxies ("include 1 statistic and 1 quote").
- Pitfall: Too many criteria — paralyzing the model.
- Fix: Identify top 3 must-haves; tuck extras into a "nice-to-have" list.
- Pitfall: Conflicting criteria (e.g., "be thorough" + "≤50 words").
- Fix: Resolve the conflict or turn one into a priority with rankings.
Closing: Tiny ritual to adopt when crafting prompts
Before you hit send, ask yourself three quick questions:
- Can I check this automatically? (If not, add measurable criteria.)
- Which 3 things MUST be true for me to accept this output?
- Which 1-2 things are "nice-to-have" and can be optional?
Do this every time and watch your prompt revision count drop like it finally found a comfortable chair.
Final thought: Acceptance criteria are honesty in action. They force you to translate fuzzy hopes into concrete instructions the model — and your teammates — can actually work with.
Summary — your shorthand:
- State explicit, measurable acceptance criteria.
- Use format + content + quality constraints.
- Automate checks where possible.
- Keep it concise: prioritize must-haves, separate nice-to-haves.
Go forth and make "done" mean something real.