Writing Clear, Actionable Instructions
Craft precise directives with scope, constraints, and acceptance criteria that remove ambiguity and reduce rework.
Include Constraints and Limits — The Prompt’s Seatbelts
"Constraints are not prison bars; they're the lanes on the highway that keep your output from joyriding into nonsense."
You already nailed Define Scope and Boundaries and State Acceptance Criteria — 🎯 now we add the guardrails that keep LLMs honest and useful: constraints and limits. If scope says what we're doing and acceptance criteria says how we’ll judge success, constraints tell the model how to do it — the little rules that prevent creative chaos.
Why constraints matter (and fast)
- Models have freedom. Freedom is great for art, terrible for reproducible tasks.
- Constraints reduce ambiguity, limit hallucination surfaces, and produce outputs you can parse, validate, or drop straight into a pipeline.
- They operationalize the guiding principles from "Core Principles of Prompt Engineering": clarity and specificity in action.
Think of scope as the map, acceptance criteria as the destination, and constraints as the road signs.
Types of useful constraints (with real-world analogies)
| Constraint Type | What it does | Analogy |
|---|---|---|
| Length / token limit | Caps verbosity (e.g., ≤ 150 words) | Bite-sized snack vs buffet |
| Output format | Forces JSON, CSV, Markdown | A recipe rather than improv jazz |
| Style / tone | Business, playful, somber | Dress code for the text |
| Content restrictions | No personal data, no legal advice | "No peanut allergy" on the menu |
| Enumerative constraints | Exactly N items, numbered | "Top 5" list requirement |
| Time-window / source constraints | Only cite post-2020 sources | "Use only fresh produce" |
| Confidence / fallback behavior | If >60% uncertain, say "I don't know" | Choose honesty over guessing |
| Forbidden patterns | No HTML, no external links | Bouncers at the club entrance |
How to write constraints that actually work — patterns that win
- Be explicit and machine-friendly. Instead of "Keep it short," say "Maximum 120 words."
- Use a strict output format. JSON or CSV is your friend for deterministic parsing. Example: ask the model to return a JSON object with named keys and types.
- Combine negative and positive constraints. Tell the model what to do and what not to do.
- Add fallback instructions. If the model can’t meet acceptance criteria, tell it how to respond (e.g., provide partial results and a reason).
- State priority order. When constraints conflict, define which rule wins.
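The patterns above can be folded into a small prompt builder. A minimal Python sketch (the `build_prompt` helper and its wording are illustrative, not a standard API):

```python
# Sketch: assembling a prompt from explicit, machine-friendly constraints.
# Constraints are numbered in priority order so the model knows which rule wins.

def build_prompt(task: str, constraints: list[str], fallback: str) -> str:
    """Combine a task, prioritized constraints, and a fallback into one prompt."""
    lines = [f"Task: {task}", "Constraints (highest priority first):"]
    lines += [f"{i}. {c}" for i, c in enumerate(constraints, start=1)]
    lines.append(f"If you cannot satisfy a constraint: {fallback}")
    return "\n".join(lines)

prompt = build_prompt(
    task="Summarize the environmental benefits of electric cars.",
    constraints=[
        "Maximum 120 words.",
        "Exactly 3 bullet points, each no more than 30 words.",
        "Neutral business tone; no brand names.",
    ],
    fallback="return partial results and state which constraint failed.",
)
print(prompt)
```

Keeping constraints in a plain list like this also makes them easy to reuse across tasks.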
Before / After: Constraint makeover
Bad prompt (vague):
Write a summary of electric cars.
Good prompt (constrained):
Summarize the environmental benefits of electric cars in **≤ 120 words**, in **3 bullet points**, each no more than **30 words**. Use **neutral business tone**. Do **not** include sales language or brand names. If uncertain about a claim, end that bullet with "(uncertain)".
Why the good one wins: It's measurable (120 words), structured (3 bullets), style-limited (neutral), and includes a fallback for uncertainty.
Example: Enforcing a JSON schema (workhorse pattern)
Ask for this exact output — machines love exactness.
Task: Generate 4 recommended titles for a how-to article about time management for students.
Constraints:
- Return a JSON object with a "titles" key containing exactly 4 string elements.
- Each title must be ≤ 10 words and use title case.
- No emojis or punctuation at the end.
- Do not include explanations.
Expected output example:
{"titles": ["Title One", "Title Two", "Title Three", "Title Four"]}
This makes parsing trivial and reduces hallucinated commentary.
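On the consuming side, a schema like this is cheap to enforce. A rough Python validator (the checks mirror the constraints above; `string.capwords` is only an approximate title-case test):

```python
import json
import string

def validate_titles(raw: str) -> list[str]:
    """Check the model's output against the stated constraints; return violations."""
    data = json.loads(raw)                      # must be valid JSON
    titles = data["titles"]                     # must contain a "titles" key
    problems = []
    if len(titles) != 4:
        problems.append("expected exactly 4 titles")
    for t in titles:
        if len(t.split()) > 10:
            problems.append(f"too long: {t!r}")
        if t != string.capwords(t):             # rough title-case check
            problems.append(f"not title case: {t!r}")
        if t and t[-1] in ".!?,;:":
            problems.append(f"trailing punctuation: {t!r}")
    return problems

raw = ('{"titles": ["Master Your Minutes", "Plan The Week Ahead", '
       '"Beat Procrastination Early", "Study Smarter Not Longer"]}')
print(validate_titles(raw))   # [] means the output is compliant
```

An empty list means you can accept the output automatically; anything else can trigger a retry with the violations pasted back into the next prompt.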
Common pitfalls and how to dodge them
- Over-constraining: Requiring too many rigid rules can make the model fail. If the model can't comply, it may invent data or truncate. Fix: prioritize constraints and allow graceful degradation.
- Vague numeric constraints: "Short" vs "≤ 50 tokens." Always prefer explicit numbers.
- Implicit assumptions: Avoid leaving details only in your head (locale, date format, units). Write them down.
- No fallback: If the model can't meet a constraint, tell it what to output instead (e.g., partial JSON + error reason).
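The "no fallback" pitfall is easiest to dodge when your parser expects a declared failure mode. A sketch, assuming the prompt told the model to emit an `error` key when it declines (the key names here are illustrative):

```python
import json

def parse_with_fallback(raw: str) -> dict:
    """Return parsed data, a model-reported error, or a parse-failure marker."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"status": "unparseable", "raw": raw}
    if isinstance(data, dict) and "error" in data:
        return {"status": "model_declined", "reason": data["error"]}
    return {"status": "ok", "data": data}

print(parse_with_fallback('{"error": "INSUFFICIENT_DATA"}'))
```

The point is that a declared failure is a normal branch in your code, not an exception, so the pipeline degrades gracefully instead of crashing on invented data.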
Constraint checklist — run this before you send the prompt
- Have I defined maximum/minimum length? (tokens, words, characters)
- Have I specified an exact output format (JSON, CSV, Markdown)?
- Have I constrained tone and jargon to an audience level? (e.g., "undergraduate-level English")
- Have I included forbidden items and allowed sources? (e.g., "no Wikipedia" or "only peer-reviewed 2018-2023")
- Did I include a fallback if the model is uncertain? (e.g., output partial + "INSUFFICIENT_DATA")
- Did I prioritize constraints if they might conflict?
Advanced tricks (for when you want power without chaos)
- Constraint cascading: First ask for a short summary (≤50 words). Then ask the model to expand each sentence into 2–3 bullets in a second step. This keeps initial scope narrow and verifiable.
- Self-check phase: Tell the model to validate its own output against constraints and append a boolean `compliant: true/false` and a `reasons` list.
- Constraint templates: Reuse templates for common tasks. For example, a `TEMPLATE_REPORT_V1` always returns `{"title": string, "summary": string, "compliance": {"format": bool, "length": bool}}` so downstream systems can auto-accept or flag outputs.
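A constraint template can double as a validation spec. A rough Python sketch, modeled on the `TEMPLATE_REPORT_V1` shape above (the recursive type check is illustrative, not a library API):

```python
import json

# Template: expected keys mapped to expected value types (nested dicts allowed).
TEMPLATE_REPORT_V1 = {
    "title": str,
    "summary": str,
    "compliance": {"format": bool, "length": bool},
}

def matches_template(data, template) -> bool:
    """Recursively check that data has exactly the template's keys and types."""
    if isinstance(template, dict):
        return (isinstance(data, dict)
                and set(data) == set(template)
                and all(matches_template(data[k], template[k]) for k in template))
    return isinstance(data, template)

raw = ('{"title": "Q3 Report", "summary": "Steady growth.", '
       '"compliance": {"format": true, "length": true}}')
print(matches_template(json.loads(raw), TEMPLATE_REPORT_V1))  # True
```

Outputs that match the template can be auto-accepted; mismatches get flagged for review, exactly as the bullet above describes.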
Example: Prompt + Self-Check (complete)
Task: Provide 3 action-oriented study tips for freshmen.
Constraints:
- Return JSON with keys: "tips" (array of 3 strings), "compliance" (object).
- Each tip ≤ 20 words.
- Tone: encouraging, not condescending.
- If any tip may be inaccurate, append "(uncertain)".
- Self-check: set "compliance": {"format": true/false, "length": true/false} and list reasons if false.
Expected output format example:
{
"tips": ["Tip one", "Tip two", "Tip three"],
"compliance": {"format": true, "length": true}
}
This gives you an output plus machine-readable validation.
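Because models sometimes mis-report their own compliance, it pays to re-check. A sketch, assuming the key names from the prompt above (the re-checking logic is illustrative):

```python
import json

def recheck(raw: str) -> dict:
    """Compare the model's self-reported compliance against our own checks."""
    data = json.loads(raw)
    tips = data.get("tips", [])
    our_check = {
        "format": isinstance(tips, list) and len(tips) == 3,
        "length": all(len(t.split()) <= 20 for t in tips),
    }
    return {"model_says": data.get("compliance"),
            "we_say": our_check,
            "agree": data.get("compliance") == our_check}

raw = ('{"tips": ["Review notes daily", "Form a study group", '
       '"Sleep before exams"], '
       '"compliance": {"format": true, "length": true}}')
print(recheck(raw)["agree"])  # True
```

When the two disagree, trust your own check and treat the output as non-compliant.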
Closing: A tiny philosophy on limits
Constraints are not punishment — they're clarity incarnate. They let models do what we need reliably, not just creatively. When paired with clearly stated scope and acceptance criteria (remember those neighbors from earlier?), constraints turn AI from a delightful wildcard into a predictable tool.
Try this small exercise: take one of your old prompts and add three concrete constraints (length, format, fallback). Run it. If the result is better, you've just leveled up in prompt engineering.
Key takeaway: Be explicit, be parsable, and be forgiving. Tell the model the lane it should drive in — and what to do if it can’t make the turn.