Core Principles of Prompt Engineering
Adopt guiding principles—clarity, specificity, grounding, and iteration—to consistently steer models toward desired outcomes.
Specificity and Constraints — Make Your Prompt a Laser, Not a Room Full of Lasers
Want reliable output from an LLM? Stop being mysterious. Be a drill sergeant with kindness.
You already learned about Clarity Over Cleverness (Position 1), where being precise beats being poetic, and how models are fickle: sensitive to phrasing, non-deterministic, and suspiciously confident when wrong (see "When Models Say 'I Don't Know'" and "Domain Transfer and Generalization"). This lesson builds on that: specificity and constraints are your main tools for turning noisy model behavior into predictable, useful results.
Why specificity matters (and why constraints are your friend)
- Specificity tells the model exactly what you want. Less guesswork = less hallucination, less unexpected style, fewer 'I meant to do that' outputs.
- Constraints limit the model's freedom: format, length, style, forbidden content, required fields, allowed sources. Constraints make evaluation easier and outputs more automatable.
Think of an LLM like a brilliant improv actor who gets stage fright if you only say 'play a scene'. If you say 'play a 30-second courtroom monologue in plain language, with a one-sentence summary at the end', they deliver something you can grade.
Types of specificity and constraints (the toolbelt)
- Task specificity: What is the exact action? (summarize, translate, classify, extract)
- Output format constraints: JSON, CSV, bullet list, strict template
- Content constraints: word limits, forbidden terms, mandatory fields
- Style constraints: tone, reading level, persona
- Process constraints: step-by-step reasoning, chain-of-thought, or no internal reasoning
- Domain constraints: stay within this domain or cite sources when crossing domains
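The toolbelt above can be sketched as a small prompt builder that assembles explicit, labeled sections. This is a minimal illustration, not a library API — every function and parameter name here is hypothetical:

```python
# Sketch of a prompt builder covering the constraint types above.
# All names are illustrative; no specific framework is assumed.

def build_prompt(task, output_format, content_rules=None, style=None, content=""):
    """Assemble a prompt from explicit, labeled sections."""
    sections = [f"Task: {task}", f"Output: {output_format}"]
    if content_rules:  # content constraints: limits, forbidden terms, required fields
        sections.append("Constraints: " + "; ".join(content_rules))
    if style:  # style constraints: tone, reading level, persona
        sections.append(f"Tone: {style}")
    # delimiters keep user-supplied content separate from instructions
    sections.append(f"<START>\n{content}\n<END>")
    return "\n".join(sections)

prompt = build_prompt(
    task="Summarize the article between <START> and <END>.",
    output_format="JSON with keys: title, bullets (max 3), key_insight",
    content_rules=["max 150 words", "no opinions"],
    style="neutral, professional",
    content="[article text here]",
)
```

Keeping each constraint type in its own labeled section makes the prompt easy to audit and to tighten or loosen one knob at a time.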
Quick matrix (when to use what)
| Goal | Use specificity | Use constraints |
|---|---|---|
| Automatable output | High | Strict format (JSON) |
| Creative copy | Medium | Style + length constraints |
| Safety-critical tasks | Very high | Content/ethical constraints + verification |
| Transfer to new domain | High | Domain constraints + grounding examples |
Examples: vague vs specific prompts (yes, the difference is dramatic)
Bad (vague)
Write a summary of this article.
Result: an existential essay about articles and maybe a haiku. Not helpful.
Good (specific + constraints)
Task: Summarize the article text provided between <START> and <END>.
Output: JSON with keys: title (string), bullets (3 items max, concise), key_insight (one sentence), length_words (integer).
Tone: neutral, professional.
Max tokens: 150.
<START>
[article text here]
<END>
Result: machine-parseable JSON you can plug into an app. Joy.
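Once the model returns structured JSON, validate it before plugging it into your app. A minimal sketch using only the standard library; the key names mirror the schema requested in the prompt above:

```python
import json

# Keys requested in the prompt's Output section above.
REQUIRED_KEYS = {"title", "bullets", "key_insight", "length_words"}

def validate_summary(raw: str) -> dict:
    """Parse model output and check it matches the requested schema."""
    data = json.loads(raw)  # raises json.JSONDecodeError (a ValueError) if not JSON
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if len(data["bullets"]) > 3:  # enforce the "3 items max" constraint
        raise ValueError("too many bullets")
    return data

sample = '{"title": "T", "bullets": ["a", "b"], "key_insight": "i", "length_words": 42}'
result = validate_summary(sample)
```

A failed parse or a missing key is your signal to retry with a stricter prompt rather than silently accepting malformed output.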
Prompting patterns that enforce constraints
- Use explicit headings in the prompt: Task, Output, Format, Examples.
- Provide a template: models love to copy examples. If you want JSON, show JSON.
- Use delimiters for user-supplied content (e.g., <START> ... <END>) to avoid prompt bleeding.
- Use negative constraints: "Do not include X" or "Avoid stating opinions".
- Include a short verification step: "If you cannot answer, reply with 'I DON'T KNOW' and nothing else." (Relevant to Position 15.)
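The delimiter pattern can be enforced in code before the prompt is ever sent. A tiny sketch, assuming the same <START>/<END> delimiters used throughout this lesson:

```python
def wrap_user_content(text: str, start: str = "<START>", end: str = "<END>") -> str:
    """Fence untrusted user text so instructions and data stay separate.

    If the text itself contains the delimiters (accidental or malicious),
    strip them first so the fence cannot be broken out of.
    """
    safe = text.replace(start, "").replace(end, "")
    return f"{start}\n{safe}\n{end}"
```

Stripping embedded delimiters is a blunt but effective guard: the model only ever sees one opening and one closing fence per block of user content.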
Example: template + guardrail
System: You are an assistant that returns ONLY valid JSON matching the schema.
User: Convert the text below to the schema.
Schema: {"name": "string", "summary": "string (<=40 words)", "topics": ["string"]}
Text: <START> ... <END>
If you cannot fill a field, use null. If you cannot complete, output: {"error": "I DON'T KNOW"}.
This folds in the lesson about model confidence: force a safe 'I DON’T KNOW' behavior instead of hallucinations.
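On the receiving side, the guardrail only pays off if your code actually checks for the error sentinel. A minimal sketch of handling the template above:

```python
import json

def parse_guarded(raw: str):
    """Parse a guarded response; return (data, ok).

    The model was instructed to output {"error": "I DON'T KNOW"}
    when it cannot complete, so treat that as a safe refusal,
    not as usable data.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None, False  # invalid JSON: treat like a failure, maybe retry
    if data.get("error") == "I DON'T KNOW":
        return None, False  # safe refusal: escalate or fall back
    return data, True
```

Treating the refusal and the parse failure identically downstream keeps the "don't guess" behavior from leaking hallucinated fields into your pipeline.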
Trade-offs and pitfalls
- Over-constraining: If you require excessive detail (e.g., exact phrasing for every key), you may stifle flexible, useful output or push the model to output invalid JSON. Start strict; loosen iteratively.
- Under-specifying: The model fills in blanks with guesses. If you care about provenance or safety, under-specifying is basically an invitation to hallucinate.
- Ambiguous constraints: "Short" vs "brief"? Define numbers. "Formal" vs "academic"? Give examples.
- Non-determinism: Even with specificity, outputs can vary. Use temperature, seed, or reranking to control variance.
Debugging prompts: a mini-checklist
- Can a human follow this prompt and produce the desired output in one pass? If no, clarify.
- Have you provided an explicit format or template? If not, add one.
- Did you include negative constraints for harmful or irrelevant output? If not, add them.
- Is the prompt brittle to small rewording? If yes, reduce ambiguity.
- Test with edge cases and out-of-domain inputs (recall domain transfer concerns from Position 14).
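The brittleness check in the list above can be automated: run several paraphrases of the same prompt and compare outputs. This is a sketch only — `call_model` is a stub standing in for whatever LLM client you use:

```python
# Probe prompt brittleness by comparing outputs across paraphrases.
# `call_model` is a placeholder; swap in your real LLM client call.

def call_model(prompt: str) -> str:
    # Stub: a real implementation would send `prompt` to your model
    # and return its text output.
    return "summary"

paraphrases = [
    "Summarize the text in 30 words.",
    "Give a 30-word summary of the text.",
    "In at most 30 words, summarize the text.",
]
outputs = {call_model(p) for p in paraphrases}
# A brittle prompt yields wildly different outputs across rewordings;
# ideally this set stays small (here the stub makes it exactly one).
```

With a real model you would compare outputs more loosely (e.g., by length or key fields) rather than exact string equality, since non-determinism guarantees some variation.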
Rapid recipes (copy-paste friendly)
- JSON output template:
Task: <what>
Output: JSON only. Schema: {"field1": "string", "list": ["string"]}
Examples:
{"field1": "Example", "list": ["a","b"]}
Content:
<START>
...
<END>
- Safety-first reply:
If unsure, respond: 'I DON'T KNOW'. Do not guess. Provide sources if available.
- Batching multiple constraints:
Provide: (1) 30-word summary; (2) 3 headline options; (3) one tweetable sentence. Do not exceed 40 words for any output part.
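Length constraints like the "do not exceed 40 words" rule above are easy to verify post-hoc. A one-function sketch (the simple whitespace split is an approximation of word counting):

```python
def within_word_limit(text: str, limit: int = 40) -> bool:
    """Post-hoc check for the word-limit constraints in the recipes above.

    Splitting on whitespace is a rough word count, but it is enough
    to gate an automated pipeline before accepting model output.
    """
    return len(text.split()) <= limit
```

Run this on each output part; if any part fails, retry with the limit restated more prominently in the prompt.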
Closing: TL;DR with teeth
- Be specific about the task, output format, and constraints. Specificity reduces hallucination and increases automatability.
- Use templates and examples — the model copies patterns; this is a feature, not cheating.
- Balance: start with strict constraints, relax as you iterate. Measure variance and set randomness accordingly.
- Fail-safe: instruct the model to say 'I DON’T KNOW' when appropriate — leverage prior lessons on model confidence.
Final chef's kiss: a good prompt is like a well-written grocery list. You don’t ask for "food". You ask for "two ripe avocados, diced; 1 small red onion, minced; 1 lime, juiced" — and then you get guacamole, not a confused trip to the farmer's market.
Version note: this lesson builds directly from 'Clarity Over Cleverness' and the behavior lessons on non-determinism, domain transfer, and safe refusal. Apply these specificity patterns, iterate, and watch chaos become workflows.