Reasoning and Decomposition Techniques
Elicit better thinking with outline-first strategies, hypothesis testing, and verification-first prompting.
Self-Ask and Subquestioning
Self-Ask and Subquestioning — The Tiny Interrogator That Solves Big Problems
"If you can't ask the right small questions, you won't get the right big answers." — Probably me, three minutes ago, wearing a cape.
You're already familiar with: scratchpads and notes fields (where the model keeps its messy thinking) and rationale-lite (short, useful justifications instead of a novella of chain-of-thought). Now we move on to the surgical tool in that toolkit: Self-Ask and Subquestioning — a disciplined way to have the model interrogate itself, break problems into bite-sized queries, and stitch a reliable, scorable output back together in a structured schema (yes — tying into Structuring Outputs and Formats).
What is Self-Ask (quick definition)
Self-Ask is a decomposition strategy where a model explicitly generates subquestions about a main task, answers each subquestion (often using notes/scratchpad), and then composes those answers into a final response that matches a specified output schema. Think of it as turning a messy task into a to-do list of tiny checks, each one easy to verify.
Why it matters: It improves accuracy, makes hallucinations easier to catch, and produces modular outputs that are easier to parse, score, and reuse downstream — which is exactly what we care about after learning how to enforce output schemas.
How it fits with what you've learned
- Scratchpad/Notes Field: Use it as the workspace for the model’s subquestions and intermediate answers. Keep this separate from the final answer.
- Rationale-Lite: Keep each answer or justification short and factual. No epic monologues — just the facts needed to trust the step.
- Structuring Outputs and Formats: Define the schema for the final composed answer from the start, and instruct the model to fill that schema using the verified subanswers.
A step-by-step recipe (scannable)
- Define the task and the final output schema you want (JSON, table, bullet list, etc.).
- Ask the model to generate subquestions that, if answered, would fully solve the task. Keep them atomic and verifiable.
- Have the model answer each subquestion in the notes/scratchpad with rationale-lite justification and sources if needed.
- Run a short self-check pass: verify completeness, check contradictions, and highlight uncertain items.
- Compose the final output strictly according to the schema, using only verified subanswers.
- Include a short confidence score and a trace linking final fields to subquestion IDs (for auditability).
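The recipe above can be sketched as a small orchestration loop. This is a minimal illustration, not a production pipeline: `call_model` is a hypothetical stand-in for your actual LLM client (stubbed here), and the "verification pass" is reduced to a trivial length check so the sketch stays self-contained.

```python
# A minimal sketch of the Self-Ask recipe, with the model call stubbed out.
def call_model(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    return "stubbed answer for: " + prompt

def self_ask(task: str, subquestions: list[str]) -> dict:
    """Answer each subquestion into a notes scratchpad, run a (toy)
    verification pass, then compose a final answer with a trace that
    links back to subquestion IDs."""
    notes = {}
    for i, subq in enumerate(subquestions, start=1):
        notes[i] = call_model(f"Answer briefly (rationale-lite): {subq}")
    # Verification pass (toy): flag empty or suspiciously short answers.
    uncertain = [i for i, a in notes.items() if len(a) < 10]
    return {
        "answer": call_model(f"Compose a final answer to '{task}' from: {notes}"),
        "trace": {"answer": list(notes)},
        "uncertain": uncertain,
    }
```

Note the trace is built from the same IDs used in the notes dict, which is what makes the final answer auditable.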
Prompt pattern (practical template)
Use this as a base for your prompts. Replace the placeholders.
Task: [Describe the main task]
Output schema: [Describe JSON/table format to be returned]
Instructions:
1) Generate a sequential list of subquestions needed to solve the Task. Number them.
2) For each subquestion, provide a short answer (2-3 sentences) in the Notes section, with any source or calculation.
3) After all subanswers, run a verification step: list items you are uncertain about.
4) Compose the FinalAnswer strictly matching the Output schema, and include mapping from schema fields to subquestion numbers.
Notes:
- Keep justifications concise (rationale-lite).
- Use the Notes field for working; FinalAnswer must be clean.
Begin.
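Step 1 of the template asks the model to number its subquestions, so downstream code needs to parse that numbered list out of the raw completion. A small sketch, assuming the model numbers lines as `1)` or `1.` (the exact numbering style is an assumption you should pin down in your own prompt):

```python
import re

def parse_subquestions(model_output: str) -> dict[int, str]:
    """Extract a numbered subquestion list (e.g. '1) What is X?'
    or '2. Compare Y and Z.') into an {id: question} mapping."""
    pattern = re.compile(r"^\s*(\d+)[.)]\s+(.*)$")
    subqs = {}
    for line in model_output.splitlines():
        m = pattern.match(line)
        if m:
            subqs[int(m.group(1))] = m.group(2).strip()
    return subqs
```

Keeping the IDs from this step is what lets you build the field-to-subquestion mapping required in step 4.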
Example: Research + Summarize (full example)
Task: Summarize current battery types for consumer EVs and recommend one for a commuter car.
Output schema: { "recommended_type": string, "pros": [string], "cons": [string], "confidence": number (0-1), "trace": {"recommended_type": [subqIDs], ...} }
Model flow (abridged):
- Subq 1: What are the main battery chemistries used in consumer EVs?
- Notes 1: Li-ion NMC, LFP, NCA; short source citations.
- Subq 2: Compare energy density, cost, lifecycle for commuter use.
- Notes 2: Table-like bullets with numbers.
- Subq 3: What are the safety and charging differences?
- Notes 3: short facts.
- Verify: Uncertain about availability of LFP in 2026 for some markets.
- FinalAnswer: Fill schema using Notes 1-3 and mark trace links.
FinalAnswer (example):
{
"recommended_type": "LFP",
"pros": ["Lower cost", "Long cycle life", "Safer chemistry"],
"cons": ["Lower energy density", "May reduce range"],
"confidence": 0.75,
"trace": {"recommended_type": [1,2,3]}
}
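A FinalAnswer like the one above is only useful downstream if it actually parses and its trace points at real subquestions. A minimal validator, assuming the field names from the example schema (adapt them to your own):

```python
import json

def validate_final_answer(raw: str, known_subq_ids: set) -> dict:
    """Parse and sanity-check a FinalAnswer: required fields present,
    confidence in range, and every trace ID refers to a real subquestion."""
    data = json.loads(raw)
    required = {"recommended_type", "pros", "cons", "confidence", "trace"}
    missing = required - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if not 0 <= data["confidence"] <= 1:
        raise ValueError("confidence must be between 0 and 1")
    for field_name, ids in data["trace"].items():
        unknown = set(ids) - known_subq_ids
        if unknown:
            raise ValueError(f"{field_name} traces to unknown subquestions {sorted(unknown)}")
    return data
```

Rejecting trace IDs that never appeared as subquestions is a cheap hallucination check: the model cannot claim evidence it never gathered.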
Quick comparison: Self-Ask vs Scratchpad vs Rationale-Lite
| Technique | Purpose | When to use |
|---|---|---|
| Self-Ask / Subquestioning | Break tasks into verifiable subquestions | Complex multi-step tasks, research, multi-field outputs |
| Scratchpad / Notes | Workspace for messy reasoning | Always useful as intermediate storage, especially with Self-Ask |
| Rationale-Lite | Short justifications for trust and debugging | When you need interpretability but not full chain-of-thought |
Practical tips & pitfalls
- Keep subquestions atomic: "What is X?" not "How good is X and why?" Split if necessary.
- Limit recursion depth: avoid infinite decomposition. Set a max of 4–6 subquestions per branch.
- Enforce schema discipline: tell the model to only use verified notes for final fields.
- Watch for redundancy: the model may ask duplicate subquestions; deduplicate automatically or instruct it to check existing subquestions first.
- Use self-checks: ask the model to rate confidence per subanswer and list sources.
- Beware of overfitting: if your subquestions leak future info, the model might bake in assumptions — keep them factual.
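The "deduplicate automatically" tip above can be as simple as normalizing each subquestion before comparing. A sketch using case-folding and whitespace collapse as the (admittedly crude) notion of "duplicate":

```python
def dedupe_subquestions(subqs: list[str]) -> list[str]:
    """Drop duplicate subquestions after light normalization
    (case-folding, whitespace collapse, trailing '?' removal);
    keeps the first occurrence of each."""
    seen = set()
    out = []
    for q in subqs:
        key = " ".join(q.lower().split()).rstrip("?")
        if key not in seen:
            seen.add(key)
            out.append(q)
    return out
```

For fuzzier duplicates ("What is LFP?" vs "Explain LFP chemistry"), you would need embedding similarity rather than string normalization; this sketch only catches near-verbatim repeats.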
Evaluation checklist (for prompts and outputs)
- Completeness: Do the subquestions cover all schema fields?
- Correctness: Can each final field be traced to a subquestion with evidence?
- Conciseness: Rationale-lite justifications only.
- Parsability: Final output strictly conforms to the schema.
- Transparency: Are uncertainties and sources exposed in Notes?
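Parts of this checklist can be automated once the final answer and notes are in hand. A sketch covering the mechanically checkable items (completeness, trace correctness, parsability); conciseness and transparency still need a human or a judge model:

```python
def evaluate_output(final: dict, schema_fields: list[str], notes: dict) -> dict:
    """Automate the mechanical parts of the evaluation checklist."""
    report = {}
    # Completeness: every schema field appears in the final output.
    report["completeness"] = all(f in final for f in schema_fields)
    # Correctness (partial): every traced subquestion ID has a note behind it.
    traced = {i for ids in final.get("trace", {}).values() for i in ids}
    report["correctness"] = traced <= set(notes)
    # Parsability: no stray fields beyond the schema plus trace/confidence.
    allowed = set(schema_fields) | {"trace", "confidence"}
    report["parsability"] = set(final) <= allowed
    return report
```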
Closing mic-drop
Self-Ask is the practice of making the model its own tiny detective: ask neat, bite-sized questions, answer them with short, evidence-backed notes, and then feed only verified facts into a structured final answer. It's the sweet spot between freeform chain-of-thought (too messy) and terse, unexplainable outputs (too black-box). Use it with a scratchpad for thinking, rationale-lite for trust, and rigid output schemas for downstream consumption, and you'll get results that are accurate, debuggable, and actually useful.
Now go decompose something glorious. What tiny question will you ask first?