
Generative AI: Prompt Engineering Basics
Chapters

  1. Foundations of Generative AI
  2. LLM Behavior and Capabilities
  3. Core Principles of Prompt Engineering
  4. Writing Clear, Actionable Instructions
  5. Roles, Personas, and System Prompts
  6. Supplying Context and Grounding
  7. Examples: Zero-, One-, and Few-Shot
  8. Structuring Outputs and Formats
  9. Reasoning and Decomposition Techniques
     • Outline-Then-Detail Pattern
     • Scratchpad and Notes Fields
     • Rationale-Lite Approaches
     • Self-Ask and Subquestioning
     • Hypothesis Generation
     • Back-Solving Strategies
     • Plan-Then-Execute Split
     • Compare-and-Contrast Prompts
     • Constraint Propagation
     • Uncertainty and Confidence Cues
     • Verification Steps First
     • Sanity Checks and Estimation
     • Socratic Questioning Prompts
     • Eliminating Irrelevant Paths
     • Chain-of-Thought Considerations
  10. Iteration, Testing, and Prompt Debugging
  11. Evaluation, Metrics, and Quality Control
  12. Safety, Ethics, and Risk Mitigation
  13. Tools, Functions, and Agentic Workflows
  14. Retrieval-Augmented Generation (RAG)
  15. Multimodal and Advanced Prompt Patterns


Reasoning and Decomposition Techniques


Elicit better thinking with outline-first strategies, hypothesis testing, and verification-first prompting.


Self-Ask and Subquestioning



Self-Ask and Subquestioning — The Tiny Interrogator That Solves Big Problems

"If you can't ask the right small questions, you won't get the right big answers." — Probably me, three minutes ago, wearing a cape.

You're already familiar with: scratchpads and notes fields (where the model keeps its messy thinking) and rationale-lite (short, useful justifications instead of a novella of chain-of-thought). Now we move on to the surgical tool in that toolkit: Self-Ask and Subquestioning — a disciplined way to have the model interrogate itself, break problems into bite-sized queries, and stitch a reliable, scorable output back together in a structured schema (yes — tying into Structuring Outputs and Formats).


What is Self-Ask (quick definition)

Self-Ask is a decomposition strategy where a model explicitly generates subquestions about a main task, answers each subquestion (often using notes/scratchpad), and then composes those answers into a final response that matches a specified output schema. Think of it as turning a messy task into a to-do list of tiny checks, each one easy to verify.

Why it matters: It improves accuracy, makes hallucinations easier to catch, and produces modular outputs that are easier to parse, score, and reuse downstream — which is exactly what we care about after learning how to enforce output schemas.


How it fits with what you've learned

  • Scratchpad/Notes Field: Use it as the workspace for the model’s subquestions and intermediate answers. Keep this separate from the final answer.
  • Rationale-Lite: Keep each answer or justification short and factual. No epic monologues — just the facts needed to trust the step.
  • Structuring Outputs and Formats: Define the schema for the final composed answer from the start, and instruct the model to fill that schema using the verified subanswers.

A step-by-step recipe (scannable)

  1. Define the task and the final output schema you want (JSON, table, bullet list, etc.).
  2. Ask the model to generate subquestions that, if answered, would fully solve the task. Keep them atomic and verifiable.
  3. Have the model answer each subquestion in the notes/scratchpad with rationale-lite justification and sources if needed.
  4. Run a short self-check pass: verify completeness, check contradictions, and highlight uncertain items.
  5. Compose the final output strictly according to the schema, using only verified subanswers.
  6. Include a short confidence score and a trace linking final fields to subquestion IDs (for auditability).
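The six steps above can be sketched as a small orchestration loop. This is a minimal illustration under stated assumptions, not a library API: `ask_model` is a hypothetical stand-in for a real LLM call, and the composition and verification steps are deliberately simplified.

```python
def run_self_ask(task, schema_fields, ask_model, max_subqs=6):
    """Sketch of the recipe: decompose, answer, verify, compose.

    `ask_model` is a hypothetical callable standing in for an LLM request.
    """
    # Step 2: generate atomic subquestions (capped to limit decomposition depth)
    subqs = ask_model(f"List subquestions for: {task}")[:max_subqs]
    # Step 3: answer each subquestion into a numbered notes/scratchpad dict
    notes = {i: {"q": q, "a": ask_model(q)} for i, q in enumerate(subqs, start=1)}
    # Step 4: self-check pass - here we simply flag empty answers as uncertain
    uncertain = [i for i, n in notes.items() if not n["a"].strip()]
    verified = {i: n for i, n in notes.items() if i not in uncertain}
    # Steps 5-6: compose final fields from verified notes only, with a trace
    final = {f: " ".join(n["a"] for n in verified.values()) for f in schema_fields}
    trace = {f: sorted(verified) for f in schema_fields}
    return {"final": final, "trace": trace, "uncertain": uncertain}


def fake_model(prompt):
    # Canned responses so the sketch runs without a real model
    if prompt.startswith("List subquestions"):
        return ["What chemistries exist?", "Which chemistry is cheapest?"]
    return "LFP" if "cheapest" in prompt else "NMC, LFP, NCA"


result = run_self_ask("Pick an EV battery", ["recommended_type"], fake_model)
```

In a real pipeline, each `ask_model` call would be a separate model request, which is what makes every subanswer individually inspectable.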

Prompt pattern (practical template)

Use this as a base for your prompts. Replace the placeholders.

Task: [Describe the main task]
Output schema: [Describe JSON/table format to be returned]

Instructions:
1) Generate a sequential list of subquestions needed to solve the Task. Number them.
2) For each subquestion, provide a short answer (2-3 sentences) in the Notes section, with any source or calculation.
3) After all subanswers, run a verification step: list items you are uncertain about.
4) Compose the FinalAnswer strictly matching the Output schema, and include mapping from schema fields to subquestion numbers.

Notes:
- Keep justifications concise (rationale-lite).
- Use the Notes field for working; FinalAnswer must be clean.

Begin.
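If you reuse the pattern programmatically, keeping the boilerplate in one place avoids drift between prompts. A minimal sketch using Python's standard `string.Template`; the shortened instruction text and the example task are illustrative:

```python
from string import Template

# Condensed version of the prompt pattern above, with $task/$schema placeholders
SELF_ASK_PROMPT = Template("""Task: $task
Output schema: $schema

Instructions:
1) Generate a sequential list of subquestions needed to solve the Task. Number them.
2) For each subquestion, provide a short answer (2-3 sentences) in the Notes section.
3) After all subanswers, run a verification step: list items you are uncertain about.
4) Compose the FinalAnswer strictly matching the Output schema.

Begin.""")

prompt = SELF_ASK_PROMPT.substitute(
    task="Summarize current battery types for consumer EVs",
    schema='{"recommended_type": string, "pros": [string], "cons": [string]}',
)
```

`substitute` raises if a placeholder is left unfilled, which is a cheap guard against sending a half-built prompt.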

Example: Research + Summarize (full example)

Task: Summarize current battery types for consumer EVs and recommend one for a commuter car.

Output schema: { "recommended_type": string, "pros": [string], "cons": [string], "confidence": 0-1, "trace": {"recommended_type": [subqIDs], ...} }

Model flow (abridged):

  • Subq 1: What are the main battery chemistries used in consumer EVs?
    • Notes 1: Li-ion NMC, LFP, NCA; short source citations.
  • Subq 2: Compare energy density, cost, lifecycle for commuter use.
    • Notes 2: Table-like bullets with numbers.
  • Subq 3: What are the safety and charging differences?
    • Notes 3: short facts.
  • Verify: Uncertain about availability of LFP in 2026 for some markets.
  • FinalAnswer: Fill schema using Notes 1-3 and mark trace links.

FinalAnswer (example):

{
  "recommended_type": "LFP",
  "pros": ["Lower cost", "Long cycle life", "Safer chemistry"],
  "cons": ["Lower energy density", "May reduce range"],
  "confidence": 0.75,
  "trace": {"recommended_type": [1,2,3]}
}
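Because the FinalAnswer is strict JSON, it can be machine-checked before anything downstream consumes it. A small validator sketch for the example schema above (the field names come from this example, not from any general standard):

```python
import json

# Expected fields and types for the example FinalAnswer schema
REQUIRED = {
    "recommended_type": str,
    "pros": list,
    "cons": list,
    "confidence": (int, float),
    "trace": dict,
}


def validate_final_answer(raw):
    """Return a list of problems; an empty list means the answer conforms."""
    data = json.loads(raw)
    problems = [f"missing field: {k}" for k in REQUIRED if k not in data]
    problems += [
        f"wrong type for {k}"
        for k, t in REQUIRED.items()
        if k in data and not isinstance(data[k], t)
    ]
    conf = data.get("confidence")
    if isinstance(conf, (int, float)) and not 0 <= conf <= 1:
        problems.append("confidence out of [0, 1]")
    return problems
```

Running this on the model's raw output before parsing the trace gives you a fast, deterministic reject/retry signal.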

Quick comparison: Self-Ask vs Scratchpad vs Rationale-Lite

Technique                 | Purpose                                      | When to use
Self-Ask / Subquestioning | Break tasks into verifiable subquestions     | Complex multi-step tasks, research, multi-field outputs
Scratchpad / Notes        | Workspace for messy reasoning                | Always useful as intermediate storage, especially with Self-Ask
Rationale-Lite            | Short justifications for trust and debugging | When you need interpretability but not full chain-of-thought

Practical tips & pitfalls

  • Keep subquestions atomic: "What is X?" not "How good is X and why?" Split if necessary.
  • Limit recursion depth: avoid infinite decomposition. Set a max of 4–6 subquestions per branch.
  • Enforce schema discipline: tell the model to only use verified notes for final fields.
  • Watch for redundancy: the model may ask duplicate subquestions; deduplicate automatically or instruct it to check existing subquestions first.
  • Use self-checks: ask the model to rate confidence per subanswer and list sources.
  • Beware of overfitting: if your subquestions leak future info, the model might bake in assumptions — keep them factual.
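The redundancy tip above can also be enforced outside the prompt. A simple normalization-based deduplication sketch; a production pipeline might compare embedding similarity instead, but exact-match after normalization already catches the common repeats:

```python
def dedupe_subquestions(subqs):
    """Drop near-duplicate subquestions by comparing normalized text."""
    seen, unique = set(), []
    for q in subqs:
        # Normalize: lowercase, trim edge punctuation, collapse whitespace
        key = " ".join(q.lower().strip(" ?.!").split())
        if key not in seen:
            seen.add(key)
            unique.append(q)
    return unique
```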

Evaluation checklist (for prompts and outputs)

  • Completeness: Do the subquestions cover all schema fields?
  • Correctness: Can each final field be traced to a subquestion with evidence?
  • Conciseness: Rationale-lite justifications only.
  • Parsability: Final output strictly conforms to the schema.
  • Transparency: Are uncertainties and sources exposed in Notes?
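The completeness and parsability items on this checklist lend themselves to automation. A sketch, assuming the final answer and its trace have already been parsed into dicts:

```python
def audit_output(final, trace, schema_fields):
    """Run the completeness and parsability checks from the checklist."""
    issues = []
    # Completeness: every schema field must trace back to >= 1 subquestion
    issues += [f"untraced field: {f}" for f in schema_fields if not trace.get(f)]
    # Parsability: the final answer must contain exactly the schema fields
    missing = set(schema_fields) - set(final)
    extra = set(final) - set(schema_fields)
    issues += [f"missing field: {f}" for f in sorted(missing)]
    issues += [f"unexpected field: {f}" for f in sorted(extra)]
    return issues
```

Correctness and conciseness still need a human (or a second model pass), but gating on these mechanical checks first keeps bad outputs out of that loop.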

Closing mic-drop

Self-Ask is the practice of making the model act as its own tiny detective: ask neat, bite-sized questions, answer them with short evidence-backed notes, and then feed only verified facts into a structured final answer. It's the sweet spot between freeform chain-of-thought (too messy) and terse, unexplainable outputs (too black-box). Use it with a scratchpad for thinking, rationale-lite for trust, and rigid output schemas for downstream consumption, and you'll get results that are accurate, debuggable, and actually useful.

Now go decompose something glorious. What tiny question will you ask first?
