Decomposition Technique

Self-Ask Prompting

Complex questions hide simpler ones inside them. Self-Ask forces the model to surface those hidden sub-questions explicitly — asking and answering each one in sequence before assembling a final, well-supported answer.

Technique Context: 2022

Introduced: Self-Ask was published in 2022 by Press et al. The technique addresses multi-hop reasoning — questions that require chaining together multiple facts to reach an answer. Instead of hoping the model silently connects the dots, Self-Ask introduces a structured protocol: the model explicitly asks “Are follow-up questions needed here?” then generates and answers each sub-question before synthesizing a final response. The original paper paired Self-Ask with a search engine for intermediate lookups, anticipating retrieval-augmented generation (RAG) patterns.

Modern LLM Status: The core insight of Self-Ask — decomposing complex questions into explicit sub-questions — has been largely absorbed into standard LLM behavior. Claude, GPT-4, and Gemini naturally break down multi-hop questions internally. However, Self-Ask’s explicit “Follow-up question / Intermediate answer” format remains valuable when you need fully transparent, auditable reasoning chains where each inference step is visible and verifiable. The technique is most relevant today for interpretability, debugging, and educational contexts.

The Core Insight

Make the Model Question Itself

Many questions appear simple on the surface but actually require connecting multiple facts in sequence. “Was the president who signed NAFTA born before or after the moon landing?” seems straightforward, but it hides two prerequisite questions: who signed NAFTA, and when were they born? Standard prompting asks the model to make these connections silently, where errors can hide undetected.

Self-Ask flips the process into the open. The model is instructed to pause and ask: “Are follow-up questions needed here?” When the answer is yes, it generates each sub-question explicitly, answers it, then uses that intermediate answer to fuel the next step. The reasoning chain becomes a visible, step-by-step interrogation rather than a black-box leap.

Think of it like a detective who, instead of jumping to a conclusion, writes down each question they need to answer on a whiteboard — then methodically solves each one before connecting the evidence into a final verdict.

Why Explicit Questions Beat Implicit Reasoning

When a model reasons silently through a multi-hop question, it can skip steps, conflate facts, or quietly substitute one entity for another — and you would never know. Self-Ask’s explicit sub-question format makes every inference visible: if the model gets the wrong intermediate answer, you can see exactly where the chain broke. This transforms opaque reasoning into a debuggable, auditable trail.
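In practice, Self-Ask is usually delivered as a few-shot prompt: a worked exemplar teaches the model the "Follow up / Intermediate answer" format, and the model continues that pattern for the new question. A minimal Python sketch follows; the exemplar content is in the style of the original paper, and the `build_self_ask_prompt` helper is an illustrative name, not part of any library:

```python
# A minimal Self-Ask prompt scaffold. The one-shot exemplar teaches the
# model the "Follow up / Intermediate answer" format; the model then
# continues the pattern for the new question.
SELF_ASK_EXEMPLAR = """\
Question: Who lived longer, Muhammad Ali or Alan Turing?
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali.
"""

def build_self_ask_prompt(question: str) -> str:
    """Prepend the exemplar, then prime the model with the decision gate."""
    return (
        SELF_ASK_EXEMPLAR
        + f"\nQuestion: {question}\n"
        + "Are follow up questions needed here:"
    )

prompt = build_self_ask_prompt(
    "Was the iPhone invented in the same state where Gerald Ford was born?"
)
print(prompt.endswith("Are follow up questions needed here:"))  # True
```

Ending the prompt mid-pattern at the decision gate is the key design choice: the model's most likely continuation is to answer "Yes." and begin emitting follow-up questions in the exemplar's format.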

The Self-Ask Process

Four stages from complex question to verified answer

Step 1: Receive the Complex Question

Start with a question that requires multiple facts or reasoning steps to answer. The question should involve connecting information that no single lookup can resolve — these are called compositional or multi-hop questions.

Example

“Did the company that created the iPhone also make the first personal computer to sell one million units?”

Step 2: Ask "Are Follow-Up Questions Needed?"

The model evaluates the question and determines whether it can answer directly or needs to decompose it first. This decision gate is the signature move of Self-Ask — it forces the model to consciously assess complexity before proceeding rather than guessing at an answer.

Example

“Are follow-up questions needed here: Yes.” — The model recognizes it needs to identify the iPhone creator and the first million-unit PC separately.

Step 3: Generate and Answer Sub-Questions Sequentially

The model produces each sub-question and its intermediate answer in order. Each subsequent question can build on the answers already obtained. This sequential chaining ensures that information flows correctly from one step to the next, and each intermediate answer is recorded explicitly.

Example

Follow-up: What company created the iPhone?
Intermediate answer: Apple Inc.
Follow-up: What was the first personal computer to sell one million units?
Intermediate answer: The Apple II.
Follow-up: What company made the Apple II?
Intermediate answer: Apple Inc. (then Apple Computer, Inc.)

Step 4: Synthesize the Final Answer

With all intermediate answers in hand, the model combines them into a clear, well-supported final response. The answer is grounded in the explicit evidence chain rather than an unsupported assertion. Anyone reviewing the output can trace the logic from question to sub-questions to conclusion.

Example

So the final answer is: Yes. Apple Inc. created the iPhone and also made the Apple II, which was the first personal computer to sell one million units.
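The four stages above can be sketched as a simple driver loop. This is an illustrative sketch only: `ask_model` stands in for a real LLM call, and it is stubbed here with canned responses so the control flow is visible end to end.

```python
def self_ask(question, ask_model, max_hops=5):
    """Run the Self-Ask protocol: decision gate, sequential sub-questions,
    then final synthesis. `ask_model(transcript) -> str` returns the model's
    next contribution given the transcript so far."""
    transcript = f"Question: {question}\nAre follow up questions needed here:"
    # Stage 2: the decision gate.
    gate = ask_model(transcript)
    transcript += f" {gate}\n"
    if gate.strip().lower().startswith("no"):
        # No decomposition needed; answer directly.
        return ask_model(transcript + "So the final answer is:")
    # Stage 3: generate and answer sub-questions until the model synthesizes.
    for _ in range(max_hops):
        step = ask_model(transcript)
        transcript += step + "\n"
        if step.startswith("So the final answer is:"):
            return step  # Stage 4: synthesis reached.
    return transcript  # Hop budget exhausted; return the partial chain.

# Canned responses standing in for a real model, following the example above.
_script = iter([
    "Yes.",
    "Follow up: What company created the iPhone?\n"
    "Intermediate answer: Apple Inc.",
    "Follow up: What was the first personal computer to sell one million units?\n"
    "Intermediate answer: The Apple II.",
    "So the final answer is: Yes.",
])
print(self_ask(
    "Did the company that created the iPhone also make the first "
    "personal computer to sell one million units?",
    lambda transcript: next(_script),
))  # prints "So the final answer is: Yes."
```

In a real system, `ask_model` would call an LLM API with a stop sequence on "Intermediate answer:" so that each intermediate answer can be supplied, or verified, by an external tool.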

See the Difference

Why explicit sub-questions produce more reliable answers

Direct Answer

Prompt

Were the economic policies that caused the 2008 financial crisis enacted during a Republican or Democratic presidency?

Response

The 2008 financial crisis was caused by policies enacted during the Bush administration, so Republican.

Oversimplified, skips crucial context, no visible reasoning
VS

Self-Ask

Decomposed Reasoning

Follow-up: What policies contributed to the 2008 crisis?
Intermediate: Key factors include the Gramm-Leach-Bliley Act (1999) and the Commodity Futures Modernization Act (2000), plus regulatory failures in mortgage oversight.
Follow-up: Who was president when those acts were signed?
Intermediate: Bill Clinton (Democrat) signed both acts.
Follow-up: Were there additional contributing factors under later administrations?
Intermediate: Yes — the Bush administration (Republican) maintained deregulatory policies and oversight gaps that amplified the risk.

Final Answer

The crisis resulted from policies enacted across both parties — foundational deregulation under Clinton (Democrat) and continued deregulatory posture under Bush (Republican). Attributing it to a single presidency oversimplifies the multi-administration causal chain.

Nuanced, traceable, each inference step visible and verifiable

Natural Language Works Too

While structured frameworks and contextual labels are powerful tools, LLMs are exceptionally good at understanding natural language. As long as your prompt contains the contextual information needed to produce the response you're looking for (the who, what, why, and constraints), the model can deliver complete, accurate results whether you use a formal framework or plain conversational language. Even with the best prompts, though, verifying AI output remains a necessary step.

Self-Ask in Action

See how explicit sub-questions improve multi-hop reasoning

Original Question

“Was the iPhone invented in the same state where President Gerald Ford was born?”

Self-Ask Decomposition

Are follow-up questions needed here? Yes.

Follow-up: Where was the iPhone invented?
Intermediate answer: The iPhone was developed by Apple Inc. in Cupertino, California.

Follow-up: Where was President Gerald Ford born?
Intermediate answer: Gerald Ford was born in Omaha, Nebraska.

Follow-up: Are California and Nebraska the same state?
Intermediate answer: No, California and Nebraska are different states.

So the final answer is: No. The iPhone was invented in California, while Gerald Ford was born in Nebraska.

Original Question

“Did the treaty that ended World War I contribute to the conditions that started World War II?”

Self-Ask Decomposition

Are follow-up questions needed here? Yes.

Follow-up: What treaty ended World War I?
Intermediate answer: The Treaty of Versailles, signed in 1919.

Follow-up: What were the key provisions of the Treaty of Versailles?
Intermediate answer: It imposed heavy reparations on Germany, required territorial concessions, limited Germany’s military, and assigned war guilt to Germany (Article 231).

Follow-up: How did these provisions affect Germany in the 1920s and 1930s?
Intermediate answer: The reparations contributed to economic instability, hyperinflation, and widespread resentment. These conditions fueled nationalist movements, including the rise of the Nazi Party.

Follow-up: Did these conditions contribute to the start of World War II?
Intermediate answer: Yes. Historians broadly agree that the punitive terms of Versailles created political and economic conditions that Adolf Hitler exploited to gain power and pursue aggressive expansion.

So the final answer is: Yes. The Treaty of Versailles imposed conditions on Germany that contributed to the economic hardship and political instability from which World War II emerged, though it was one of several contributing factors rather than the sole cause.

Original Question

“Was the element gallium discovered before or after the scientist who predicted its existence died?”

Self-Ask Decomposition

Are follow-up questions needed here? Yes.

Follow-up: Who predicted the existence of gallium?
Intermediate answer: Dmitri Mendeleev predicted gallium’s existence in 1871, calling it “eka-aluminium” based on gaps in his periodic table.

Follow-up: When was gallium actually discovered?
Intermediate answer: Gallium was discovered in 1875 by Paul-Émile Lecoq de Boisbaudran.

Follow-up: When did Dmitri Mendeleev die?
Intermediate answer: Mendeleev died on February 2, 1907.

Follow-up: Was 1875 before 1907?
Intermediate answer: Yes, 1875 is 32 years before 1907.

So the final answer is: Gallium was discovered before Mendeleev died. He predicted it in 1871, it was found in 1875, and he lived until 1907 — meaning he witnessed the confirmation of his prediction during his lifetime.

When to Use Self-Ask

Best for multi-hop questions that demand transparent reasoning

Perfect For

Multi-Hop Questions

Questions that chain multiple facts together — “Who was the president when the company founded by the inventor of the telephone went public?”

Reasoning Audits

When you need to verify every step of the model’s logic — each sub-question and answer creates a checkable evidence trail.

Educational Contexts

Teaching students how complex questions decompose — Self-Ask makes the hidden reasoning structure visible and instructive.

Fact-Checking Pipelines

Breaking claims into verifiable sub-claims — each intermediate answer can be independently confirmed against trusted sources.

Skip It When

Single-Hop Questions

Questions answerable with a single fact lookup — “What is the capital of France?” needs no decomposition.

Speed-Critical Applications

When latency matters more than transparency — Self-Ask adds tokens and processing time for each sub-question and answer pair.

Creative or Open-Ended Tasks

Writing, brainstorming, or opinion-based questions where there is no factual chain to decompose — Self-Ask is designed for factual multi-hop reasoning.

Use Cases

Where Self-Ask delivers the most value

Research Analysis

Decompose complex research questions into verifiable sub-inquiries, ensuring each factual claim is independently supported before drawing conclusions.

Legal Due Diligence

Break regulatory compliance questions into jurisdiction-specific sub-questions, tracing each requirement back to its statutory source.

Medical Reasoning

Decompose differential diagnosis questions into symptom-by-symptom analysis, making each diagnostic inference explicit and reviewable by clinicians.

Customer Support Escalation

When a support issue spans multiple systems or policies, decompose the customer’s problem into sub-questions that each map to a specific knowledge base article.

Security Incident Analysis

Trace attack vectors through explicit sub-questions: what vulnerability was exploited, what systems were affected, what data was accessed, and what was the blast radius.

Financial Analysis

Break investment thesis questions into sub-analyses: market conditions, company fundamentals, competitive positioning, and risk factors — each answered and sourced independently.

Where Self-Ask Fits

Self-Ask bridges implicit reasoning and structured decomposition

Chain-of-Thought (Implicit Steps): reasoning as continuous prose
Self-Ask (Explicit Questions): structured sub-question protocol
Decomposed Prompting (Modular Sub-Tasks): dedicated handlers per sub-problem
Graph of Thought (Non-Linear Paths): interconnected reasoning nodes
Combine with Search

Self-Ask was originally designed to pair each sub-question with a search engine lookup — making it a precursor to modern retrieval-augmented generation (RAG). In production systems, you can route each “Follow-up question” to a knowledge base or API, then feed the retrieved answer back as the “Intermediate answer” for maximum factual grounding.
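A minimal sketch of that routing, with a plain dictionary standing in for the knowledge base or search API; the `KNOWLEDGE_BASE` contents and helper names are illustrative assumptions, not a real retrieval library:

```python
# Route each Self-Ask follow-up to a retriever instead of the model's
# parametric memory. KNOWLEDGE_BASE is a toy stand-in for a search
# engine or vector store; in production, replace `retrieve` with a
# real lookup against your knowledge base or API.
KNOWLEDGE_BASE = {
    "what company created the iphone?": "Apple Inc.",
    "where was gerald ford born?": "Omaha, Nebraska",
}

def retrieve(sub_question: str) -> str:
    """Answer one sub-question from the knowledge base."""
    return KNOWLEDGE_BASE.get(sub_question.strip().lower(),
                              "No answer found.")

def ground_followups(followups: list[str]) -> str:
    """Build the 'Follow up / Intermediate answer' transcript, feeding
    each retrieved answer back in as the intermediate answer."""
    lines = []
    for q in followups:
        lines.append(f"Follow up: {q}")
        lines.append(f"Intermediate answer: {retrieve(q)}")
    return "\n".join(lines)

print(ground_followups([
    "What company created the iPhone?",
    "Where was Gerald Ford born?",
]))
```

Because every intermediate answer now comes from retrieval rather than the model's memory, each link in the chain can be traced back to a source document, which is exactly the factual grounding this pattern is meant to provide.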
