RACE Framework
Four clear components that turn a vague request into a structured directive. RACE defines the Role, Action, Context, and Expected Output — giving the AI everything it needs to deliver precisely what you want.
Origin: RACE is a community-developed structured prompting framework from 2024. It provides a simple four-part structure that is easy to remember and apply. The acronym stands for Role, Action, Context, and Expected Output — four dimensions that cover the essential elements of any well-formed prompt. While it overlaps significantly with CO-STAR and other role-based frameworks, its simplicity makes it particularly accessible for beginners who are just starting to learn prompt engineering.
Modern LLM Status: RACE remains a practical entry point for structured prompting. Its four components map directly to the information modern LLMs need to produce focused outputs. More advanced frameworks like CO-STAR add dimensions for tone and style, but RACE’s stripped-down approach is often sufficient for everyday tasks. When you need quick structure without overthinking, RACE provides a reliable minimum — ensuring you always specify who the AI should be, what it should do, what background it needs, and what the output should look like.
Four Questions Every Prompt Should Answer
Most weak prompts fail because they leave out at least one critical piece of information. You might tell the AI what to do but not who to be. Or you assign a role but forget to describe the context. Or you provide everything except what format you want the answer in. RACE ensures you cover all four bases every time.
Role establishes the AI’s identity and expertise. Action specifies the task to perform. Context provides the background information the AI cannot infer on its own. Expected Output defines the format, length, and quality of the response. Together, these four components create a complete brief that eliminates ambiguity.
Think of it like assigning a task to a new team member: you would not just say “write something about marketing.” You would tell them their role, what to produce, the context they need, and what the deliverable should look like.
Frameworks do not need to be complex to be effective. RACE’s strength is that it is almost impossible to forget a component — the acronym itself is a checklist. For users who find longer frameworks overwhelming, RACE provides just enough structure to dramatically improve prompt quality without adding cognitive overhead. The best framework is the one you actually use consistently.
The RACE Process
Four components that structure every effective prompt
Role
Define who the AI should be. This is not just a job title — it is a lens that shapes vocabulary, priorities, and depth of expertise. A “pediatric nurse” answers differently than a “medical researcher” even when given the same question. The role activates the right domain knowledge and communication style.
“You are a senior data analyst with 10 years of experience in e-commerce metrics.”
Action
Specify exactly what the AI should do. Use a clear, concrete verb — “analyze,” “draft,” “compare,” “summarize.” Vague actions like “help with” or “look at” produce vague results. The more precise the action verb, the more focused the output.
“Analyze the conversion rate trends from our Q4 2025 data and identify the three biggest drop-off points in our checkout funnel.”
Context
Provide the background information the AI needs but cannot know on its own. This includes relevant data, constraints, audience details, prior decisions, or domain-specific circumstances. Context is what separates a generic response from a situationally accurate one.
“Our e-commerce store serves primarily mobile users (78% of traffic). We recently redesigned the checkout flow in October 2025, and conversion rates dropped 12% compared to the previous quarter.”
Expected Output
Define what the deliverable should look like. Specify the format (bullet points, table, narrative), length (brief summary vs. detailed report), and any quality criteria. Without this, the AI guesses — and its guess may not match your needs.
“Provide a structured report with: (1) a summary table of drop-off points with percentage impact, (2) a root cause analysis for each, and (3) three prioritized recommendations. Keep it under 500 words.”
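Because RACE is just four labeled parts assembled in order, it is easy to turn into a reusable template. A minimal sketch in Python; the function name and label format are illustrative conventions, not part of the framework itself:

```python
def build_race_prompt(role: str, action: str, context: str, expected_output: str) -> str:
    """Assemble a RACE prompt from its four components.

    Raises ValueError if any component is empty, since the acronym
    doubles as a completeness checklist.
    """
    components = {
        "Role": role,
        "Action": action,
        "Context": context,
        "Expected Output": expected_output,
    }
    missing = [name for name, value in components.items() if not value.strip()]
    if missing:
        raise ValueError(f"Missing RACE component(s): {', '.join(missing)}")
    # One labeled line per component, in R-A-C-E order.
    return "\n".join(f"{name}: {value.strip()}" for name, value in components.items())

prompt = build_race_prompt(
    role="You are a senior data analyst with 10 years of experience in e-commerce metrics.",
    action="Analyze the conversion rate trends from our Q4 2025 data.",
    context="Our store serves primarily mobile users (78% of traffic).",
    expected_output="A summary table of drop-off points. Keep it under 500 words.",
)
```

The validation step is the point: if any of the four components is blank, the function refuses to produce a prompt, enforcing the "acronym as checklist" idea mechanically.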
See the Difference
How RACE transforms a vague prompt into a precision tool
Unstructured Prompt
Help me write something about our product launch.
Here is a general product launch announcement: “We are excited to announce our new product...” [Generic, unfocused content with no clear audience, format, or purpose]
RACE Prompt
Role: You are a tech PR specialist with startup experience.
Action: Draft a press release announcing our AI-powered scheduling tool.
Context: We are a 20-person startup launching our first B2B product. Target media outlets are TechCrunch and The Verge. The tool reduces meeting scheduling time by 60%.
Expected Output: A 400-word press release with a compelling headline, three key benefit bullets, a founder quote placeholder, and boilerplate. AP style.
A polished, publication-ready press release with a newsworthy headline, quantified benefits, proper AP style formatting, and all requested sections — tailored to B2B tech media. Always verify quoted statistics and claims before publishing.
Natural Language Works Too
While structured frameworks and contextual labels are powerful tools, LLMs are exceptionally good at understanding natural language. As long as your prompt contains the actual contextual information needed to produce the response you are looking for (the who, what, why, and constraints), the AI can deliver complete and accurate results whether you use a formal framework or plain conversational language. Even in 2026, with the best prompts, verifying AI output remains a necessary step.
RACE in Action
See how four components produce focused, usable outputs
Role: You are a professional business development manager.
Action: Draft a follow-up email to a potential client who attended our product demo last week but has not responded to our initial outreach.
Context: The client is a mid-size logistics company evaluating fleet management software. They seemed interested in our route optimization feature during the demo. Our competitor is also in their evaluation pipeline. The tone should be professional but warm, not pushy.
Expected Output: A concise email (under 150 words) with a clear subject line, a reference to the demo, one specific value proposition, and a soft call-to-action. No attachments referenced.
Each RACE component eliminates a different source of ambiguity. The Role sets the professional tone. The Action specifies exactly what type of email. The Context provides the relationship history and competitive dynamic. The Expected Output constrains length and structure. The result is a targeted, ready-to-send email — not a generic template. Always review AI-drafted communications before sending to ensure accuracy and appropriate tone.
Role: You are a senior technical writer specializing in API documentation.
Action: Write a quick-start guide for our user authentication API endpoint.
Context: The audience is junior developers integrating our REST API for the first time. The endpoint uses OAuth 2.0 with bearer tokens. Common mistakes include forgetting the Content-Type header and using expired tokens. Our documentation style follows the “explain, show, warn” pattern.
Expected Output: A step-by-step guide with: (1) prerequisites checklist, (2) code example in Python using the requests library, (3) expected success response, (4) common errors table with solutions. Use markdown formatting.
The Role activates technical writing expertise. The Action narrows the scope to one endpoint. The Context reveals the audience level and common pitfalls to address proactively. The Expected Output specifies the exact structure and format. Without the context about common mistakes, the guide would miss the most valuable troubleshooting content. Always verify code examples and API responses against your actual system before publishing.
Role: You are an experienced high school biology teacher who makes complex topics accessible through everyday analogies.
Action: Create a lesson explanation for how mRNA vaccines work.
Context: This is for 10th-grade students with basic cell biology knowledge. Many students have heard misinformation about mRNA vaccines. The explanation should be scientifically accurate while being engaging and age-appropriate. Avoid jargon where possible.
Expected Output: A 300-word explanation using at least two relatable analogies, structured as: (1) what mRNA is, (2) how the vaccine delivers instructions, (3) how the immune response works. End with two discussion questions for the class.
The Role establishes the teaching style with analogies. The Action defines the specific topic. The Context reveals the audience level and the misinformation challenge. The Expected Output requests a precise structure with pedagogical elements. Note how the context about misinformation shapes the response: a good teacher preemptively addresses misconceptions rather than ignoring them. Always have subject matter experts verify scientific content generated by AI.
When to Use RACE
Best for quick, structured prompts that cover the essentials
Perfect For
Emails, reports, summaries, and content drafts where you need reliable output quality without spending time on advanced framework dimensions.
New users learning structured prompting for the first time — RACE’s four-letter acronym is easier to remember and apply than six- or seven-component frameworks.
When you need to iterate quickly and test ideas — RACE provides enough structure to get usable outputs without over-engineering the prompt.
When you need a simple, shared prompting standard across a team — RACE is easy to teach, easy to remember, and easy to audit for completeness.
Skip It When
When tone and audience calibration are critical — RACE lacks explicit dimensions for tone and audience. Use CO-STAR instead for these scenarios.
Mathematical proofs, multi-step logic, or research analysis where the challenge is reasoning quality rather than task specification. Use Chain-of-Thought or Self-Ask.
Tasks that require sequencing multiple actions, decision gates, or iterative refinement. RISEN or AgentFlow handle multi-step processes better than RACE’s flat structure.
Use Cases
Where RACE delivers the most value
Email Drafting
Structure any business email with a clear role, purpose, context, and format specification — from follow-ups to cold outreach to internal updates.
Report Generation
Generate structured reports by defining the analyst role, analysis task, data context, and exact report format you need delivered.
Content Creation
Blog posts, social media content, and marketing copy — RACE ensures every piece has a clear voice, purpose, background, and deliverable format.
Customer Support Scripts
Build response templates with a defined support agent role, specific resolution actions, customer context, and consistent response format.
Training Materials
Create educational content by specifying the instructor role, teaching objective, learner context, and desired material format.
Project Briefs
Draft project proposals and briefs by defining your strategic role, the scope of work, project constraints, and the deliverable document format.
Where RACE Fits
RACE bridges unstructured prompts and comprehensive frameworks
Think of RACE as the minimum viable framework for structured prompting. Just as a minimum viable product includes only the essential features needed to be useful, RACE includes only the four dimensions that every good prompt must address. If your outputs are consistently good with RACE alone, you do not need a more complex framework. If you find yourself needing more control over tone, audience, or multi-step workflows, that is when you graduate to CO-STAR, RISEN, or other specialized approaches.
Related Techniques & Frameworks
Explore complementary structured prompting approaches
Build Your RACE Prompt
Structure your next prompt with all four RACE components or find the right framework for your specific task.