ROSES Framework
Five components that ground every prompt in a concrete situation. ROSES combines Role, Objective, Scenario, Expected Solution, and Steps to ensure AI responses are anchored in reality — not generic advice, but guidance tailored to your specific circumstances.
Origin: The ROSES framework emerged in 2024 as a practical prompting structure designed to solve a common problem: AI responses that are technically correct but generically unhelpful. While most frameworks define what you want the AI to do, ROSES uniquely emphasizes where you are doing it through its Scenario component. By embedding a specific, concrete situation into the prompt, ROSES forces the AI to generate contextually grounded responses rather than one-size-fits-all advice. The Steps component further ensures outputs include actionable process guidance, not just declarative answers.
Modern LLM Status: ROSES is particularly effective for operational, procedural, and consulting-style tasks. Modern LLMs like Claude, GPT-4, and Gemini produce significantly better outputs when given a concrete scenario rather than an abstract goal. The Scenario component addresses the “helpful but vague” problem — the tendency of AI to generate generic best practices instead of situation-specific recommendations. ROSES is especially valuable in professional contexts where the right answer depends heavily on the specific circumstances: industry, company size, team maturity, technical constraints, and time horizons.
Ground the AI in Your Reality
When you ask AI “how should I handle a difficult employee conversation?”, you get a generic playbook. When you describe the specific scenario — a senior engineer who has missed three deadlines since returning from parental leave, on a team that is already understaffed — you get a nuanced, contextually appropriate response. The difference is not the AI’s capability; it is the specificity of the situation you provide.
ROSES makes scenario specificity a first-class component of the prompt. The Role sets the expert perspective. The Objective defines the goal. The Scenario paints the concrete situation with its unique variables. The Expected Solution describes the type of output needed. The Steps request actionable process guidance. Together, these five components ensure the AI is not just answering a question — it is solving your problem in your context.
Think of ROSES like briefing a consultant: you would not just tell them the problem — you would describe the specific situation, the constraints, the stakeholders, and the kind of deliverable you need. ROSES structures that same briefing for AI.
Abstract prompts get abstract answers. “Help me improve team communication” produces a list of tips you could find in any management book. A ROSES Scenario like “my 8-person remote team across 3 time zones has a 6-hour overlap window, and async updates in Slack are being missed because channels are too noisy” produces specific, implementable recommendations tailored to that exact situation. The scenario is the difference between advice and a solution.
The ROSES Process
Five components that build a scenario-grounded prompt
Role — Assign the Expert Perspective
Define the professional identity or expert lens the AI should adopt. The Role shapes how the AI interprets the scenario and frames its recommendations. A DevOps engineer and a CTO will analyze the same infrastructure problem from very different angles — the Role determines which perspective you need.
“Act as a senior DevOps engineer with 8 years of experience managing cloud infrastructure for mid-size SaaS companies. You specialize in incident response and have led post-mortems for P1 outages.”
Objective — Define the Goal
State what you want to achieve. The Objective anchors the AI on the specific outcome, preventing it from wandering into tangential recommendations. Be clear about whether you need analysis, a plan, a document, or a decision framework.
“Create an incident response runbook for database failover scenarios, including escalation procedures, rollback criteria, and communication templates for stakeholders.”
Scenario — Describe the Specific Situation
Paint the concrete, real-world situation with its unique variables, constraints, and stakeholders. The Scenario is the signature component of ROSES — it transforms a generic request into a situation-specific one. Include the who, what, when, where, and the specific constraints that make this situation unique.
“We run a PostgreSQL primary with two read replicas on AWS RDS. Our platform serves 15,000 concurrent users during peak hours (9am–5pm EST). Last month, a failover took 12 minutes and caused partial data inconsistency that required 3 hours of manual reconciliation. Our on-call rotation has 4 engineers, two of whom are relatively new to the system.”
Expected Solution — Specify the Output Type
Describe the format and nature of the response you need. Expected Solution tells the AI whether you want a document, a checklist, a decision tree, a script, or a strategic recommendation. It also sets scope boundaries — whether you need a comprehensive plan or a focused recommendation.
“A step-by-step runbook formatted as a decision tree. Each branch should include: trigger conditions, specific commands to run, verification checks, and escalation criteria. Include a separate communication template section for status updates to engineering, product, and customer success teams.”
Steps — Request Process Guidance
Ask for the specific sequence of actions, decisions, or phases that should be followed. Steps ensures the AI does not just describe what to do but explains the order and logic of how to do it. This component is what transforms a recommendation into an executable plan.
“Break the runbook into phases: Detection (how we know there is a problem), Assessment (how we determine severity), Action (what we do), Verification (how we confirm resolution), and Retrospective (what we document afterward). Each phase should have numbered steps that even a junior on-call engineer can follow. All commands and procedures should be verified in a staging environment before production use.”
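Assembled in order, the five components read as a single briefing. As a minimal sketch (the helper name and section wording here are illustrative choices, not part of the framework itself), a ROSES prompt can be composed programmatically so each component stays explicit and none is skipped:

```python
def build_roses_prompt(role, objective, scenario, expected_solution, steps):
    """Assemble the five ROSES components into one labeled prompt string.

    Labeling each section lets the model distinguish the expert lens (Role),
    the goal (Objective), the concrete situation (Scenario), the output
    format (Expected Solution), and the process guidance (Steps).
    """
    sections = [
        ("Role", role),
        ("Objective", objective),
        ("Scenario", scenario),
        ("Expected Solution", expected_solution),
        ("Steps", steps),
    ]
    return "\n\n".join(f"{label}: {text.strip()}" for label, text in sections)


# Abbreviated version of the DevOps example above.
prompt = build_roses_prompt(
    role="Act as a senior DevOps engineer specializing in incident response.",
    objective="Create an incident response runbook for database failover.",
    scenario=(
        "We run a PostgreSQL primary with two read replicas on AWS RDS, "
        "serving 15,000 concurrent users at peak. Last month a failover "
        "took 12 minutes and caused partial data inconsistency."
    ),
    expected_solution="A step-by-step runbook formatted as a decision tree.",
    steps="Phases: Detection, Assessment, Action, Verification, Retrospective.",
)
```

Keeping the components as separate arguments also makes it easy to reuse a Role or Steps section across prompts while swapping in a new Scenario.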
See the Difference
Why scenario-grounded prompts produce more actionable guidance
Abstract Prompt
How should I handle a difficult conversation with an underperforming employee?
Prepare for the conversation by documenting specific examples. Choose a private setting. Start with positive feedback before addressing concerns. Use “I” statements. Listen actively. Set clear expectations and a follow-up timeline. Document the conversation.
ROSES Prompt
Role: Engineering manager with 3 years of experience leading a team of 6.
Objective: Have a supportive but honest conversation about missed deadlines.
Scenario: Senior engineer, top performer for 2 years, has missed 3 sprint commitments since returning from 4 months of parental leave. Team is understaffed and cannot absorb the slack. No prior performance issues.
Expected Solution: A conversation script with specific talking points and likely responses.
Steps: Opening, acknowledging context, addressing the pattern, collaborative problem-solving, agreeing on next steps.
Opening (2 min): “I wanted to check in on how things are going since your return. I know the transition back has been significant, and I want to make sure we are setting you up for success, not just piling on work.”
Acknowledging Context (3 min): “Before we get into the specifics, I want to say clearly: your contributions over the past two years have been exceptional. This conversation is not about your capability — it is about making sure the current workload and pace are sustainable for you...”
Natural Language Works Too
While structured frameworks and contextual labels are powerful tools, LLMs are exceptionally good at understanding natural language. As long as your prompt contains the actual contextual information needed to create, answer, or deliver the response you are looking for — the who, what, why, and constraints — the AI can produce complete and accurate results whether you use a formal framework or plain conversational language. Even with the best prompts, however, verifying AI output remains a necessary step.
ROSES in Action
See how scenario-grounded prompts produce situation-specific guidance
Role: Act as a cloud solutions architect with experience migrating legacy on-premise systems to AWS for regulated industries.
Objective: Create a risk assessment for migrating our core billing system from on-premise servers to AWS, with mitigation strategies for each identified risk.
Scenario: We are a healthcare billing company processing $2.3B in claims annually. Our current system runs on physical servers in a co-located data center with 99.95% uptime over the past 3 years. We must maintain HIPAA compliance throughout the migration. Our engineering team of 12 has limited AWS experience (2 certified). The migration must be completed within 6 months due to a data center lease expiration. We cannot afford more than 4 hours of total downtime during the transition.
Expected Solution: A risk matrix with severity ratings (Critical/High/Medium/Low), probability assessments, specific mitigation strategies for each risk, and a prioritized action list.
Steps: Organize the assessment into phases: Pre-Migration risks, Migration-Day risks, Post-Migration risks, and Ongoing Compliance risks. Each risk should include a mitigation plan with owner, timeline, and verification criteria. All compliance recommendations should be reviewed with your legal and compliance team before implementation.
Risk Assessment: Billing System Cloud Migration
Phase 1: Pre-Migration Risks
CRITICAL — HIPAA Compliance Gap During Transition
Probability: High | Impact: Critical
Description: Current data handling procedures are designed for on-premise infrastructure. AWS requires different encryption, access control, and audit logging configurations to maintain HIPAA compliance.
Mitigation: Engage a HIPAA-specialized AWS consulting partner (weeks 1–4). Complete a pre-migration compliance audit. Implement AWS HIPAA-eligible services only (RDS with encryption, S3 with server-side encryption, CloudTrail for audit logs).
Owner: CISO + AWS Partner | Timeline: Weeks 1–6 | Verification: Third-party compliance audit before go-live.
HIGH — Team Skill Gap
Probability: High | Impact: High
Description: 10 of 12 engineers lack AWS certification. Risk of misconfiguration during migration...
Mitigation: Accelerated AWS training program (weeks 1–8). Pair each migration task with a certified engineer as lead...
Role: Act as a veteran middle school teacher and instructional coach with 15 years of experience supporting new teachers in urban school settings.
Objective: Create a practical classroom management plan for my first year of teaching, focusing on building rapport and establishing routines rather than punitive discipline.
Scenario: I am a first-year 7th grade English teacher in a Title I school with 28 students per class, 5 classes per day. About 40% of my students are English language learners. The school has a positive behavior intervention system (PBIS) but enforcement is inconsistent across classrooms. I have 15 minutes between classes and no teaching assistant. Several students have IEPs with behavioral goals.
Expected Solution: A week-by-week plan for the first month, with specific routines, scripts for common situations, and strategies adapted for ELL students and students with IEPs.
Steps: Cover these phases: First Day procedures, Week 1 routines, Weeks 2–3 refinement, and Week 4 assessment. Include specific language I can use, not just strategies. All approaches should be reviewed with your school administration and special education coordinator before implementation.
First-Year Classroom Management Plan: Month 1
First Day Procedures
Greeting Routine (practice this — it sets everything):
Stand at the door. Make eye contact. Say each student’s name (practice pronunciation beforehand — for your ELL students, this is not optional, it is the first act of respect). Script: “Welcome to Room 214. I am Ms./Mr. [Name]. Find the seat with your name card — there is something on your desk to start.”
Desk Activity (while you greet the rest):
A low-stakes, visual activity that requires no reading comprehension barrier: “Draw or write 3 things you want me to know about you.” Provide sentence starters in English and Spanish: “I like... / Me gusta...”, “I am good at... / Soy bueno/a en...”...
Role: Act as a Head of Product Marketing for a B2B SaaS company, experienced in launching features to enterprise clients with complex buying committees.
Objective: Develop a launch communication strategy for a new AI-powered analytics feature that targets both existing customers (upsell) and new prospects.
Scenario: We are launching an AI analytics dashboard as an add-on to our project management platform. Pricing is $15/user/month on top of the base plan. Beta testers reported 40% time savings on weekly reporting. However, two enterprise beta clients raised concerns about data privacy (their data being used to train AI models). Our competitor launched a similar feature last month at $10/user/month but with fewer capabilities. We have 4 weeks until launch and a marketing budget of $50,000.
Expected Solution: A multi-channel launch plan with specific messaging for each audience segment, addressing the pricing objection and privacy concern proactively.
Steps: Organize into: Pre-Launch (weeks 1–2), Launch Week, and Post-Launch (weeks 1–2). Each phase should include specific channels, messaging, and success metrics. All pricing claims and competitive comparisons should be verified for accuracy before publication.
AI Analytics Launch Plan: 4-Week Strategy
Pre-Launch: Weeks 1–2
Priority 1: Neutralize the Privacy Concern
Channel: Dedicated landing page + email to all enterprise accounts
Message: “Your data stays yours. Our AI analytics processes your data in isolated environments and never uses customer data for model training. Here is exactly how it works.”
Action: Publish a technical whitepaper on data architecture (have engineering co-author for credibility). Include a third-party security audit badge if available.
Success Metric: Page views >2,000, enterprise support ticket volume on privacy <10.
Priority 2: Competitive Positioning
Message Strategy: Do not compete on price. Compete on value. “Our analytics saves teams 40% of their reporting time. That is [X] hours per month per user. At $15/user, the ROI pays for itself in week one.”...
When to Use ROSES
Best for situation-specific problems that need actionable process guidance
Perfect For
Situations where the right answer depends entirely on the specific context — company size, industry, constraints, and stakeholder dynamics. ROSES ensures the AI consults on your problem, not a generic one.
Creating step-by-step procedures for specific environments — incident response, deployment processes, or compliance workflows tailored to your exact tech stack and team.
Developing lesson plans, coaching scripts, or training materials that must account for the specific learner population, institutional constraints, and available resources.
Go-to-market plans, hiring strategies, or product roadmaps where budget, timeline, team size, and competitive dynamics shape the recommended approach.
Skip It When
Questions with universal answers — “What is Agile methodology?” — do not benefit from scenario specificity. Use simpler prompts or direct questions.
Brainstorming, creative writing, or ideation tasks where over-specifying the scenario constrains the creative output. Use BAB for narrative or free-form prompts for exploration.
When the primary need is calibrating tone, style, and audience rather than grounding in a scenario. Use CO-STAR for pure communication tasks.
Use Cases
Where ROSES delivers the most value
Incident Response
Build runbooks and response plans tailored to your specific infrastructure, team structure, SLAs, and regulatory requirements — not generic incident response templates.
Curriculum Development
Design lesson plans and learning paths that account for specific student populations, institutional resources, time constraints, and learning objectives.
Implementation Guides
Create step-by-step implementation plans for new tools, processes, or systems that reflect your team’s actual skill levels, existing tech stack, and timeline.
Management Coaching
Get specific guidance for difficult management situations — performance conversations, team conflicts, or organizational changes — grounded in the actual people and dynamics involved.
Clinical Decision Support
Generate differential diagnosis frameworks or treatment plan outlines grounded in specific patient presentations, comorbidities, and institutional protocols — always for review by qualified clinicians.
Market Entry Strategy
Develop go-to-market plans grounded in your specific competitive landscape, budget constraints, team capabilities, and target market characteristics.
Where ROSES Fits
ROSES bridges generic role-based prompting and full scenario-driven consulting
The more specific your Scenario component, the more valuable the AI’s output becomes. Instead of “a growing startup,” say “a 45-person Series B startup with $12M ARR and a 3-month runway concern.” Instead of “a difficult employee,” describe the actual performance pattern, tenure, and team dynamics. Specificity is not extra work — it is the investment that transforms generic advice into a usable solution. When sharing scenarios with AI, ensure you do not include confidential or personally identifiable information without appropriate safeguards.
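On the confidentiality point, scenarios can be screened before they leave your environment. A minimal, illustrative sketch (the function name and regex patterns are assumptions; production-grade redaction needs a vetted tool and human review):

```python
import re


def redact_pii(text):
    """Mask common PII patterns before a scenario is sent to an AI service.

    Covers only email addresses and US-style phone numbers as an
    illustration; it is NOT a complete or reliable PII scrubber.
    """
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b", "[PHONE]", text)
    return text


cleaned = redact_pii("Escalations go to jane.doe@example.com or 555-123-4567.")
```

A simple pre-send pass like this preserves the scenario's specificity (team size, constraints, timelines) while stripping the identifiers that should not leave your environment.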
Related Techniques & Frameworks
Explore complementary approaches to structured prompting
Build Your ROSES Prompt
Ground your next prompt in a concrete scenario or find the right framework for your specific task.