Community Framework

BORE Framework

Four components that lead with history and end with accountability. BORE distinguishes itself by starting with Background — the deeper context behind a request — and closing with Expectation — measurable criteria that define what success actually looks like.

Framework Context: 2024

Introduced: BORE gained traction in 2024 within professional and business AI communities as a concise framework optimized for workplace communication. The acronym stands for Background, Objective, Role, and Expectation — four components that mirror how effective business briefs are structured. BORE’s distinguishing feature is its emphasis on Background over generic “context” — encouraging users to provide the history, situation, and circumstances that shaped the current need, rather than just a surface-level description of what is happening now. This deeper contextual grounding helps the AI produce responses that account for organizational history and strategic nuance.

Modern LLM Status: BORE remains practical and well-suited for professional environments where prompts need to be efficient but thorough. Its four-component structure is lean enough for daily use while covering the dimensions that matter most in business communication. Whether you use Claude, GPT-4, or Gemini, providing historical background, a clear objective, an appropriate role, and measurable expectations consistently produces output that is more aligned with real-world professional needs than unstructured requests. BORE is especially effective in enterprise contexts where the AI needs to understand not just what you want, but why you want it and how you will measure its value.

The Core Insight

Background Is Not Context — It Is Deeper

Many frameworks ask for “context,” which users typically interpret as a brief description of the current situation. But the most effective prompts go further — they provide the history, the decisions that led to this moment, the constraints inherited from past choices, and the organizational dynamics that shape what is possible. BORE’s Background component explicitly asks for this deeper layer.

BORE treats every prompt as a mini business case. Background provides the narrative: what happened before, what constraints exist, and why this task matters now. Objective defines the specific deliverable. Role assigns the expertise and perspective the AI should bring. Expectation sets the measurable bar — how the output will be evaluated, what format it needs, and what success looks like in concrete terms.
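The four-part assembly described above can be sketched as a small prompt-builder. This is an illustrative sketch only, not part of the BORE framework itself; the function name, section labels, and validation behavior are our own assumptions.

```python
def bore_prompt(background: str, objective: str, role: str, expectation: str) -> str:
    """Assemble a BORE-structured prompt from its four components.

    Hypothetical helper for illustration: raises if any component is
    left empty, since BORE treats all four as required.
    """
    sections = [
        ("Background", background),
        ("Objective", objective),
        ("Role", role),
        ("Expectation", expectation),
    ]
    missing = [label for label, text in sections if not text.strip()]
    if missing:
        raise ValueError(f"Missing BORE component(s): {', '.join(missing)}")
    # Join labeled sections with blank lines, mirroring a written business brief.
    return "\n\n".join(f"{label}: {text.strip()}" for label, text in sections)
```

A builder like this is mainly useful for teams that template their prompts; for one-off prompts, writing the four labeled sections by hand works just as well.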

Think of it like the difference between telling a consultant “we need a marketing plan” versus “we tried content marketing for two years with declining engagement, pivoted to events last quarter with 40% better lead quality, and now need a plan that builds on the events momentum while reactivating our content pipeline.” The second version — with real background — produces dramatically better recommendations.

Why Background Changes the Output

Without background, an AI generates advice for a generic version of your situation. With background, it generates advice for your specific situation. A marketing plan for a company that just pivoted from content to events looks completely different from one for a startup with no marketing history. The Background component forces you to provide the narrative that separates your request from every similar request the AI has ever seen — and that specificity is what produces genuinely useful output.

The BORE Process

Four components that ground prompts in history and measurable outcomes

1. Background — Provide the History


Go beyond the current situation to include the decisions, events, and constraints that led to this moment. Background answers “why now?” and “what has already been tried?” It includes organizational context, previous approaches, relevant data points, and inherited constraints. The richer the background, the more precisely the AI can tailor its response to your actual circumstances.

Example

“Our 3-year-old B2B SaaS company has 200 customers and $4M ARR. We grew 100% last year through outbound sales, but our CAC has doubled in the last two quarters. Our board is pushing for more efficient growth channels. We tried content marketing in Year 1 but abandoned it after 6 months with minimal results. Our engineering team just shipped a self-serve onboarding flow.”

2. Objective — State the Goal

Define the specific outcome you need. The objective should be concrete enough to evaluate — vague objectives produce vague responses. Focus on what you need to decide, deliver, or accomplish. Include any constraints on scope, timeline, or resources that bound the objective.

Example

“Design a product-led growth strategy that leverages our new self-serve onboarding to reduce CAC by 30% within two quarters while maintaining our current growth rate.”

3. Role — Assign the Expertise

Specify who the AI should be for this task. The role determines the vocabulary, analytical framework, and problem-solving approach the AI uses. In business contexts, roles are most effective when they combine domain expertise with a specific perspective or methodology — not just “marketing expert” but “VP of Growth who has scaled three SaaS companies through PLG transitions.”

Example

“Act as a VP of Growth who has successfully transitioned two B2B SaaS companies from sales-led to product-led growth. You favor data-driven experimentation over big-bet strategies, and you have experience managing board expectations during growth model transitions.”

4. Expectation — Define Measurable Success

Declare how the output will be evaluated and what form it should take. Expectations should be specific and, where possible, measurable. Include format requirements, audience specifications, quality benchmarks, and any metrics or criteria that define “good enough.” This component prevents the AI from delivering impressive-sounding output that does not actually serve your needs.

Example

“Deliver a 90-day PLG roadmap in phases (30/60/90 days). Each phase needs: key initiatives, success metrics with specific targets, required resources, and risks. Format as a structured document suitable for a board presentation. Include a section addressing why content marketing failed previously and how PLG differs. All projections should be clearly labeled as estimates requiring validation against actual company data.”
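The four steps above can also be used as a pre-flight checklist: before sending a labeled draft prompt, flag which BORE components are missing. The label-detection heuristic below is our own illustration, not part of the framework; it accepts both full labels ("Background:") and the shorthand ("B:") used later in this guide.

```python
import re

# Map each BORE component to a pattern matching its label at line start.
# Accepts both the full label ("Background:") and the shorthand ("B:").
LABELS = {
    "Background": r"^(background|b)\s*:",
    "Objective": r"^(objective|o)\s*:",
    "Role": r"^(role|r)\s*:",
    "Expectation": r"^(expectation|e)\s*:",
}

def missing_components(prompt: str) -> list[str]:
    """Return the BORE components not found in a labeled draft prompt."""
    lines = [line.strip() for line in prompt.lower().splitlines()]
    found = {
        name
        for name, pattern in LABELS.items()
        if any(re.match(pattern, line) for line in lines)
    }
    return [name for name in LABELS if name not in found]
```

For example, a draft containing only `B:`, `O:`, and `R:` lines would be flagged as missing Expectation, which is the component most often skipped.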

See the Difference

Why historical background produces more actionable strategies than surface-level context

Casual Prompt

Prompt

Help me create a hiring plan for my engineering team. We need to grow fast.

Response

Here is a hiring plan for your engineering team: 1) Define role requirements, 2) Post on LinkedIn and job boards, 3) Implement a structured interview process, 4) Offer competitive salaries, 5) Onboard effectively. Consider hiring a recruiter if you need to scale quickly.

Generic advice, no awareness of team size, budget, timeline, or past challenges
VS

BORE Prompt

Structured Prompt

B: 18-person engineering team (12 backend, 4 frontend, 2 DevOps). Lost 3 senior engineers to FAANG companies in the last 6 months. Two open reqs have been unfilled for 90+ days. Located in Austin, competing against remote-first companies offering SF salaries. Our technical interview pass rate is 8%, suggesting our bar may be miscalibrated.
O: Create a hiring strategy to fill 6 engineering positions (3 senior, 2 mid, 1 DevOps) within the next quarter.
R: You are a VP of Engineering who has scaled engineering teams from 20 to 80 people at two startups. You understand that hiring velocity and quality are often in tension.
E: Provide a phased plan (weeks 1–4, 5–8, 9–12) with specific actions, owner responsibilities, and success metrics per phase. Address the attrition problem and interview pass rate. Include a budget framework. Format for presentation to the CEO. All salary benchmarks and market data should be verified against current compensation surveys before making offers.

Response

Q2 Engineering Hiring Strategy — From Crisis to Pipeline

Phase 1 (Weeks 1–4): Stop the Bleeding
• Conduct stay interviews with remaining senior engineers to identify retention risks (Owner: Engineering Manager)
• Recalibrate interview process: analyze the 8% pass rate against industry benchmarks (15–25% typical). Likely issue: take-home assignment is filtering out employed senior candidates who cannot commit 6 hours...

Phase 3 (Weeks 9–12): Sustained Pipeline
Success Metrics: 6 offers extended, 4+ accepted, time-to-offer under 21 days...

Note: Salary benchmarks referenced are estimates. Verify against current Levels.fyi, Glassdoor, and Radford data for Austin market before finalizing offers.

History-aware, addresses root causes, phased with metrics, board-ready

Natural Language Works Too

While structured frameworks and contextual labels are powerful tools, LLMs are exceptionally good at understanding natural language. As long as your prompt contains the actual contextual information needed to create, answer, or deliver the response you’re looking for — the who, what, why, and constraints — the AI can produce complete and accurate results whether you use a formal framework or plain conversational language. But even in 2026, with the best prompts, verifying AI output is always a necessary step.

BORE in Action

See all four components working together across different business scenarios

BORE Prompt

Background: Our 300-person company currently uses Salesforce for CRM but our contract expires in 90 days. Sales team adoption is at 45% (down from 70% two years ago). The main complaints are complexity and slow performance. We evaluated HubSpot last year but rejected it due to limited custom reporting. Our data team has since built a standalone analytics layer that handles most reporting needs. Annual Salesforce spend is $180K.

Objective: Produce a vendor comparison and recommendation for our CRM renewal decision: renew Salesforce, migrate to HubSpot, or evaluate a third option.

Role: You are a CTO who has overseen three CRM migrations at mid-size companies. You evaluate tools based on total cost of ownership, adoption risk, and integration complexity rather than feature checklists.

Expectation: Decision matrix comparing 3 options across 6 dimensions: cost, adoption risk, integration effort, feature fit, migration complexity, and long-term scalability. Include a clear recommendation with rationale. Under 600 words. Format for a leadership team meeting. All pricing estimates should be verified against current vendor quotes before making decisions.

AI Output

CRM Decision Brief: Renew, Migrate, or Explore

Decision Matrix (1–5 scale, 5 = best)
| Dimension | Salesforce (Renew) | HubSpot | Attio (Alternative) |
| Cost | 2 ($180K/yr) | 4 (~$65K/yr est.) | 3 (~$95K/yr est.) |
| Adoption Risk | 2 (45% current) | 4 (simpler UX) | 3 (new tool learning curve) |...

Recommendation: Migrate to HubSpot
The reporting objection that killed HubSpot last year is no longer valid — your standalone analytics layer handles the custom reporting that HubSpot lacks natively. With adoption at 45% and declining, the real cost of Salesforce includes the revenue lost to a sales team that avoids its own CRM...

Note: All pricing figures are estimates based on typical mid-market contracts. Request formal quotes from each vendor before presenting final numbers to leadership.

BORE Prompt

Background: Q3 revenue came in at $2.1M, 15% below the $2.5M target we presented to the board last quarter. The miss was driven by two large enterprise deals that slipped to Q4 (combined $400K). Churn improved from 8% to 5.5%. We hired 8 people (6 engineering, 2 sales) and our runway is 14 months at current burn. The board includes two investors who are focused on path to profitability and one independent director who prioritizes growth.

Objective: Prepare talking points and a Q&A prep document for the CEO ahead of next week’s board meeting.

Role: You are a chief of staff at a Series B startup who specializes in board communication and investor relations. You know how to frame disappointing numbers honestly while maintaining board confidence.

Expectation: Produce: (1) 5 key talking points for the CEO’s opening remarks, (2) anticipated tough questions with recommended responses, (3) a “win narrative” that honestly contextualizes the revenue miss. Under 500 words. Tone should be transparent and forward-looking, not defensive. All financial figures should be verified against actual financial statements before the board meeting.

AI Output

Board Meeting Prep — Q3 Review

Opening Talking Points
1. “Revenue came in at $2.1M against a $2.5M target. I want to be direct about what happened and why I am confident about Q4.”
2. “The $400K gap is two specific enterprise deals that have signed LOIs and are in legal review. These are delayed, not lost.”
3. “Churn improved from 8% to 5.5% — our best quarter ever. Net revenue retention is now 112%.”...

Tough Questions
Q: “Are we burning too fast for this growth rate?”
A: “At current burn, we have 14 months of runway. If both Q4 deals close as expected, that extends to 16+ months...”

Note: Verify all financial figures, runway calculations, and deal status against current FP&A reports and CRM data before the board meeting.

BORE Prompt

Background: We launched our mobile app 6 weeks ago after 8 months of development. Pre-launch waitlist had 12,000 sign-ups. First-week downloads: 8,200. Current DAU: 1,100 (down from 3,400 in week 1). App store rating: 3.2 stars (target was 4.0+). Top complaint in reviews: “onboarding is confusing” (mentioned in 40% of 1–2 star reviews). We spent $45K on launch marketing. Our competitors’ apps average 4.3 stars.

Objective: Write a candid post-launch analysis that identifies what went wrong, what went right, and recommends next steps for the next 30 days.

Role: You are a VP of Product at a consumer tech company who has launched and iterated on 5 mobile apps. You believe in data-driven post-mortems and are known for honest assessments that teams trust.

Expectation: Structure as: What Worked, What Did Not, Root Cause Analysis, 30-Day Action Plan with priorities. Include specific metrics targets for the next 30 days. Under 700 words. Audience is the executive team. All metrics cited should be verified against actual analytics dashboards before circulating the report.

AI Output

Mobile App Launch Post-Analysis — Week 6

What Worked
• Pre-launch waitlist conversion: 68% of waitlist (8,200/12,000) downloaded in week 1 — strong demand signal
• Core functionality performs well: Users who complete onboarding show 4.1-star satisfaction in in-app surveys...

Root Cause
The 67% DAU decline (3,400 to 1,100) is primarily an onboarding problem, not a product problem. Users who get through onboarding retain well. The funnel data tells the story: 8,200 downloads → 5,100 completed registration → 2,800 started onboarding → 1,600 completed it. We are losing 69% of registered users before they experience the core product...

30-Day Targets
• Onboarding completion: 40% → 65%
• App store rating: 3.2 → 3.8
• DAU: 1,100 → 2,000...

Note: All funnel metrics and conversion rates should be verified against actual analytics data. Targets are based on industry benchmarks and may need adjustment based on your specific user demographics.

When to Use BORE

Best for business tasks where historical context and measurable outcomes define success

Perfect For

Strategic Business Planning

Growth strategies, go-to-market plans, and organizational decisions where past attempts and current constraints shape the right approach.

Executive Communication

Board presentations, investor updates, and leadership briefs where the audience expects both historical context and forward-looking metrics.

Post-Mortem Analysis

Launch reviews, incident analyses, and project retrospectives where understanding what happened before is essential to recommending what happens next.

Vendor and Technology Decisions

Tool evaluations, build-vs-buy analyses, and migration decisions where previous experiences and current constraints are critical inputs.

Skip It When

Greenfield Projects

Tasks with no relevant history — new product ideation, brainstorming, or exploratory research where there is no background to provide.

Creative or Tone-Sensitive Tasks

Marketing copy, brand voice content, or audience-specific communication where tone and style matter more than historical context. Use CO-STAR instead.

Quick Operational Tasks

Simple formatting, straightforward lookups, or routine tasks where the overhead of providing detailed background does not add proportional value.

Use Cases

Where BORE delivers the most value

Financial Strategy

Build budget proposals, fundraising strategies, and financial models where historical performance data and inherited constraints drive every recommendation.

Organizational Change

Design restructuring plans, role transitions, and process changes where understanding what was tried before prevents repeating past mistakes.

Product Strategy

Develop roadmaps, pivot analyses, and feature prioritization frameworks where product history and market evolution inform every strategic choice.

Investor Relations

Prepare pitch materials, quarterly updates, and investor memos where historical performance narrative and forward projections must be tightly integrated.

Risk Assessment

Evaluate operational, market, and technical risks where historical incident data and organizational vulnerability history are essential inputs.

Competitive Intelligence

Produce market analyses and competitive briefs where your company’s positioning history and previous competitive moves contextualize current strategy.

Where BORE Fits

BORE bridges surface-level context and comprehensive strategic briefing

Zero-Shot (Raw Instructions): Single request, no structure
CARE (Outcome Focused): Context, role, and success criteria
BORE (History Grounded): Deep background context with measurable expectations
CO-STAR (Audience Centered): Full communication brief with six dimensions
BORE for Business, CO-STAR for Communication

BORE and CO-STAR occupy different niches despite similar complexity. BORE excels when the task is analytical or strategic — business decisions, post-mortems, vendor evaluations — where historical context drives the quality of recommendations. CO-STAR excels when the task is communicative — marketing copy, stakeholder emails, presentations — where audience, tone, and style precision matter most. Choose based on whether your output needs to be strategically grounded or communicatively precise. And always verify the AI’s output against actual data before using it in decisions.

Build Your BORE Prompt

Structure your next business prompt with all four BORE components or find the right framework for your specific task.