The long version

The Seven Frames

How Sevenframe really works. The design choices, the engineering constraints, the philosophical lineage, and the places the method still gets things wrong. For readers who want the unfiltered version.

About 20 minutes if you read carefully. Skim with the table of contents if you don't.

Chapter 1

Why seven

There is nothing sacred about the number seven. It isn't drawn from scripture, lucky numerology, or an executive's gut feel in a brainstorming session. It is the smallest set of analytical lenses we found that covers a situation without leaving predictable holes and without restating itself.

Five fails in consistent ways. Cut any single frame and you discover that a whole category of insight disappears with it. Drop the human side, and you produce plans that are technically correct and socially impossible — the kind that die in implementation. Drop the reality check, and you generate confident answers built on untested assumptions. Drop the modal frame — what could be different — and every solution stays trapped inside the constraints the user already accepted. The gap isn't theoretical. You can feel the hollow place in a five-frame report.

Ten or more fails differently. By the eighth or ninth frame, the model starts rephrasing its earlier observations in new vocabulary — what reads as additional depth is usually additional words. The marginal frame is restatement dressed as insight, and it dilutes the genuinely unique work the earlier frames did.

Six of the seven frames correspond to long-running traditions in philosophical inquiry: ontology (what exists), modal logic (what's possible), mereology (how parts relate), phenomenology (how it's experienced), philosophy of science (what explains the pattern), and epistemology (whether the reasoning holds up). Each of these traditions spent centuries developing questions its neighbors couldn't answer. They earn their seat by the work they do, not by their pedigree — but the pedigree helps, because the questions have been pressure-tested against harder cases than we could invent on our own.

The seventh frame does not belong to a philosophical tradition. It is the discipline of synthesis: taking six partial pictures and producing a single, honest, prioritized move. That is a distinct operation from the six that precede it, and it does not exist in any one school. It is its own thing.

The short answer: seven is the number that stopped the model from rhyming with itself without leaving obvious gaps. It is an empirical answer, not a mystical one. If we find a case where six is enough or eight is necessary, we'll change it. So far, seven holds.

Chapter 2

The order is the argument

You cannot shuffle the frames. The sequence is load-bearing: each frame depends on the groundwork laid by the frames before it and earns the right to its particular contribution by when it arrives.

First: what exists. You cannot reason about what could be different from a situation you haven't yet mapped. Every later step — possibility, connection, experience, pattern, test — builds on a named inventory of the thing. Most bad analysis skips this step and argues about a situation it never actually described.

Second: what could be different. Once you have a map, you ask which features of it are real constraints and which are assumptions that have hardened into constraints through habit. Doing this second is important: jump here first and your “possibilities” float free of the actual situation. Do it after the map, and they become moves you could actually make.

Third: how everything connects. Only once you have things and possibilities can you ask about relationships between them. This frame needs a full object list to work — a map of cause and effect is a map of named things, not of abstractions. Placed earlier, it produces connections between concepts nobody has defined; placed here, it produces connections between the named parts from Frame 1 and the possibilities from Frame 2.

Fourth: the human side. Structure, possibility, and connection are all third-person descriptions. The fourth frame turns to first-person experience: how the people inside this situation actually feel about it. We put this after the structural frames on purpose. Lead with human experience and you produce empathy without understanding; lead with structure and you produce understanding without empathy. Doing both in order — structure first, experience after — is how you produce a recommendation that is both correct and actionable.

Fifth: the bigger picture. After four frames of specific observation, the fifth frame pulls back and asks what kind of situation this is. What pattern is at work? What well-known dynamic explains what the earlier frames found? This frame needs everything that came before as raw material — it can't recognize a pattern without data. Put it earlier and it imposes theory on the situation; put it here and it explains the situation you've already examined.

Sixth: reality check. Only once you have an actual thesis — constructed across five frames — can you test it. The sixth frame exists to pull on the seams of the previous five. Which claims rest on solid ground? Which are inferences standing in for facts? What assumption, if wrong, would change the whole picture? A reality check placed earlier has nothing substantial to check.

Seventh: synthesis. Only at the end, with six complete and tested perspectives available, does synthesis become possible. This frame does the work no earlier frame can: it names where the frames converge, where they conflict, and what the honest next move looks like when you hold all of it at once. You cannot synthesize what hasn't been fully described yet.

The order isn't a convention. It is the argument for why the method produces a different kind of output than a single-pass analysis.

Chapter 3

Anatomy of a frame

Each of the seven frames is a system prompt sent to Claude along with your input and, for frames after the first, the results of the previous frames. The prompts are not kept secret. Here is the real directive for Frame 1 (OQCS — What Exists), condensed and annotated for readability.

You are now performing Frame 1: OQCS — What Exists. Take a thorough look at everything that makes up this subject. Your goal is to lay out the full picture — all the moving parts, the players involved, the rules (written and unwritten), and the overall structure. Start by anchoring in what the user actually told you. Before inferring anything, identify the specific facts and details they provided. Build outward from those. When you identify something the user mentioned explicitly, state it with confidence. When you go beyond what they said — inferring structure, naming forces, or identifying gaps — signal that clearly: "based on what you've described..." or "you didn't mention [X], but it likely plays a role..." Go beyond the obvious where you can, but be transparent about the difference between what you know from the input and what you're inferring. When the user hasn't provided specificity, say what you'd need to know rather than inventing plausible details. Where people disagree about what something is or how to categorize it, call that out — those disagreements usually point to the most important issues. Write your findings as natural, flowing paragraphs that help the reader see the full anatomy of the situation with fresh eyes. Produce your output in the standard four-section format: HEADLINE, KEY INSIGHTS (3-5 bullets), DETAILED ANALYSIS, and CONFIDENCE NOTE.

There are six things happening in that prompt, and each one is load-bearing.

The identity line. “You are now performing Frame 1: OQCS — What Exists.” This tells the model which frame it's in. It sounds trivial. It isn't — without it, the model drifts toward generic “analyze this situation” output. Naming the frame forces a specific analytical posture.

The scope line. “Lay out the full picture — all the moving parts, the players, the rules, the structure.” This defines what the frame is for, in the same plain language a friend would use. It is deliberately concrete: “players,” “rules,” “structure” rather than “ontological entities,” “normative constraints,” “topological relations.” The model mirrors the register it receives.

The anchoring instruction. “Start by anchoring in what the user actually told you.” This is the single most important line in the prompt. LLMs left to themselves will pattern-match to similar situations they've seen and describe those instead of the one in front of them. The instruction is a counter-weight: start with what the user said, then build outward from it.

The grounding taxonomy. “When you identify something the user mentioned explicitly, state it with confidence. When you go beyond what they said, signal that clearly.” This establishes a three-tier evidence grammar: stated, inferred, assumed. Every later instruction depends on the model maintaining that distinction.

The “say what you'd need” move. “When the user hasn't provided specificity, say what you'd need to know rather than inventing plausible details.” This is the model's permission to admit ignorance. Without it, LLMs confabulate. With it, you get honest gaps instead of confident fiction.

The output format. “HEADLINE, KEY INSIGHTS (3-5 bullets), DETAILED ANALYSIS, CONFIDENCE NOTE.” The four-section format is the subject of the next chapter.

The other six frame prompts follow the same skeleton — identity, scope, anchoring, grounding, output format — but each adds frame-specific instructions and, crucially, a line that reads roughly “do not repeat what earlier frames covered.” That discipline is what makes seven frames in sequence produce seven different contributions instead of seven rephrasings of the same observation.
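That shared skeleton is easiest to see as a template. Here is a minimal sketch of how such a directive could be assembled — the type names, field names, and `buildDirective` function are ours for illustration, not Sevenframe's actual code, and the real prompts are considerably longer:

```typescript
// Illustrative sketch of the shared frame-prompt skeleton: identity line,
// scope line, shared grounding instructions, anti-repetition line for
// frames after the first, then frame-specific extras.

interface FrameSpec {
  number: number;     // 1-7
  code: string;       // e.g. "OQCS"
  plainName: string;  // e.g. "What Exists"
  scope: string;      // the frame-specific scope line
  extras: string[];   // frame-specific instructions
}

const SHARED_LINES = [
  "Start by anchoring in what the user actually told you.",
  "When you identify something the user mentioned explicitly, state it with confidence.",
  "When you go beyond what they said, signal that clearly.",
  "When the user hasn't provided specificity, say what you'd need to know rather than inventing plausible details.",
  "Produce your output in the standard four-section format: HEADLINE, KEY INSIGHTS (3-5 bullets), DETAILED ANALYSIS, and CONFIDENCE NOTE.",
];

function buildDirective(spec: FrameSpec): string {
  const identity = `You are now performing Frame ${spec.number}: ${spec.code} — ${spec.plainName}.`;
  // Only frames after the first carry the anti-repetition discipline.
  const antiRepeat =
    spec.number > 1 ? ["Do not repeat what earlier frames covered."] : [];
  return [identity, spec.scope, ...SHARED_LINES, ...antiRepeat, ...spec.extras].join("\n");
}
```

The point of the sketch is the shape, not the wording: six prompts share one skeleton, and only the scope, extras, and frame identity vary.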

Chapter 4

The four-section format

Every frame produces output in the same four-section structure: a HEADLINE, a short list of KEY INSIGHTS, a longer DETAILED ANALYSIS, and a CONFIDENCE NOTE. The format is deliberate. Each section does work the others can't.

Headline. One sentence. A testable claim about the situation, not a summary of the analysis. A good headline takes a position: “You have more runway than you think, but less product validation than you need.” A bad headline describes: “This analysis examines the founder's financial and product situation.” Headlines are the first test of whether a frame has anything to say.

Key insights. Three to five bullet points, each a compact claim. Insights are the scannable takeaway for readers who don't have time for the full analysis. The constraint — between three and five — is important: fewer than three and the frame hasn't said enough to be useful; more than five and the insights become a bullet-point blur. Insights are not a table of contents for what the analysis will discuss. They are the analysis's conclusions, stated bare.

Detailed analysis. Prose. Several paragraphs. This is the reasoning that earns the insights. It's where the causal chains are walked, where the specific details are surfaced, where the hedges and conditions are named. The analysis is longer than the insights because nuance takes words, and because a reader who disagrees with an insight needs the reasoning to know where to push back.

Confidence note. One short paragraph explicitly naming which parts of the frame are grounded in what the user said, which are inferred from typical patterns, and which are assumptions the reader should verify. Most AI-generated content pretends to know more than it does; the confidence note is a hard-coded counter-weight. Every frame ends with one. They are a feature, not a disclaimer.

Pure prose fails because it buries the takeaway. Pure bullets fail because they skip the reasoning. A numeric confidence score fails because it implies precision the model doesn't have. The four-section format keeps all of those failure modes in check at once — and, importantly, it keeps every frame comparable to every other frame. You know exactly where to look in a frame report for the bold claim, the scannable summary, the reasoning, and the honest caveat, because every frame puts them in the same place.
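Because the format is fixed, it is also checkable. A toy validator along these lines could confirm a frame report has all four sections in order and an insight count in range — this is our sketch of what such a check might look like, not Sevenframe's real validation code:

```typescript
// Toy check that a frame's output contains the four required sections in
// order, with 3-5 key insights. Returns a list of problems (empty = OK).

const SECTIONS = ["HEADLINE", "KEY INSIGHTS", "DETAILED ANALYSIS", "CONFIDENCE NOTE"];

function validateFrameOutput(text: string): string[] {
  const problems: string[] = [];
  let cursor = 0;
  for (const section of SECTIONS) {
    const at = text.indexOf(section, cursor);
    if (at === -1) {
      problems.push(`missing or out-of-order section: ${section}`);
    } else {
      cursor = at + section.length;
    }
  }
  // Count bullet lines between KEY INSIGHTS and DETAILED ANALYSIS.
  const start = text.indexOf("KEY INSIGHTS");
  const end = text.indexOf("DETAILED ANALYSIS");
  if (start !== -1 && end > start) {
    const bullets = text
      .slice(start, end)
      .split("\n")
      .filter((line) => line.trim().startsWith("-")).length;
    if (bullets < 3 || bullets > 5) {
      problems.push(`expected 3-5 key insights, found ${bullets}`);
    }
  }
  return problems;
}
```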

Chapter 5

How the frames stay honest

Three mechanisms do most of the honesty work. Each one exists because, without it, a specific failure mode creeps in.

Evidence grounding. Every frame directive instructs the model to distinguish three kinds of claim: things the user said explicitly, things being inferred from what they said, and things being assumed from general pattern knowledge. In practice this shows up as phrases like “you mentioned,” “based on what you described, it likely,” and “if [X] is true, this suggests.” The discipline is not perfect — LLMs sometimes state an inference as a fact — but it is the difference between a report that tells you where the ground softens and one that pretends every step is bedrock.

Anti-repetition. Frames after the first receive a compressed summary of earlier frames — Frames 2 through 4 get headlines and key insights, Frames 5 through 7 get headlines only — along with an explicit instruction: “Do not revisit what earlier frames covered. Your unique job is [specific contribution].” Without that instruction, the model treats each frame as an opportunity to restate the full situation. The compression keeps enough context for coherence; the instruction keeps each frame doing new work.

Confidence notes. Every frame ends with one. This is the forcing function for honesty: the model can't finish without naming where the analysis is weakest. When the confidence note reads “this depends on [X], which you didn't specify,” that's the frame flagging its own soft ground. A reader who skips the confidence notes is giving up the most important part of the report.

There's a fourth, subtler mechanism: the voice rules. Every directive includes explicit instructions against jargon (“leverage,” “paradigm,” “stakeholders” are banned by name), against hedging vocabulary that manufactures false balance (“on one hand... on the other” without content), and for a specific register: “like a thoughtful friend giving practical guidance over coffee.” Tone calibrates honesty. The “expert oracle” voice invites false confidence; the “hedging academic” voice invites performative uncertainty; the friend register is the one that can deliver a hard truth without condescension and admit a limit without apology.

None of these mechanisms produce perfect honesty. They produce better-than-default honesty — which is the best we know how to do with a large language model right now. The confidence notes are the line of last defense. When in doubt, trust those more than the insights.

Chapter 6

Failure modes

The method is not magic. Frames fail in recognizable ways. Learning to spot the failure modes makes you a better reader of Sevenframe output. All three examples below are real patterns observed during testing, paraphrased to protect specific analyses.

Failure 1: The generic frame. A short, vague input (“I'm thinking about changing careers”) produces a frame that reads like it could have been written about anyone. Symptoms: heavy use of phrases like “many people in your situation,” “common dynamics include,” or “depending on your circumstances.” The frame isn't wrong, but it isn't about you. Diagnosis: the input didn't give the model enough to anchor in, so the analysis drifted to general knowledge. Fix: use the Go Deeper action to answer the questions the model raises, or submit a longer input with specifics.

Failure 2: The overconfident frame. A frame states something with authority that it cannot actually know from the input. For example: “Your team is experiencing burnout,” when the user never said anything about their team's emotional state. Symptoms: specific claims about things not mentioned in the input, delivered without the “based on typical patterns” hedging. Diagnosis: the model pattern-matched a common dynamic from training and projected it onto your situation. Fix: cross-reference claims with your input — if the model is describing something you didn't mention, treat it as a hypothesis, not a finding. The confidence note at the end of the frame is your best check.

Failure 3: The restatement frame. A late frame (usually Frame 5 or 6) revisits ground earlier frames already covered, often in slightly different language. Symptoms: you find yourself thinking “didn't I already read this?” while reading the analysis. Diagnosis: despite the anti-repetition instructions, the model sometimes drifts — especially when earlier frames produced strong insights that feel central. Fix: discount the restated frame and lean on the synthesis frame (Frame 7) as your backstop — even if a middle frame restates, the synthesis is forced to name novel convergences rather than just repeat. If you're seeing heavy restatement, the confidence note in the offending frame usually acknowledges it.

There are more failure modes — input that's adversarial (trying to break the frame structure), input that's too structured (already in a six-slide deck format and missing the ambiguity the frames thrive on), input that combines many unrelated questions (the frames lose focus across them). But those three are the common ones.

The meta-point is this: we don't hide that the system has failure modes. We name them so you can recognize them. A tool that claims to be right 100% of the time is either dishonest or trivial. The honest claim is that Sevenframe is usually a significantly better first-pass analysis than a single-perspective one, and that it tells you when it's reaching.

Chapter 7

Architecture honesty

A few engineering choices shape the product in ways worth naming. None of them are secrets, but a lot of AI products hide their trade-offs, and we don't intend to.

One frame at a time. When you submit an analysis, Sevenframe runs the seven frames in sequence, not in parallel, with a client-side poll kicking off each successive frame as the previous one completes. The reason is boring: we run on Vercel's Hobby plan, which caps each serverless function at 60 seconds. A single frame takes 10 to 20 seconds to complete. All seven in one call would consistently time out. Breaking the work into seven separate invocations keeps every call comfortably inside the budget.

A side effect of this architecture is that the UI can show you frames as they complete, rather than leaving you staring at a loading spinner for a minute. The engineering constraint produced a better user experience by accident. But it is still an engineering constraint.
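The sequential pattern is simple enough to sketch. The endpoint, payload shape, and function names below are hypothetical — this shows the shape of the one-call-per-frame loop, not Sevenframe's actual client code:

```typescript
// Sketch of the client-side loop: one serverless invocation per frame, each
// started after the previous one returns, with a callback that lets the UI
// render each frame as it lands.

interface FrameResult {
  frame: number;
  headline: string;
  body: string;
}

async function runAnalysis(
  input: string,
  onFrameDone: (r: FrameResult) => void,
  callFrame: (frame: number, input: string, prior: FrameResult[]) => Promise<FrameResult>,
): Promise<FrameResult[]> {
  const results: FrameResult[] = [];
  for (let frame = 1; frame <= 7; frame++) {
    // Each invocation stays well under a 60-second function cap, because a
    // single frame takes roughly 10-20 seconds to generate.
    const result = await callFrame(frame, input, results);
    results.push(result);
    onFrameDone(result); // progressive rendering falls out of the loop for free
  }
  return results;
}
```

Injecting `callFrame` rather than hard-coding a fetch keeps the sketch testable; in practice it would wrap a request to a per-frame API route.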

Sonnet over Opus. Every frame is generated by Claude Sonnet 4, not Claude Opus. Opus is marginally better at certain kinds of writing, but it's noticeably slower and more expensive. For the advisor-voice register Sevenframe uses — plain language, grounded claims, honest hedging — Sonnet produces output indistinguishable in quality from Opus at a fraction of the latency and cost. We tested both. For this specific task, Sonnet is the right choice.

Haiku for Sevenbot. The chat companion runs on Claude Haiku rather than Sonnet. Chat is a different task than frame generation: the model needs to respond quickly to keep the conversation flowing, and the responses are generally shorter and more reactive. Haiku is purpose-built for that shape of work. Using a heavier model for chat would feel laggy and cost more to operate — so we don't.

Prior-context compression. Every frame after the first receives a summary of the earlier frames. We don't pass full frame outputs forward — that would consume huge amounts of context and slow down later frames. Instead, Frames 2 through 4 receive headlines and key insights from the previous frames; Frames 5 through 7 receive headlines only. This compression is deliberate. Headlines preserve what each earlier frame concluded without forcing later frames to read and work around hundreds of words of detail. The compression loses some nuance, but the gain in focus is worth it.

Token budgets by depth. Overview analyses budget 1,500 tokens per frame; Standard 2,500; Deep Dive 6,000. These numbers are tuned, not arbitrary. Below 1,500 tokens the model starts omitting the analysis section; above 6,000 it starts padding. The three depth levels produce genuinely different quality outputs — a Deep Dive is not just a longer Overview — but all three follow the same four-section format. If you find Overview sufficient for your input, that's a good sign your input is well-structured. If you keep reaching for Deep Dive, that's also useful signal.

None of these choices are permanent. If infrastructure or models change, the choices might change with them. The point of naming them here isn't to defend them forever — it's to make the trade-offs visible to anyone reading this page.

Chapter 8

Provenance

Six of the seven frames echo long-running traditions in philosophical thought. The lineage isn't a credential — the frames earn their place by the work they do, not by who thought of them first — but naming the sources is useful for readers who want to go further, and it clarifies what each frame is and isn't. None of the attributions below are claims of strict derivation; the frames are remixes, simplified for practical use. Use these names as starting points for your own reading.

OQCS — What Exists

Ontology · Aristotle (Categories, Metaphysics); W. V. O. Quine (“On What There Is”)

Ontology asks what kinds of things exist and how they fit into categories. Aristotle's basic move — distinguishing substance from accident, kind from instance — remains foundational. Quine sharpened the modern question: “to be is to be the value of a bound variable” — we commit to an ontology whenever we state what something is. OQCS is the practical descendant: before you analyze anything, name what you're actually talking about.

OMM — What Could Be Different

Modal logic · Saul Kripke (Naming and Necessity); David Lewis (possible worlds semantics)

Modal logic is the formal study of necessity and possibility. Kripke's possible-worlds framework — reasoning about a claim by considering how the world would have to differ for it to be false — is the closest ancestor of OMM. Lewis's modal realism pushed the idea further. The frame lifts the question from formal logic into practical decision-making: which features of your situation are genuinely fixed, and which only feel that way?

OMP — How Everything Connects

Mereology & systems theory · Stanisław Leśniewski (mereology); Ludwig von Bertalanffy (general systems theory); Donella Meadows (Thinking in Systems)

Mereology is the formal study of parts and wholes. Systems theory overlaps it, adding dynamics — feedback loops, stocks and flows, leverage points. Meadows' work on systems thinking is the most accessible practical application. OMP inherits both lineages: it asks how the parts you identified in OQCS affect one another, where the high-leverage points are, and where the system will resist your interventions.

EM — The Human Side

Phenomenology · Edmund Husserl (Ideas, Logical Investigations); Maurice Merleau-Ponty (Phenomenology of Perception)

Phenomenology is the philosophical study of first-person experience — what things feel like from the inside. Husserl founded it; Merleau-Ponty extended it to the body and perception. The frame takes the phenomenological turn seriously without the technical vocabulary: every analysis has to account for how real people experience the situation, or it produces plans that fail on contact with human beings.

TM — The Bigger Picture

Philosophy of science; meta-theory · Thomas Kuhn (The Structure of Scientific Revolutions); Imre Lakatos (research programmes); Karl Popper (Conjectures and Refutations)

The philosophy of science studies how explanatory frameworks rise and fall. Kuhn's paradigms, Lakatos's research programmes, Popper's falsifiability — all offer different answers to the question “what makes a theory valid?” TM doesn't resolve that debate; it invokes the spirit of it. The frame asks: what kind of situation is this, and which known pattern explains why it works the way it does?

TA — Reality Check

Epistemology · Karl Popper (falsifiability); W. V. O. Quine (Two Dogmas of Empiricism); Susan Haack (foundherentism)

Epistemology is the theory of knowledge — how we know what we know and how confident we're entitled to be. Popper's insistence that a theory earns its keep by being falsifiable, Quine's attack on the strict analytic/synthetic divide, Haack's middle path between foundationalism and coherentism — all inform the frame's stance: every claim has a supporting structure, and the supporting structure can be weaker than the claim suggests.

GMS — Bringing It All Together

No single source · Hans-Georg Gadamer (hermeneutic circle); Aristotle (phronesis, practical wisdom); contemporary decision theory

Synthesis is the one frame without a single philosophical home. It draws on the hermeneutic tradition — Gadamer's circle, in which understanding the whole requires understanding the parts, and vice versa — and on Aristotle's concept of phronesis, practical wisdom in action. Contemporary decision theory contributes the discipline of committing to a choice under uncertainty. None of these alone is synthesis; together they gesture at it.

Chapter 9

Glossary

Short definitions of the terms Sevenframe uses in a specific way. Where a term is commonly misused, we've noted it.

Frame
One of seven structured analytical perspectives: OQCS, OMM, OMP, EM, TM, TA, GMS. A frame is not a category or topic — it's a posture the analysis takes, like a camera angle. The same subject looks different from each frame. (Misused as: synonym for “feature” or “section.” A frame is a perspective, not a bucket.)
OQCS
Frame 1. Plain-English name: What Exists. Asks what the situation is actually made of — the players, the rules, the structures. The foundational frame; every later frame builds on its inventory.
OMM
Frame 2. Plain-English name: What Could Be Different. Separates what is genuinely fixed from what only feels fixed. The frame of possibility — but always grounded in the situation OQCS established.
OMP
Frame 3. Plain-English name: How Everything Connects. Traces cause-and-effect, feedback loops, and leverage points between the parts OQCS named. (Not to be confused with a stakeholder map — OMP is about dynamics, not static roles.)
EM
Frame 4. Plain-English name: The Human Side. First-person experience: motivations, fears, identity dynamics, emotional labor. The frame where most strategies quietly fail.
TM
Frame 5. Plain-English name: The Bigger Picture. Identifies the known pattern or dynamic that explains the situation. Not generic wisdom — specific patterns with names.
TA
Frame 6. Plain-English name: Reality Check. Stress-tests the earlier frames. The frame that pulls on seams. (Not pessimism — TA aims to strengthen the plan by finding what would break it.)
GMS
Frame 7. Plain-English name: Bringing It All Together. Synthesis — naming where frames converge, where they clash, and what the honest next move is. The only frame whose job is to produce a decision.
Confidence note
The mandatory closing paragraph of every frame. Names which parts rest on the user's stated facts, which are inferred, and which are assumptions that would change the analysis if wrong. Readers who skip these are giving up the most honest part of the report.
Go Deeper
An action you can take on any completed frame. Sevenframe generates 3–5 clarifying questions targeting the frame's weakest points. After you answer, the frame is re-run with your answers as additional context, producing a meaningfully deeper output. Costs 1 credit.
Cross-Domain
An action that translates your completed analysis into a different domain — e.g., taking a business analysis and re-framing it through a philosophical or creative lens. Surfaces parallel structures that single-domain thinking misses. Costs 2 credits.
Roadmap
A phased execution plan auto-generated from your analysis. Always free and always included — produced automatically when the seventh frame completes.
Sevenbot
A session-based chat companion available from the dashboard. Aware of your recent analyses. Useful for exploring what a frame meant, stress-testing a recommendation, or thinking out loud. 2 credits per session.