Context Is the Moat You're Not Building
It's a Thursday afternoon in Q1. You're a CIO at a $600M regional carrier. Your claims AI pilot just got its quarterly review. Processing times are down 22% in the test environment. But the adjusters aren't using it.
Your VP of Claims doesn't sugarcoat it: "The system doesn't know how we work. It doesn't know our litigation thresholds in Florida, or that Maria in subrogation has a recovery playbook nobody's written down."
The vendor's model is fine. The problem is everything the model doesn't know.
The real advantage isn't data
Every carrier executive has said this in a board meeting: "Our data is our competitive advantage." It's half the picture.
Your data lives in systems. Policy records, claims histories, loss ratios. Structured and queryable. Every AI vendor in your inbox can plug into it. Your competitor down the road has the same kind of data in the same kind of systems.
What they don't have is your context.
Context is the institutional knowledge that makes your data meaningful. Your senior adjuster looks at a claim and knows in 30 seconds it's headed for litigation. She's seen the pattern 400 times. She recognizes the attorney, the injury type, the jurisdiction. Your best underwriter carries guidelines in her head about which risks look clean on paper but perform terribly in certain states. None of that is in a system.
That context is your competitive advantage. None of it is accessible to the AI tools you're investing in. Pilot after pilot succeeds in the sandbox and fails in production. Production runs on context that never made it into any system. The cost is a compounding tax on every AI initiative you attempt.
Here's what we learned.
After nearly a decade of building and operating AI inside an insurance carrier, we've learned that the gap between a good demo and working production AI is the context layer underneath the model.
1. Your process documentation is fiction
The SOP says claims triage takes three steps. In practice, adjusters coordinate through Slack DMs, check with supervisors informally, and route claims based on rules that exist nowhere in writing.
These shadow workflows are invisible in your transactional data. Your claims system shows "claim assigned to adjuster" as a single event. It doesn't show the 20-message Slack thread that preceded it. The adjuster asked three colleagues for input, checked a coverage edge case outside the manual, and made a judgment call based on a threshold nobody has documented.
If you're evaluating AI tools by feeding them your documented processes, you're building on a foundation that doesn't match reality. The first step is mapping what actually happens.
2. Your most valuable knowledge walks out the door every night
In every department at every carrier, one or two people are the actual operating system. They're in every Slack thread. CC'd on every escalation. When they leave, the process breaks.
Ask your underwriting team how many policies flagged for manual review get approved without any change. At most carriers, the number is higher than anyone expects. The automated rules kick them out. A human applies context the system doesn't have and waves them through. That judgment was never captured.
Every month you wait, that knowledge stays locked in people who won't work for you forever.
3. Context compounds. Its absence compounds faster.
Structuring operational context creates a compounding advantage. Each workflow you map makes the next one faster. Each decision pattern you capture makes AI more accurate across adjacent processes.
The carrier that structures its claims context finds that much of it carries over to underwriting. The same parties, policies, and regulatory frameworks show up across departments. The knowledge that seems siloed turns out to share a common foundation — and the carrier-specific layer on top is smaller than most executives assume.
The absence compounds in the other direction. Every AI project that starts from zero costs more than the last one. Industry research consistently shows new insurance professionals need 12 months or more to develop the judgment their tenured colleagues carry — and that knowledge transfer gap only widens as experienced staff retire. Vendor engagements keep reinventing the wheel.
Ask your team how much of your last AI engagement was spent on "understanding the business" versus actually building. If the answer is more than 40%, you have a context problem.
4. The answer is a knowledge layer
More powerful models and more expensive consultants don't solve this. A bigger model still doesn't know your Florida litigation thresholds. A consultant maps your processes for six months, leaves a deck, and the knowledge starts decaying the day they walk out.
What works is a structured knowledge layer. A persistent map of how your organization actually operates. Built from observation, not documentation. Maintained as a living system, not a point-in-time report.
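To make the idea concrete, here is a minimal sketch of what one entry in such a knowledge layer might look like. Everything here is illustrative: the field names, the `DecisionPattern` class, and the Florida routing rule are hypothetical examples, not a real schema or a real carrier's rule.

```python
from dataclasses import dataclass

@dataclass
class DecisionPattern:
    """One captured piece of institutional knowledge (illustrative only)."""
    domain: str        # operational area, e.g. "claims_triage"
    trigger: dict      # observed conditions that activate the pattern
    action: str        # what the experienced employee actually does
    source: str        # where the pattern was observed, not where it was documented
    documented: bool = False  # does it appear in any official SOP?

# A rule that exists nowhere in writing: certain Florida bodily-injury
# claims with a known plaintiff's attorney go straight to the litigation desk.
pattern = DecisionPattern(
    domain="claims_triage",
    trigger={"state": "FL", "injury_type": "bodily_injury", "attorney_known": True},
    action="route_to_litigation_desk",
    source="observed in adjuster Slack thread",
)

def undocumented_rules(patterns, state):
    """The point of structuring context: it becomes queryable.
    Here, surface every captured rule for a state that no SOP contains."""
    return [p for p in patterns if not p.documented
            and p.trigger.get("state") == state]
```

The value is not the data structure itself but what it makes possible: once judgment calls are captured as records rather than tribal knowledge, they can be reviewed, maintained, and fed to AI systems instead of walking out the door with the person who held them.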
We've been building this at Dearborn Labs — a context layer that captures what's common across insurance operations and adapts the part that makes your operation yours.
The next time an AI vendor tells you their model is the differentiator, ask where the context comes from. If they say "your data," that's not enough. Your data is what every vendor plugs into.
So what changes
The first question in carrier boardrooms should be: "Have we structured the operational context that makes AI actually work here?"
Data is infrastructure; every carrier has it. Context is the differentiator. The institutional knowledge, the decision patterns, the real workflows your people follow. Right now, at most carriers, that context is locked in senior employees' heads and scattered across Slack threads and undocumented workarounds.
The carriers who treat context as a strategic asset build advantages that are hard to replicate, because institutional knowledge is proprietary in a way technology never is.
The ones who don't will keep paying the context tax. And they'll keep wondering why the demos that work in the conference room don't work in production.
// Key Questions
Why do insurance AI pilots succeed in testing but fail in production?
Insurance AI pilots fail in production because the sandbox doesn't reflect how work actually happens. Production runs on context that never made it into any system — the state-specific litigation thresholds a senior adjuster holds in her head, the subrogation playbook nobody documented, the underwriting overrides that happen via Slack DM. The model is usually fine. The gap is everything the model doesn't know about how your specific carrier operates. That operational context is the missing layer between a working demo and a working production system.
What is a context layer in enterprise AI?
A context layer is a structured, persistent map of how an organization actually operates — capturing shadow workflows, decision patterns, institutional knowledge, and the unwritten rules that drive real outcomes. Unlike process documentation or data warehouses, a context layer is built from observation rather than documentation, and maintained as a living system rather than a point-in-time report. It's what translates a generic AI model into one that understands your Florida litigation thresholds, your underwriting deviations, and the judgment calls your senior staff make every day.
Why is data alone not enough of a competitive advantage for AI?
Every carrier has structured data in claims systems, policy admin systems, and data warehouses. Every AI vendor can plug into it. Your competitor down the road has the same kind of data in the same kind of systems. What differentiates carriers is the institutional knowledge that makes that data meaningful — the patterns senior employees recognize, the decision frameworks that exist nowhere in writing, the operational judgment developed over decades. Data is infrastructure; context is the differentiator.
What are shadow workflows in insurance operations?
Shadow workflows are the undocumented processes that run beneath official SOPs — coordination through Slack DMs, informal supervisor check-ins, routing decisions based on rules that exist nowhere in writing. Your claims system shows "claim assigned to adjuster" as a single event, but that event was often preceded by a 20-message thread, three colleague consultations, and a judgment call based on a threshold nobody has documented. These workflows are invisible in transactional data but drive the majority of real operational outcomes, which is why AI tools trained only on documented processes fail in production.
How does institutional knowledge decay in insurance carriers?
Institutional knowledge decays through attrition. In every department at every carrier, one or two people are the actual operating system — in every Slack thread, CC'd on every escalation, applying context the formal system doesn't capture. When they leave, the process breaks. Industry research shows new insurance professionals need 12 months or more to develop the judgment their tenured colleagues carry, and the knowledge transfer gap widens as experienced staff retire. Every month a carrier waits to structure this knowledge, more of it walks out the door permanently.
How can insurance carriers measure whether they have a context problem?
A practical diagnostic: ask how much of your last AI engagement was spent on "understanding the business" versus actually building. If the answer is more than 40%, you have a context problem. Another signal is how many policies flagged for automated manual review get approved without any change — at most carriers this number is higher than expected, which means humans are routinely applying context the system lacks, and that judgment isn't being captured. A third signal is how often AI pilots succeed in testing and fail in production. All three point to the same underlying issue: the context layer underneath the model doesn't exist yet.