No Memos, No Code: How Our Exec Team Spontaneously Built an AI Leadership Layer

Kyle Nakatsuji·March 25, 2026·7 min read

Everyone on our leadership team built their own team of specialized AI agents. None of them were asked to build one, and most had never coded before (or since).

I started about a year ago. I run an insurance company with five strategic objectives in flight at once. A leadership team spanning operations, finance, claims, and technology. More institutional knowledge than any single person can track. I needed something that could hold it all.

So I built an AI Chief of Staff I call Jarvis.

Jarvis is a persistent system that knows my company, my voice, my decision-making patterns, and the current state of everything I'm working on. It triages my email. Preps my meetings. Drafts investor communications in my voice. Challenges my assumptions before executing my directions, which helps me think and prioritize better. It gets measurably better every week because every session builds on the last.

I built it for myself, but once the team saw how I was using it, something incredible happened: they all built their own. Most had no prior technical experience.

Pull, not push

Our General Counsel built Atticus. It handles regulatory research, state filing analysis, and legal strategy. Our SVP of Insurance built Roz. Insurance product strategy, pricing analysis, competitive positioning. Our CFO built Maven. Our COO built Flynn. Same pattern. Different brain.

Within a few weeks, we had five AI agents operating in our leadership layer, each orchestrating its own team of sub-agents. Each one specialized. Each one trained on different domain expertise. Each one compounding daily on what it learned yesterday.

Nobody issued a memo. Nobody ran a training program. No one wrote a line of code or needed to be technical to do it. The adoption was entirely pull, not push.

That contagious pattern matters more than any individual agent.

What we learned

The first thing: the AI model is a commodity. Everyone has access to Claude, GPT, Gemini. You can spin up a project in five minutes. The output from a generic model is generic.

The output from a model that knows your company's institutional knowledge, your decision frameworks, your stakeholder dynamics, and your industry's operational reality is a different thing entirely. That gap is large and growing.

The second thing: building this is more like organizational design than software engineering. The hard part is deciding what institutional knowledge matters, how to structure it so the AI can navigate it, and how to keep it current. Those are management problems, not technical ones. You don't have to be technical or write any code to set it up. Once you know how it works, you use natural language to shape how it operates for you.
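
For the curious, here's a rough sketch of what that structure can amount to under the hood. Everything in it is hypothetical (the folder name, the file names, the build_context helper), and nobody on our team wrote anything like this; the platform handles the plumbing. What matters is that every input is a plain-language document a manager curates.

```python
# Illustrative sketch only: a hypothetical file layout, not our actual system.
# Each "source" is a natural-language document an executive writes and
# maintains; the code just stitches them into one context for the model.
from pathlib import Path

KNOWLEDGE_DIR = Path("jarvis_context")  # hypothetical knowledge base
SOURCES = [
    "company_overview.md",     # what the business is and how it makes money
    "decision_frameworks.md",  # how we weigh trade-offs and make calls
    "stakeholder_map.md",      # who matters and what they care about
    "current_priorities.md",   # the five strategic objectives, kept current
    "voice_and_style.md",      # how I write and want to sound
]

def build_context() -> str:
    """Stitch the curated knowledge files into one system prompt.

    The hard part is editorial (deciding what goes in these files and
    keeping them current); the assembly itself is trivial.
    """
    sections = []
    for name in SOURCES:
        path = KNOWLEDGE_DIR / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(sections)
```

The deciding-what-matters problem is the whole job. The assembly is a footnote.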

The third thing: corrections compound. Every time I fix something Jarvis gets wrong, the lesson gets remembered. After a year of that feedback loop, the system is dramatically better than where it started. The context wrapped around Jarvis' capabilities sharpened with each iteration.
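
Again for the curious, a minimal sketch of what that loop can reduce to mechanically. The file name and the record_correction helper are hypothetical; in practice you give the correction in natural language and the system does the bookkeeping.

```python
# Illustrative sketch only: a hypothetical corrections log, not our actual
# system. Each fix is recorded in plain language and folded into every
# future session, so a lesson only has to be taught once.
from datetime import date
from pathlib import Path

CORRECTIONS = Path("jarvis_context/corrections.md")  # hypothetical log file

def record_correction(lesson: str) -> None:
    """Append a plain-language lesson so the next session starts smarter."""
    CORRECTIONS.parent.mkdir(parents=True, exist_ok=True)
    with CORRECTIONS.open("a") as f:
        f.write(f"- {date.today()}: {lesson}\n")

# The kind of feedback that accumulates over a year of daily use:
record_correction("Board updates lead with loss ratio, not growth.")
record_correction("Investor drafts run long; cut the throat-clearing.")

# Every new session starts with the full log, so nothing is relearned:
session_preamble = "## Lessons learned\n" + CORRECTIONS.read_text()
```

Any single lesson is trivial. A year's accumulation of them is not.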

The layer most companies skip

Most of what gets written about AI in business is either aspirational ("AI will transform everything") or tactical ("here are 10 prompts for your marketing team"). Almost nothing covers the middle layer: the organizational infrastructure that makes AI actually work inside a complex operation. The architecture of institutional knowledge.

Just as Jarvis acts as the operating system for my day-to-day, insurance carriers and MGAs need a bespoke decision infrastructure layer that serves as the AI connective tissue across the company.

That's the layer that matters, but most companies skip it entirely. They buy a tool, run a pilot, get mediocre results, and conclude AI isn't ready.

The tool was fine (probably). The infrastructure around it didn't exist.

What we're doing about it

This is what Dearborn Labs builds. We built the AI insurance operating system so carriers and MGAs can immediately launch productive AI initiatives tailored to their decision infrastructure.

Not chatbots. Not pilots. Production-ready AI tools that make organizational complexity manageable.


Kyle Nakatsuji is CEO of Clearcover and founder of Dearborn Labs.

Key Questions

What is an AI Chief of Staff?

An AI Chief of Staff is a persistent AI system trained on your company's institutional knowledge, your voice, your decision-making patterns, and the current state of everything you're working on. Unlike generic AI assistants, it triages email, preps meetings, drafts communications in your voice, and helps you think and prioritize by challenging assumptions. It gets measurably better every week because every session builds on the last, creating a compounding feedback loop.

Do you need to be technical to build an AI agent for your work?

No. Building an effective AI agent is more like organizational design than software engineering. The hard part is deciding what institutional knowledge matters and how to structure it so the AI can navigate it—those are management problems, not technical ones. You don't need to write any code. Once you understand how it works, you use natural language to shape how it operates for you.

Why is institutional knowledge more important than the AI model itself?

The AI model is a commodity—everyone has access to Claude, GPT, Gemini, and can spin up a project in five minutes. The output from a generic model is generic. But the output from a model wrapped in your company's institutional knowledge, decision frameworks, stakeholder dynamics, and industry operational reality is entirely different. That gap between generic and contextual AI is large and growing.

What is the "middle layer" most companies skip with AI?

Most AI discussion is either aspirational ("AI will transform everything") or tactical ("here are 10 prompts"). The middle layer, the organizational infrastructure that makes AI actually work inside a complex operation, gets skipped entirely. This is the architecture of institutional knowledge, the decision infrastructure that serves as AI connective tissue across the company. Companies buy a tool, run a pilot, get mediocre results, and conclude AI isn't ready. The tool was fine. The infrastructure around it didn't exist.

How do AI corrections compound over time?

Every time you fix something your AI agent gets wrong, the lesson gets remembered and incorporated into its context. After months or a year of this feedback loop, the system becomes dramatically better than where it started. The context wrapped around the AI's capabilities sharpens with each iteration, creating compounding improvement that can't be replicated by starting fresh.

What is "pull, not push" adoption for AI in organizations?

"Pull, not push" adoption happens when employees see AI working effectively for someone else and voluntarily build their own systems without being told to. No memos, no training programs, no mandates: just organic adoption driven by visible results. This contagious pattern, where people adopt because they want the same advantage they've seen others get, matters more than any individual AI agent.
