
Building with AI: Here's What No Briefing Will Tell You

  • Executives making AI decisions without hands-on building experience have a comprehension gap that no briefing can close.
  • AI is rapidly eroding most traditional competitive moats, and proprietary data's real value now comes down to how long it would take a competitor to reconstruct it.
  • As AI equalizes development speed, the most valuable engineers are those with sharp judgment, and companies need to actively protect the foundational skills that make that judgment possible.

I've spent the last three months building with AI. Not reading about it. Not sitting through vendor demos. Not nodding along to board presentations with gradient-colored slides about "transformation." Building. Writing code. Deploying applications. Breaking things. Shipping things.

What follows are four gaps I didn't fully understand until I was in it. They won't show up in an analyst report. They're the kind of thing you only see when your hands are on the keyboard.

1. The Comprehension Gap: You Can't Lead What You Haven't Touched

Here's the uncomfortable truth. If you're an executive making decisions about AI (hiring, budgets, vendor selection, risk tolerance) and you haven't personally built an agentic workflow, you are making those decisions partially in the dark.

That's not a knock. It's a structural problem. The velocity of change in AI tooling is so extreme that even well-briefed leaders develop a comprehension gap between what they've been told is possible and what is actually possible right now, today, on their laptop.

This isn't just my observation. BCG's AI Radar report found that C-level executives deeply engaged with AI are 12 times more likely to be among the top 5% of companies winning with AI innovation. Twelve times. Meanwhile, Larridin's research identified what they call the "AI leadership gap," finding that 81% of business leaders are confident in their AI oversight, yet 75% of practitioners believe leadership underestimates the difficulty of AI execution. That delta isn't a communication problem. It's a comprehension problem.

Briefings don't close this gap. Conferences don't close it. You close it by building something. Anything. A workflow that pulls data, reasons over it, and takes action. Once you've done that, every subsequent conversation about AI shifts. You start asking better questions. You start spotting inflated vendor claims. You start understanding the difference between a demo and a product.
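The loop described above (pull data, reason over it, take action) is simpler than it sounds. Here's a toy sketch, purely for illustration: `fetch_signals`, `reason`, and `act` are hypothetical stand-ins for whatever data source, model call, and downstream system a real build would wire in.

```python
# Toy agentic loop: pull data -> reason over it -> take action.
# All three steps are placeholders; a real workflow would call a
# data API, a language model, and an actual downstream action.

def fetch_signals():
    # Stand-in for pulling real data (alerts, tickets, metrics, ...)
    return [
        {"id": 1, "severity": "low", "summary": "routine scan"},
        {"id": 2, "severity": "high", "summary": "unusual login pattern"},
    ]

def reason(signal):
    # Stand-in for a model call; here, a trivial rule.
    return "escalate" if signal["severity"] == "high" else "log"

def act(signal, decision):
    # Stand-in for a real action (paging, ticketing, blocking).
    return f"{decision}:{signal['id']}"

def run_workflow():
    return [act(s, reason(s)) for s in fetch_signals()]

print(run_workflow())  # ['log:1', 'escalate:2']
```

Building even something this small, then swapping the stand-ins for real services one at a time, is the kind of structured rep that changes how you read a vendor demo.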

Hg Capital's Silicon Valley Leadership Summit put it bluntly: executives who delegate AI understanding to technical teams and maintain a comfortable distance from the messy reality of adoption become obstacles, not leaders.

The bottleneck for most executives isn't information. YouTube is free. The bottleneck is structured reps with feedback. Hire an AI coach. Not a consultant who hands you a PDF. A coach who sits with you for an hour a week, gives you small builds to complete, and accelerates your intuition. The ROI on that investment is asymmetric. A few focused hours per month can fundamentally change how you evaluate every AI-related decision that crosses your desk.

2. The Moat Durability Gap: Your Competitive Advantages Have an Expiration Date

This is the one that should keep executives up at night.

AI is compressing the durability of nearly every traditional competitive moat. Code advantages are largely gone (AI can replicate most software logic in hours). Process advantages are eroding fast. Data moats are the last ones standing, but even those are more fragile than most leaders assume.

Morningstar's analysis found that four of the five classic competitive moat pillars (switching costs, network effects, intangible assets, and efficient scale) now have almost no predictive power in today's AI environment. And companies most exposed to AI disruption have underperformed the most AI-resilient companies by nearly 26 percentage points in early 2026.

Here's the reframe: proprietary data's value is now roughly correlated with the time it would take a competitor, augmented by AI, to source or duplicate it. That's the new formula. Not "do we have unique data?" but "how long would it take someone with modern tools to reconstruct what we have?"

For some organizations, the answer is still "years." That's a real moat. For others, the honest answer is "weeks." That's a press release, not a strategy. Morgan Stanley's November 2025 analysis reinforced this, noting that proprietary financial datasets remain difficult to replicate, specifically because recreating decades of verified historical data with consistent identifiers is prohibitively expensive and technically challenging. Time and accumulated fidelity are the moat, not the data itself.

Synthetic data adds another wrinkle. In many contexts, synthetic data is not only valuable, it's also limitless. You can generate an infinite number of permutations for testing, training, and simulation. In my world (security), synthetic data can power attack and breach simulations at a scale that was previously impossible. But when an actual attacker gets in, synthetic data is useless. Real data is what you need for the post-mortem, for attribution, for understanding what actually happened.

The principle generalizes. Synthetic data lets you simulate the future. Real data is irreplaceable for understanding the past. One is for preparation. The other is for truth. Both matter, but they're not interchangeable, and executives who conflate them will misallocate resources.
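The "limitless permutations" point is easy to make concrete. A few synthetic dimensions multiply into a large test set with no real user data in it; the field names below are invented for illustration, not drawn from any particular product.

```python
import itertools

# Toy illustration: combining a handful of synthetic dimensions
# already yields many distinct test events, none of which expose
# real user data. Scale the dimensions and the set grows without bound.
users = ["alice", "bob"]
geos = ["us-east", "eu-west", "ap-south"]
outcomes = ["success", "failure"]

synthetic_logins = [
    {"user": u, "geo": g, "outcome": o}
    for u, g, o in itertools.product(users, geos, outcomes)
]

print(len(synthetic_logins))  # 2 * 3 * 2 = 12 synthetic events
```

What this can't do is tell you which of those twelve patterns an actual attacker used last Tuesday. That's the preparation-versus-truth distinction in miniature.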

As a16z concluded: as foundation model capabilities commoditize, the scarcity shifts from the model to the data. The question every leader should be asking: are we benchmarking the durability of our advantages against human competitors, or against AI-augmented ones? Because the timelines are very different.

3. The Deployment Reality Gap: Prototypes Are Cheap, Production Is Not (Yet)

Building a working AI prototype has become shockingly accessible. I'm not a developer by training, but in three months I've built functional web applications (ThreatBench), threat intelligence tools (SavvyPOC), and interactive dashboards. The barriers to creating something that works on your screen have effectively collapsed.

But here's what nobody tells you in the hype cycle: the last mile of deployment still contains real complexity. Taking code that works locally and turning it into a production application (with authentication, payment processing, scalability, monitoring, and all the other things a real product requires) still demands knowledge and time. The gap between "it works on my machine" and "it works for 10,000 users" is meaningful.

Harvard Business Review recently identified seven structural frictions that prevent AI from crossing this last mile, from proliferating pilots to architectural complexity. A March 2026 survey of 650 enterprise technology leaders found that while 78% have at least one AI pilot running, only 14% have successfully scaled an agent to organization-wide production use. The gap between demo and deployment isn't anecdotal. It's measurable.

This matters for executives because it changes how you should evaluate AI projects. When someone shows you a prototype built in a weekend, that's genuinely impressive. But the follow-up question should always be: what does production deployment look like? That's where the real cost and timeline live.

The good news is this gap is closing fast. The tooling for full-stack deployment is improving at a pace that suggests most of the current friction will dissolve within a year. We're heading toward a world where the entire stack (code, infrastructure, deployment, scaling) can be generated and managed by AI. When that happens, the last significant technical barrier between an idea and a live product disappears.

We're not there yet. But we're close enough that planning for it is no longer speculative.

4. The Skill Evolution Gap: What "Senior" Means Is Changing

I've had several conversations with senior developers over the past few months, and two patterns keep emerging.

First, they're building more. AI dramatically accelerates development velocity, enabling experienced engineers to take on more projects, explore more architectures, and iterate faster. That's the optimistic story, and it's real.

Second (and this is the more interesting one), the best developers deliberately set aside a portion of their work (let’s call it 5%) to solve problems without AI assistance. Not because they're nostalgic. Because they are still intellectually curious and recognize that if you copilot everything, your own skills atrophy. And atrophied skills make you a worse copilot.

This concern is supported by emerging research. An analysis of GitHub Copilot's impact on 15 million developers found that 67% use AI coding tools 5 or more days a week, and many struggle to work when the tools are unavailable. The same analysis noted that junior developers who start with AI may never develop fundamental skills. GitClear's research, examining over 153 million changed lines of code, found that AI-assisted development is linked to a significant increase in code duplication and a troubling decrease in code reuse. More code is being written, but not necessarily better code.

Think about what that implies. AI equalizes build speed. A junior developer with good AI tools can produce code at a pace that would have been senior-level output two years ago. So if speed is no longer the differentiator, what makes a senior engineer valuable?

Judgment. Knowing what to build in the first place. Spotting when the AI-generated code is subtly wrong. Architecting systems that hold up under real-world conditions. Understanding the second and third-order consequences of technical decisions.

For executives, this reframes your entire talent strategy. The developers you want to retain and recruit are the ones with sharp judgment, not just fast output. And you need to create space for your teams to maintain the foundational skills that enable that judgment. The 5% rule isn't a quaint habit. It's a deliberate practice to stay relevant.

So What? Now What?

Four gaps. Comprehension, moat durability, deployment reality, skill evolution. None of them are theoretical. All of them are things I misunderstood or underestimated before I started building.

If you're an executive reading this, the single best thing you can do is start building. Not next quarter. Not after the strategy offsite. Now. Get an AI coach, block two hours a week, and build something small. You'll learn more in a month of building than in a year of briefings.

AI (specifically agentic workflows) is creating a change velocity you cannot abstract away or delegate. You have to feel it to lead through it.

How Recorded Future Helps

Two capabilities speak directly to the gaps described: