If you've been on LinkedIn this week, your feed is probably full of a red and blue radar chart showing the gap between what AI is capable of and how it's actually being used at work. That chart, and two other articles published in the same week, caught my attention this fortnight. At first read, they tell very different stories. Taken together, they point to the same thing: the adoption gap is shaped less by the technology and more by the experience leaders have (or don't have) with it.

Anthropic measured the adoption gap, and it's large

If you missed it, Anthropic (the company behind Claude) published a research paper on Thursday that didn't just measure what AI could theoretically do; it measured what people are actually using it for at work.

The finding that matters most to CEOs today: AI is theoretically capable of performing most tasks in business, finance, legal, and management roles, but actual adoption is a fraction of that. In management roles alone, the gap runs at roughly 5:1 between what AI could do and what businesses are actually using it for. That means huge efficiency gains for businesses that sell "stuff", but deeper implications for professional services firms like law, financial advisory, and consulting, where expertise itself is the product.

What happens when one practitioner closes the gap

Zack Shapiro, a US attorney, published a piece describing how his two-person law firm competes with firms 100x its size by using Claude as his operating system. The post was viewed over seven million times, resonating well beyond the legal profession. The message wasn't the specific workflow - it was the underlying principle.

Shapiro encoded his professional judgment into reusable instructions that AI applies consistently: how he analyses contracts, what he flags, how he structures a response. A decade of expertise, made operational by AI.

On Anthropic's radar chart, Shapiro is likely operating at several times the industry average for observed AI coverage. The gap between him and most firms isn't about access to the technology, which is available to everyone. It's that Shapiro has invested the time to learn what AI can and can't do reliably for his specific work. That learning only comes through use. He's spent enough hours with the tool to know where it's strong (drafting, formatting, first-pass analysis) and where his own judgment still has to lead (strategy, risk assessment, client relationships). That combination of deep professional expertise and hands-on AI experience is what most businesses are missing. Not one or the other, but both.

His practice is small and much of his work is transactional, which makes this easier. But the principle holds at any scale and in any business where expertise drives value. A senior practitioner who has learned where AI is strong and where their own judgment still has to lead will operate differently from one who hasn't started. They'll move faster, spend less on work that doesn't require human judgment, and make better calls about where to invest their attention. That advantage compounds over time. The practitioners who have it didn't get it from a strategy document or a vendor demo, they got it through use.

The barriers are emotional and relational

Harvard Business Review published research based on interviews with 35 CEOs, CHROs, and senior leaders across professional services, financial services, and other sectors. They found three tensions that will sound familiar to anyone leading a business right now.

First, what one executive called 'leadership FOMO': boards demanding AI updates before the organisation has worked out the problem to solve. They've heard what AI is theoretically capable of, but without hands-on experience, that knowledge creates urgency without direction.

Second, experienced professionals protecting their identity. This isn't resistance to technology. It's anxiety about what adoption implies for expertise built over decades. It explains why Shapiro's story resonated so widely. He experienced AI as amplifying his judgment, not threatening it. For most practitioners in larger organisations, it doesn't feel that way yet.

Third, no shared definition of what AI "value" looks like. When leaders lack hands-on experience, the only value framework they can reach for is the one they already know: cost savings, headcount reduction, time-to-completion. These are operational metrics being applied to problems that don't work that way.

The most practical finding was that when senior leaders used AI visibly (not perfectly, but openly) it changed the conversation from "Is this allowed?" to "Could this help with the decisions I'm making?" What shifted wasn't technical fluency. It was permission through behaviour.

What this means

The emotional barriers the HBR study identified (identity threat, contested value, leadership FOMO) aren't separate from the knowledge gap; they're caused by it. A managing partner who has never used AI to review a contract can't evaluate whether her firm's AI strategy is any good. Not because she isn't smart enough, but because she has no reference points. Without them, every AI conversation gets filtered through anxiety, abstraction, or someone else's agenda.

In my experience leading AI initiatives, and in conversations with leaders thinking through AI for their own businesses, the pattern is consistent. The leaders who have spent time working directly with AI make different decisions from those who've only been briefed. Not better-informed decisions, necessarily, but more grounded ones, because they have reference points that no briefing can provide.

The Anthropic data shows the resulting gap at scale. Shapiro shows what happens when one practitioner closes it.

Most businesses are stuck not because they're resistant, and not because the technology isn't ready. They're stuck because their leaders haven't done the thing that would make everything else tractable: use AI on a real problem, form their own view, and lead from that.

That's where the adoption gap starts to close. Not with a strategy, but with experience.