If you’ve read any of the recent AI-adoption reports – the ones with ominous charts, jargon-heavy executive summaries, and the unmistakable whiff of “we swear we didn’t panic writing this” – you’ve probably noticed a theme: enterprise AI pilots are failing at an almost comedic rate. MIT claims something like 95% of generative AI pilots never translate into meaningful business outcomes, and McKinsey notes that while nearly everyone has launched a pilot or two (or ten), only a small fraction ever manage to scale beyond the “shiny new object we showed at last month’s steering committee” phase.
None of this surprises me anymore, not because AI is flawed or overhyped or secretly plotting its inevitable victory over humankind, but because after more than a decade of consulting across dozens upon dozens of organizations, I’ve learned that most institutions are astonishingly consistent in one particular way: they struggle mightily to absorb the benefits of the very technologies they claim to want.
And I don’t mean the small benefits or the edge-case improvements; I mean the obvious, measurable, plain-as-day gains that practically beg to be captured.
I’ve seen organizations automate entire processes only to watch those same organizations slowly, almost lovingly, rebuild the manual steps they had just eliminated. It’s like watching someone clean out a garage, marvel at the newfound space, and then promptly fill it with the very same boxes they had pulled out, because “we might need these old light fixtures someday.”
The Gains Are Real; the Organization Is… Less Real About Them
Over the years, I’ve lost count of how many times a team has celebrated shaving hundreds of hours off a process, only to quietly reintroduce “just one more check” or “a backup spreadsheet, for safety,” or some legacy side-process that made sense in 2007 but refuses to die, like an office plant no one remembers watering yet somehow thrives on fluorescent light and despair.
The funny thing is, everyone involved is usually earnest, well-intentioned, and genuinely trying to improve their institution. There’s no malice here, no plot to undermine progress; it’s simply that the structure of the organization is too rigid to metabolize the very efficiency it claims to pursue. So the efficiencies leak away – slowly, invisibly, with no dramatic explosion – and over time everything returns to a slightly more modern version of how it used to be.
This is the part of the reports that almost no one talks about:
AI isn’t failing. The organizations are failing to absorb what AI enables.
And I say that with no smugness whatsoever, because the pattern is so universal that it almost feels evolutionary, the sort of deeply embedded institutional instinct that resists change not because change is bad, but because change requires movement. And movement, it turns out, is surprisingly hard for systems designed to stay exactly where they are.
Not Built for Possibility
One of the quiet truths you discover when you work with enough institutions is that most modern organizations were never truly built for adaptability; they were built for stability, predictability, and, if we’re being brutally honest, the comforting illusion that the future will politely resemble the past if you schedule enough meetings about it.
You see it everywhere:
- job descriptions that read like ancient tablets
- approval chains that stretch into eternity
- workflows fossilized under layers of “best practices” that were best around three administrations ago
- committees that meet quarterly to decide whether a decision should be made by a different committee
And when AI arrives, the organization reacts the only way it knows how: it tries to domesticate it, shape it into the familiar, force it to behave like the old tools it replaced, and in general, treat it like a cute but slightly unruly intern rather than a fundamental shift in how work can be done.
The predictable result is that nothing truly changes.
AI produces possibility; the organization produces structure; and possibility loses, not through violence, but through paperwork.
The Pattern Under the Pattern
At some point in my consulting career, after watching yet another digital transformation wilt under a mountain of legacy process maps and nervous middle management, I realized there was a deeper dynamic at play. Something not quite about technology, and not even really about culture, but about what organizations find tolerable.
Here’s my working hypothesis, stated plainly:
Most institutions are far more comfortable living with chronic deficiency than enduring the temporary inefficiency required to become adaptive.
A person who is mediocre in a role they’ve held forever? The organization can navigate that with its eyes closed.
Moving that person into a new role because automation freed them up? Now we’re flirting with chaos.
Redefining responsibilities across departments? Chaos.
Letting go of legacy processes that once made sense but now actively sabotage progress? Pure, unfiltered chaos.
And faced with the choice between stability tinged with obvious deficiency or the messy middle of reconfiguration, institutions pick stability every single time. It’s the devil they know, the devil they’ve budgeted for, and the devil whose email signature matches the one in the HR system.
But the irony is painful: the inefficiency they fear is the thing that would make them adaptable.
It’s the slack required for learning, the oxygen needed for experimentation, the breathing space that allows a structure to shift. Without it, the organization becomes a beautiful, frictionless machine that cannot turn.
AI Isn’t Exposing Weakness; It’s Exposing Design
When AI pilots stall, it’s not because the model is bad or the data is messy or the vendor oversold their demo (though, let’s admit, that last one does happen with the kind of frequency usually reserved for meteor showers).
It’s because AI reveals the tension between what an organization says it wants and what it is structurally capable of doing.
AI creates room – space to imagine, space to reassign, space to reconfigure – and the organization, almost immediately, fills that room with the exact structures that made adaptation impossible in the first place. But it’s not sabotage; it’s muscle memory, inertia, comfort – whatever you want to call it.
Which brings me back to the reports.
When 95% of AI pilots fail to generate value, the story is not that AI is overhyped. The story is that organizations are still built for a world where technology creates efficiency, and they have no idea what to do in a world where technology creates possibility.
What’s Possible Will Always Be More Interesting Than What Exists
This is the part that has kept me fascinated for more than a decade: not the failures, not the regressions, not the slow slide back into familiar behavior, but the gap between what institutions do today and what they could do if they were designed for adaptability instead of preservation.
It’s in that gap, the quiet space between reality and possibility, that the real future of organizational life lives, and it’s the space where I plan to spend the next several years: researching, questioning, designing, and eventually helping build institutions that not only use AI but integrate it into the very architecture of how they think, decide, and evolve.
Because as frustrating as these patterns are, the possibilities sitting just beyond them are extraordinary, and as I’ve said many times in many contexts:
What’s possible is fascinating.
And right now, possibility is lapping at the edges of our institutions faster than they know how to absorb it.
