Nobody holds a meeting to ratify the org chart an AI pilot assumes. The pilot just runs. It hits its targets. Someone schedules the scale-up.
The successful pilots deserve the scrutiny we rarely give them. A pilot that works encodes the current workflow into infrastructure. The process it automated becomes the process. The reporting lines it assumed become the reporting lines. The decision rights it left untouched stay untouched. All of this happens without anyone noticing they've made a choice.
BCG's guidance on agentic AI puts it directly:
"A common mistake is automating what already exists. Real value comes from a 'zero-based' approach, starting with the outcome you want and reinventing how to deliver it."
Most organizations do the opposite. They take the current process, wire agents into it, and measure whether the agents perform the existing steps faster. When they do, the pilot is declared successful.
The current state hardens into the permanent state, and nobody remembers when the window for redesign closed. Once a pilot encodes the existing process, it creates trained personnel who know that system, integrations built to the current workflow topology, and metrics calibrated to the current state. Each reinforces the others. Reversal becomes progressively harder, because the organization has built new habits around the encoded default. Path dependence operates whether or not anyone intended it.
RPA went through this cycle a decade ago. The failure rates get the attention, but the mechanism is what matters here. Bots were built around existing UI structures and legacy workflows. When applications updated, bots broke. Organizations that had encoded their current process into automation faced retrofit costs on top of maintenance, because the encoded process had become load-bearing. The automation worked well enough to become permanent before anyone questioned what it had preserved.
The Stanford Enterprise AI Playbook, drawn from 51 deployments across 41 organizations, found that 95% of AI transformation failures trace to organizational factors: workforce readiness, governance, executive ownership, sequencing. Seventy-seven percent of implementation challenges are organizational, not technical. The technology performs. The organization holds its shape. BCG's 10-20-70 principle captures the imbalance: roughly 10% of the effort is algorithms, 20% is technology and data, and 70% of outcomes are determined by people, processes, and cultural transformation. Most enterprise AI investment targets the other 30%.
Organizations that scale past the pilot stage tend to treat AI as an operating-system change rather than a technology upgrade. They redesign workflows before encoding them. That requires acknowledging that current processes aren't the right ones to preserve. Most organizations skip that step. The principle is uncontroversial. The calendar is full.
Six months later, the pilot is in production. The workflow it encoded is now the workflow everyone works around. And when someone finally proposes the redesign that should have preceded the automation, they discover the thing they want to change has become infrastructure. The cost has quietly doubled.
The most consequential design decisions tend to be the ones nobody remembers making.

