BCG surveyed 1,000 executives across 59 countries and found a ratio that keeps showing up: in successful AI deployments, 10% of resources go to algorithms, 20% to technology and data, and 70% to people and processes. When they extended the analysis to agentic AI specifically in December 2025, the conclusion held. The lion's share of effort goes into redesigning roles, managing change, training the workforce.
Seventy percent. On the people around the technology.
Easy to nod along with. Harder to act on. The 70% is unglamorous work that never makes a press release. An engineer at Snowflake spending 20–30 hours a week interacting with five AI agents, reviewing their output, correcting their drift, adjusting their boundaries as projects shift. Someone updating an agent's operating parameters because a business process changed last Tuesday and the agent doesn't know that yet. Maintaining an inventory of which agents exist, who owns them, what data they touch, and which version is running where.
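The inventory piece, at least, is concrete enough to sketch. A minimal registry might look like the following; every name, field, and threshold here is illustrative, not taken from any vendor's tooling:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AgentRecord:
    """One row in a hypothetical agent inventory."""
    name: str
    owner: str               # the person accountable for the agent's behavior
    data_sources: list[str]  # what data it touches
    version: str             # which version is running
    environment: str         # and where, e.g. "staging" or "production"
    last_reviewed: date      # when a human last checked its output

# The registry itself is little more than a dict keyed by agent name.
# The hard part is keeping it honest, not the data structure.
registry: dict[str, AgentRecord] = {}

registry["invoice-triage"] = AgentRecord(
    name="invoice-triage",
    owner="a.chen",
    data_sources=["erp.invoices", "vendor-master"],
    version="2.4.1",
    environment="production",
    last_reviewed=date(2025, 11, 18),
)

def stale_agents(reg: dict[str, AgentRecord],
                 as_of: date, max_age_days: int = 30) -> list[str]:
    """Agents nobody has reviewed recently -- the quiet-drift candidates."""
    return [r.name for r in reg.values()
            if (as_of - r.last_reviewed).days > max_age_days]
```

The point of even a toy version is the `owner` and `last_reviewed` fields: an inventory without a named person and a review cadence is a spreadsheet, not an operating practice.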
The numbers reflect who's doing this work and who isn't. Deloitte found only 11% of organizations are actively using agentic systems in production, while 35% have no formal strategy at all. The top challenges enterprises report aren't technical mysteries: integration (46%), data quality (42%), change management (39%). Organizational problems wearing technical clothing. Worth pausing on that, because it describes the central confusion of the current moment. Organizations keep trying to solve a people problem by buying better technology, and the 70% keeps sitting there, unaddressed.
This work is starting to coalesce into something people are calling agent operations. It borrows from DevOps and MLOps but deals with something those fields didn't face: non-deterministic behavior, autonomous tool use, and context-dependent reasoning. An agent that worked perfectly last month might behave differently today because the underlying business process shifted, or a data source changed format, or the workflow it supports was reorganized. IBM Research invested in a dedicated AgentOps program with a stated goal of making agents observable and, more to the point:
"Accountable to those who govern them."
Accountable means someone has to govern them, continuously, as conditions change. Governing an agent means making judgment calls about what it should be doing in the first place, and judgment calls require people who understand the work.
Which is why the organizations reaching production share a common trait: business leaders who understand the nuance of their workflows are leading the transformation, not just approving it. BCG was explicit about this. IT builds the infrastructure, but it cannot design the work on the business's behalf. Left to lead alone, IT tends to produce technically sound agents solving the wrong problems, or solving the right problems with boundaries that don't match how the work actually flows. The 70% is redesigning roles, rewriting standard operating procedures, training people to provide oversight, and doing all of it again when things change.
The maintenance phase doesn't have a launch date. It's just what comes after the demo, quietly, for as long as the system runs. The real risk isn't a dramatic failure but the gradual one: agents keep running, the business keeps evolving, and the gap between what the agent does and what the organization needs widens so slowly that nobody notices until the drift has compounded into something structural.
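That slow widening is at least partly measurable. A sketch, assuming you already log some numeric quality signal per agent run (the fraction of outputs a reviewer accepts without edits, say; the metric is hypothetical): compare a recent window against a baseline window and flag when the relative gap crosses a threshold.

```python
def drift_score(baseline: list[float], recent: list[float]) -> float:
    """Relative change in mean between a baseline window and a recent one.

    The signal could be anything reviewers already produce, such as the
    share of agent outputs accepted without correction. Illustrative only.
    """
    base = sum(baseline) / len(baseline)
    now = sum(recent) / len(recent)
    return abs(now - base) / base

# An agent whose acceptance rate slid from ~0.95 to ~0.80 over a quarter:
# no single week looks alarming, but the cumulative gap does.
baseline = [0.96, 0.94, 0.95, 0.95]
recent = [0.82, 0.79, 0.81, 0.78]

if drift_score(baseline, recent) > 0.10:  # the threshold is a policy choice
    print("flag for human review")
```

A check like this doesn't tell you what changed, only that something did. Deciding whether the agent's boundaries still match the work remains the 70%.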

