In the fall of 1913, 140 workers lined up along a 150-foot chassis line at Ford's Highland Park plant. Within months, the assembly process had been divided into 45 separate operations. Ford's executives were thinking about throughput. As historian David Nye noted, "the term 'assembly line' was little used even in the technical press in 1913 and 1914." They hadn't named the thing. They certainly hadn't anticipated what it would do to the nature of work itself.
What they'd built was a set of structural consequences that would take years to become visible. Three of them are worth tracing forward, because multi-agent AI is producing all three simultaneously.
The first: for the first time in American industry, the people performing a task were no longer responsible for its quality. That responsibility passed to a new role, the inspector. By the 1920s, the Hawthorne Works factory employed over 5,000 inspectors out of a 40,000-person workforce. Walter Shewhart invented statistical process control at Bell Labs in 1924 specifically because the old mechanism for ensuring quality, the craftsman's own judgment, had been engineered out of the system.
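Shewhart's core move can be condensed into a few lines: estimate the process mean and variation from an in-control baseline, set limits at three standard deviations, and flag anything outside them. This is a minimal sketch of a Shewhart control chart, not the full SPC method, and the sample numbers are invented:

```python
import statistics

def shewhart_limits(baseline, sigma_mult=3.0):
    """Derive control limits (LCL, UCL) from in-control baseline measurements."""
    mean = statistics.mean(baseline)
    sd = statistics.pstdev(baseline)
    return mean - sigma_mult * sd, mean + sigma_mult * sd

def out_of_control(samples, lcl, ucl):
    """Return the indices of samples that fall outside the control limits."""
    return [i for i, x in enumerate(samples) if not (lcl <= x <= ucl)]

# Hypothetical measurements of some part dimension, in millimeters.
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
lcl, ucl = shewhart_limits(baseline)
print(out_of_control([10.0, 10.1, 12.5, 9.9], lcl, ucl))  # the 12.5 reading is flagged
```

The point of the chart was exactly the point of the essay: it replaces the judgment the craftsman used to exercise in the moment with a statistical rule applied after the fact, by someone else.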
The same split is already visible in multi-agent workflows. LangChain's survey of 1,300 professionals found 89% have implemented observability tools for their agents, a function that barely existed two years ago. Google's PaperVizAgent orchestrates five specialized agents; one of them, the "critic," does nothing but evaluate the others' output. Review became its own function for the same reason it did a century ago: decomposition made it structurally inevitable.
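The critic pattern described above reduces to a small control loop: one agent produces, a structurally separate agent judges, and revision is driven entirely by the judge. This is an illustrative sketch of that pattern with toy stand-ins for LLM-backed agents; none of the names here come from Google's PaperVizAgent:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Review:
    approved: bool
    feedback: str

def run_with_critic(worker: Callable[[str], str],
                    critic: Callable[[str], Review],
                    task: str, max_rounds: int = 3) -> str:
    """Run a worker agent, gating its output through a separate critic role."""
    draft = worker(task)
    for _ in range(max_rounds):
        review = critic(draft)
        if review.approved:
            return draft
        # The worker never judges its own output; the critic drives revision.
        draft = worker(f"{task}\nRevise per feedback: {review.feedback}")
    return draft

# Toy stand-ins: a "summarizer" that only tightens up when told to revise,
# and a critic that approves nothing longer than three words.
def summarizer(prompt: str) -> str:
    return "short summary" if "Revise" in prompt else "a rambling first draft"

def length_critic(draft: str) -> Review:
    ok = len(draft.split()) <= 3
    return Review(ok, "" if ok else "too long; tighten it")

print(run_with_critic(summarizer, length_critic, "Summarize the paper"))
```

The structural echo of the inspection department is in the type signature: quality judgment lives in a different function than production, and the producer only learns something is wrong through the feedback channel.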
The second consequence arrived more quietly. The worker installing the left front wheel couldn't see the finished car. The feedback loop that told a craftsman something felt wrong simply ceased to exist. A randomized controlled trial by METR found that experienced open-source developers using AI tools were 19% slower on tasks in codebases where they averaged five years of experience. They perceived themselves as 20% faster.
That gap is worth sitting with. What lingers is the silence around it. The system gave those developers no reason to suspect anything had changed.
The third consequence emerged under Taylorism alongside deskilling: output judged solely on rate. Units per hour. When no single worker could see the whole product, the question "who made this badly?" became unanswerable. Accountability persisted in name but had nowhere to land. The craftsman had stood behind work that was recognizably his. The assembler stood behind a number. Fortune's coverage of a recent Accenture and Wharton analysis captures where this logic arrives in agent systems:
"One human could suddenly find themselves responsible for an exponential cascade of outcomes they never saw coming."
When agents reason, execute, and coordinate across dozens of handoffs, accountability diffuses until the question of who owns an outcome has no clear answer.
Manufacturing did eventually develop the organizational infrastructure to handle what decomposition produced: Shewhart's statistical process control, dedicated inspection departments, decades of painful institutional learning about how to manage quality when the people doing the work can't see the work. It took forty years and a world war's worth of production failures to get there.
Multi-agent systems are decomposing knowledge work along the same structural logic. The consequences are arriving before anyone has named them, which is, if you look at Highland Park, exactly how it went the first time.
Things to follow up on...
- Quality as production killer: LangChain's State of Agent Engineering survey found that 32% of professionals cite quality as the top barrier to moving agents into production, outranking cost, latency, and tooling gaps.
- The calibration collapse: METR's February 2026 follow-up revealed that developers struggled to even report how long tasks took when using agentic tools, because they'd context-switch to other work while waiting for agents to finish.
- Accountability without architecture: Singapore's government AI governance framework identifies the core structural problem: agent actions emerge dynamically from interactions rather than fixed logic, diffusing accountability across multiple stakeholders in the agent lifecycle.
- The 327% surge: Databricks' 2026 State of AI Agents Report found that multi-agent architectures in enterprises grew 327% in less than four months, suggesting the decomposition logic is scaling faster than the organizational infrastructure to manage its consequences.

