Buried in PwC's January 2026 workforce paper, "No More Pyramids," there's a sentence that quietly complicates the entire argument. The paper proposes that one experienced software engineer can now orchestrate AI agents across an entire development cycle. Then it adds:
"Because they've already built software without AI agents' help."
The new generalist, in PwC's own telling, is a former specialist.
The paper's broader vision is appealing. Replace the traditional pyramid with an hourglass: a base of AI-literate early-career workers, a lean middle, and a broad top of experienced generalists orchestrating agents across entire workflows. PwC's 2026 AI Business Predictions calls this the "rise of the generalist." As agents absorb routine specialist execution, humans shift from narrow tasks to broad outcomes.
The problem is temporal.
Today's generalists can orchestrate because they spent years going deep. They know what good output looks like because they once produced it themselves, slowly, by hand. The hourglass works right now because the people at the top carry decades of accumulated specialist knowledge. They're the reservoir.
PwC sees this. The paper explicitly warns that companies reducing early-career hiring too aggressively "won't be developing the high-performing generalists and business leaders who could fill your upper ranks." Future generalists still need the apprenticeship conditions that the generalist model is designed to leave behind. But the structural incentive pulls toward the hourglass anyway, because the hourglass is cheaper to run today.
Deloitte's December 2025 agentic AI strategy report sharpens the picture. In Deloitte's framing, human roles are consolidating into two tracks: compliance and governance on one side, growth and innovation on the other. Someone overseeing compliance workflows and someone identifying new market opportunities are doing fundamentally different work, drawing on different instincts, trained by different experiences. Calling both "generalist" flattens an important distinction. And both tracks still depend on someone having once understood the domain deeply enough to recognize when the agents are wrong.
So the hourglass has a shelf life. The generation trained inside it will eventually be asked to orchestrate work they never learned to do themselves: supervising agents that write code they couldn't write, auditing processes they never ran. Imagine reviewing an agent's compliance assessment when you've never conducted one yourself. You'd be scanning for red flags in a landscape you've only seen from altitude. Which signals matter and which are noise? You wouldn't know, because you never learned what normal looked like from the ground.
The generalist model consumes depth of expertise. It has no mechanism to produce more.
PwC's paper is an honest accounting of this tension, and probably the document that will be cited to justify the very outcome it warns against.