A peer-reviewed study published last year in Scientific Reports ran more than 3,500 workers through professional tasks with and without generative AI. Output quality improved. Efficiency rose. Intrinsic motivation dropped 11%. Boredom increased 20%. The mechanism the researchers identified was specific: AI collaboration reduced workers' sense of being the primary agent of their own output. The cognitively demanding parts of the work, the parts that made it stimulating, were exactly what the AI handled.
This finding has been circulating in enterprise AI circles mostly as a caution about change management. Train people better. Give them time to adjust. The dissatisfaction-as-adoption-friction reading is understandable but incomplete. The gap between rising productivity and falling engagement carries information about a design problem that hasn't been addressed yet.
When a knowledge worker moves from writing an analysis to reviewing one that an AI drafted, the task description looks similar on a workflow diagram. The experience is different in kind. Producing something requires sustained attention, judgment applied continuously, the kind of engagement that generates both fatigue and satisfaction. Reviewing requires pattern-matching against quality standards, catching what's wrong, deciding what's good enough. It's supervisory work. Valuable, necessary, and it doesn't feel the same.
Desktop publishing hit the same nerve in the mid-1980s, collapsing an entire chain of specialized craft roles. The initial organizational response was to celebrate the productivity gains. What took longer to recognize was that design work had changed in character, not just in speed, and the roles and quality processes built around the old work no longer fit. The redesign lagged the tools by years.
Something similar is playing out now. Research on enterprise AI deployments keeps surfacing the same split. The vast majority of generative AI pilots fail to produce measurable financial impact, and the primary cause traces to poor workflow integration and misaligned organizational incentives. The companies capturing real value share a common thread: they rebuild processes from outcomes backward, around the new capabilities. In the failing pilots, the work hadn't been redesigned. The AI had just been added.
The satisfaction gap sits right at this juncture. The old job, plus AI, minus the parts that were engaging, equals a role that performs well on output metrics and poorly on every human dimension that sustains quality over time. The work itself needs to be redrawn, and no training program substitutes for that.
The few enterprises reading this signal well have moved past the productivity question entirely. They're asking what the human's job actually is now, and how to make that job worth doing. In practice, this means being explicit about what work disappears, what remains, and what new work emerges. One BCG-documented deployment rebuilt an entire product innovation workflow around AI agents from scratch, cutting cycle time by 60% while giving humans a genuinely different role: setting direction, managing tradeoffs, refining what the system couldn't evaluate on its own. The human job became harder and more interesting.
That's the design frontier as agent capabilities mature. Orchestration work can be engaging, even more engaging than production work, but only when it's designed as a real job with real judgment, a job someone would recognize as theirs. Enterprises that treat the satisfaction gap as a soft metric, subordinate to the productivity line going up, will discover what the printing industry discovered. The redesign happens eventually. The only question is whether it happens by intention or by attrition.
Things to follow up on...
- Workers want more agency: A Stanford SALT Lab survey of 1,500 workers across 104 occupations found that workers generally prefer higher levels of human agency than current AI capabilities would assign them, a gap that could widen as automation advances.
- The cognitive atrophy evidence: A peer-reviewed model in Information describes human-AI interaction as a nonlinear process where reflective engagement enhances metacognitive skills while over-delegation to automation reduces analytical autonomy, complicating the long-term picture for orchestration roles.
- When AI collaboration feels good: The HBS "Cybernetic Teammate" field experiment at Procter & Gamble found that AI actually increased positive emotions and decreased negative ones during innovation tasks, suggesting the satisfaction story may depend on whether AI augments the cognitively demanding core or replaces it.
- Engagement is already falling: Gallup's 2026 State of the Global Workplace report found employee engagement at its lowest level since 2020, though only 12% of workers at organizations that have implemented AI say the technology has meaningfully changed how work gets done.

