When Stanford's SALT Lab surveyed 1,500 workers across 104 occupations about AI automating their tasks, only 46.1% expressed positive attitudes. AI capability runs well ahead of that enthusiasm. The gap is worth paying attention to.
Fear is in there, sure. But the more telling signal shows up when you look at which tasks people protect.
The strongest predictor of resistance is enjoyment — stronger than complexity, stronger than wage level. When workers report enjoying a task, their desire to see it automated drops sharply. Scheduling client appointments? Automate away. Rectifying errors in records? Please. But tasks involving creativity, interpersonal communication, or domain judgment? Workers hold on, even when the technology is ready. In Arts, Design, and Media, only 17.1% of tasks received positive automation ratings.
The SALT Lab calls this the Red Light Zone: tasks where AI can perform well but workers actively resist. What's striking is how much investment flows directly into these zones. The researchers mapped Y Combinator companies against their task taxonomy and found that 41% of startup-task mappings concentrate in Red Light and Low Priority zones. Roughly two-fifths of current AI startup activity is aimed at tasks workers either don't want automated or don't consider worth automating.
On 47.5% of tasks, workers preferred more human involvement than AI experts deemed technically necessary. The dominant preference across 47 of 104 occupations was "equal partnership."
This gap between "can automate" and "want automated" is a map. It shows where meaning is embedded in work, where identity is tangled up with tasks, where the thing that makes a job feel like yours lives inside the doing of it. Enjoyment, creative expression, the back-and-forth with a client. The parts of work that people organize their professional selves around.
And that tells you something about where agent deployment gets genuinely hard next. The capability-matching framing asks: can the agent do this task? The preference data points somewhere else entirely. Organizations that treat deployment as a technical coverage problem will keep running into resistance that looks irrational from the outside. It is entirely rational. Workers are protecting something real. They just don't always have the vocabulary for it, because "this task is part of how I know who I am at work" doesn't fit neatly on a survey.
Take that preference data seriously as design input, and the sequence of deployment probably shifts. You'd start where workers are already asking for help and build trust before approaching the tasks where meaning lives. You'd design for the "equal partnership" that workers across nearly half of all occupations actually want, instead of optimizing for full automation and treating human involvement as a transitional cost. The 54% of tasks where workers say "no," or "not yet," or "only with me still involved" stops looking like a problem to solve. Those boundaries track what work means to the people doing it.