There's a scenario that shows up repeatedly in robotic process automation forums, always with the same texture. Monday morning, operations teams arrive to find dozens of broken bots. A weekend software update changed a single dropdown menu. An entire purchase order workflow is down. The IT director cancels strategic planning to firefight.
They had a name for it: the RPA Death Spiral. Line-of-business owners create bots without IT involvement. The bots work until something changes. Interfaces get redesigned, data formats evolve, connected systems upgrade. Each break requires technical diagnosis. Each fix holds until the next change. Ernst & Young found that 30 to 50 percent of RPA projects fail outright, and industry surveys show that more than half of deployments never scale beyond ten bots.
The failure modes were specific and well-cataloged. UI fragility was the most consistent: a button moves, an identifier changes, and the bot either stops or returns bad data. One documented case: a retailer bundled over fifty individual actions into a single bot, and when upstream systems changed, the entire chain collapsed. Technical complexity outrunning the automation's ability to absorb it. A different failure: a firm opened the door for citizen developers to build bots freely. Thousands of overlapping, undocumented automations accumulated until nobody knew what existed, let alone what was broken. Organizational complexity, and it turned out to be harder to recover from. The technical failures at least produced error messages.
This documentation is not ancient. The bulk of it dates from 2018 to 2023. It's detailed, sitting in industry reports and CIO publications. Licensing turned out to represent only 25 to 30 percent of RPA's total cost of ownership. The rest was implementation, maintenance, and support.
Agentic AI genuinely extends reach beyond what RPA could manage. Agents interpret unstructured context, adapt to novel inputs, coordinate across tools. They also worsen predictability in a specific way. An RPA bot that breaks produces an error. An agent that drifts produces a confident wrong answer. The failure mode gets quieter. And quiet failure modes compound, because nobody firefights what nobody notices.
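The contrast is easy to see in miniature. Here is a minimal Python sketch, with hypothetical form and field names invented purely for illustration: a hardcoded bot fails with an exception when a field is renamed, while a fuzzy-matching "agent" keeps producing an answer, just the wrong one.

```python
# Hypothetical forms standing in for a UI or API surface.
# The form as the bot was originally built against it:
invoice_form_v1 = {"po_number": "PO-1042", "total": "310.00"}

# After a weekend update: fields renamed and reordered.
invoice_form_v2 = {"order_total": "310.00", "reference": "PO-1042"}

def rpa_bot(form):
    """Hardcoded lookup: breaks loudly when the interface changes."""
    return form["po_number"]  # raises KeyError against v2

def naive_agent(form):
    """Fuzzy fallback: grabs the first plausible-looking field.
    It 'adapts' to schema changes, but a rename can make it return
    a confident wrong value instead of an error."""
    for key, value in form.items():
        if "po" in key or "order" in key:
            return value
    return None

print(rpa_bot(invoice_form_v1))      # works: PO-1042
try:
    rpa_bot(invoice_form_v2)         # loud failure: KeyError
except KeyError:
    print("bot stopped with an error")

# Quiet failure: "order_total" matches the heuristic first,
# so the agent returns the total where a PO number belongs.
print(naive_agent(invoice_form_v2))  # 310.00
```

The broken bot at least pages someone; the drifting agent writes "310.00" into a purchase-order field and nobody looks twice.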
The market itself is getting quieter too. Of the thousands of vendors now claiming agentic capabilities, Gartner estimates roughly 130 are genuine. The rest are engaged in what the industry calls "agent washing": existing RPA tools and chatbots relabeled without meaningful new capability. The very tools that generated the scar tissue, repackaged under new branding.
Gartner now predicts that over 40 percent of agentic AI projects will be canceled by the end of 2027, citing escalating costs, unclear business value, and inadequate risk controls.
The reasons Gartner cites read like a summary of RPA post-mortems. In researching this piece, I looked for a single major agentic AI framework whose documentation explicitly references those post-mortems. I couldn't find one. Not one that cites the decade of incident reports on UI fragility, credential sprawl, exception cascades, ownership ambiguity. All of it is being encountered as if for the first time.
The scar tissue is there. Nobody packed it for the trip.

