Finance teams are approving automation projects they rejected 18 months ago. Same use cases, same workflows, same operational requirements. The technology pitch didn't change. The business case structure didn't change. But the economic equation flipped hard enough that CFOs stopped asking "can we justify this?" and started asking "can we afford not to?"
We're seeing it in production deployments at TinyFish: enterprises that shelved web automation proposals in early 2023 are now running those exact workflows at scale. The work didn't change. The threshold for what pays for itself moved.
What Changed the Math
Three things happened simultaneously. First, compute got cheap: infrastructure costs dropped 70% since 2020, and DeepSeek's V3 model cut inference costs 36x compared to GPT-4o. But compute getting cheaper wasn't the real shift. Reliability infrastructure becoming affordable was.
Browser sessions were always cheap. The monitoring, error handling, and recovery systems needed to run thousands of concurrent sessions without constant human intervention—that's what cost real money.
When we built web agent infrastructure, a single unhandled authentication failure could cascade across hundreds of sessions. That operational reality meant "cheap" automation carried hidden maintenance costs that killed ROI.
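One common pattern for containing that kind of cascade is a per-site circuit breaker: after repeated failures on one target, the fleet stops dispatching new sessions there until a cooldown passes. A minimal sketch, with hypothetical names and thresholds (not a description of TinyFish's actual implementation):

```python
import time

class CircuitBreaker:
    """Stops dispatching new sessions to a site after repeated failures,
    so one broken auth flow can't cascade across the whole fleet."""

    def __init__(self, failure_threshold=5, cooldown_seconds=60):
        self.failure_threshold = failure_threshold
        self.cooldown_seconds = cooldown_seconds
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed (traffic allowed)

    def allow(self):
        # After the cooldown, let one probe session through ("half-open").
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at >= self.cooldown_seconds:
                self.opened_at = None
                self.failures = self.failure_threshold - 1
            else:
                return False
        return True

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()

# Simulate a site whose login is broken: every attempt fails.
breaker = CircuitBreaker(failure_threshold=3, cooldown_seconds=30)
for attempt in range(5):
    if not breaker.allow():
        print(f"attempt {attempt}: circuit open, session not started")
        continue
    breaker.record_failure()
```

After three consecutive failures the breaker opens, and the remaining attempts never launch a session — the failure stays contained to one site instead of burning hundreds of concurrent runs.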
Now reliability tooling that cost $2,500 monthly runs at $500. Observability that required dedicated engineering teams is platform-provided. Automation moved from "requires a maintenance team" to "runs predictably." That changes the economic equation fundamentally.
Second: competitive pressure made manual processes newly expensive. 43% of firms now dedicate majority AI budgets to agentic capabilities. When competitors automate pricing intelligence, manual monitoring becomes a strategic liability. The "do nothing" option acquired an opportunity cost that finance teams can quantify.
Third: reliability crossed the production threshold. When reliability infrastructure handles error recovery automatically, the economic model shifts from recurring operational expense to infrastructure that compounds.
The Marginal Work Goes Live
Over the next six months, we'll likely see a specific pattern: tasks that were "almost economically viable" 18 months ago start moving to production. These were workflows too expensive to justify but close enough to debate. Now they're crossing into deployment.
These aren't the highest-volume use cases that justified early investment. They're workflows that require 100-500 concurrent sessions reliably. Payment processing across regional banking systems. Compliance verification across supplier networks. Inventory monitoring across fragmented e-commerce platforms.
This changes how enterprises think about automation scope. Instead of "optimize for massive scale on a few workflows," the calculation becomes "reliable execution across hundreds of smaller workflows." Different infrastructure requirement entirely. You're not building for peak throughput on known patterns. You're building for consistent reliability across variable surfaces.
Operating web agent infrastructure at scale, we're seeing enterprises ask different questions now. Not "can your platform handle our authentication complexity?" but "what's your cost per thousand successful runs across 500 different sites?" That shift, from capability validation to unit economics, reveals how thoroughly the approval threshold moved.
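The "cost per thousand successful runs" framing is worth making concrete, because failed attempts that get retried still cost money. A back-of-envelope sketch, with hypothetical per-run prices and success rates:

```python
def cost_per_thousand_successes(cost_per_run, success_rate, max_retries=2):
    """Expected cost per 1,000 *successful* runs, counting the failed
    attempts that get retried. All figures are illustrative."""
    p = success_rate
    # Probability a task eventually succeeds within max_retries + 1 attempts.
    p_eventual = 1 - (1 - p) ** (max_retries + 1)
    # Expected attempts per task (geometric distribution, capped at the retry limit).
    expected_attempts = sum((1 - p) ** k for k in range(max_retries + 1))
    # Spend on all attempts, divided by the successes it actually produced.
    return 1000 * cost_per_run * expected_attempts / p_eventual

# Same $0.02 per run, different reliability:
reliable = cost_per_thousand_successes(0.02, success_rate=0.98)
flaky = cost_per_thousand_successes(0.02, success_rate=0.85)
print(f"98% reliable: ${reliable:.2f} per 1,000 successes")
print(f"85% reliable: ${flaky:.2f} per 1,000 successes")
```

The point of the exercise: reliability improvements show up directly in unit economics, which is why the question enterprises now ask is a sharper one than "can the platform handle it?"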
This could amplify quickly. Cheaper infrastructure enables more experimentation, which drives competitive pressure, which justifies more investment. Or it could create new bottlenecks we're not seeing yet. Perhaps around governance as automation sprawls across more workflows. Or around the operational complexity of managing hundreds of smaller automations instead of a few large ones. The trajectory seems clear, but the pace and specific constraints remain open questions.
The work that suddenly pays for itself isn't new work. It's work that was always valuable but crossed the economic viability line when three forces converged. 41% of enterprises expect more than 50% of AI deployments to be autonomous within two years. That's a signal about resource allocation. When the approval threshold moves this decisively, the question becomes how quickly enterprises recognize the shift and what they do with newly viable automation scope.
Things to follow up on...
- Budget allocation patterns: Enterprise AI spending grew 75% year-over-year while innovation budgets dropped from 25% to just 7% of total AI spend, revealing how AI graduated from experimental to core operational budgets.
- Payback period compression: The median payback period for RPA projects now spans less than 12 months, with 59% of enterprises registering major cost reductions from automation deployments.
- Infrastructure investment scale: OpenAI committed $38 billion over seven years for cloud infrastructure with immediate access to hundreds of thousands of Nvidia GPUs, signaling how frontier model providers are betting on compute capacity despite falling commodity AI costs.
- Hidden cost dynamics: Agentic systems generate exponentially more tokens through complex reasoning chains and tool usage, where a single user query can trigger dozens of internal AI interactions that create unexpected scaling costs.
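That last dynamic is easy to underestimate, because the amplification is invisible from the user's side. A rough sketch of the arithmetic, with all figures hypothetical:

```python
def agentic_query_cost(user_tokens, internal_calls, tokens_per_call,
                       price_per_million):
    """Rough cost of one user query when an agent fans out into many
    internal model calls. All inputs are illustrative assumptions."""
    total_tokens = user_tokens + internal_calls * tokens_per_call
    return total_tokens * price_per_million / 1_000_000

# One visible query vs. the same query with 30 internal reasoning/tool calls:
simple = agentic_query_cost(2_000, internal_calls=0, tokens_per_call=0,
                            price_per_million=1.0)
agentic = agentic_query_cost(2_000, internal_calls=30, tokens_per_call=4_000,
                             price_per_million=1.0)
print(f"amplification: {agentic / simple:.0f}x")
```

Even with falling per-token prices, a 30-call reasoning chain multiplies the token bill by an order of magnitude or two — the kind of hidden scaling cost the bullet above points at.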

