A pricing team at a mid-sized retailer wanted to understand their competitive position across their catalog. Simple enough question. The answer required checking 50,000 product pages daily. At 30 seconds per page, if you could somehow maintain that pace without breaks, that's 417 hours of work. Every single day.
They never tried. The work remained undone because it couldn't be scoped.
Companies hit a wall where the economics break down completely. As Box CEO Aaron Levie recently noted:
"some of the most interesting use-cases that keep coming up for AI agents are on bringing automated work to areas that the companies would not have been able to apply labor to before."
The constraint wasn't effort. The math never worked.
The Real Barriers
Building systems that automate web work at scale reveals exactly where the barriers are.
Take authentication. A login flow that worked yesterday returns a CAPTCHA today, not because the site detected automation, but because it rolled out a new security layer to all traffic. The "check 50,000 pages" plan just became "solve 50,000 CAPTCHAs," which is either impossible or illegal depending on your approach.
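What that looks like in practice: a minimal sketch of a scripted login with a guard for the challenge that wasn't there yesterday. This assumes Playwright's Python API; the URL, form selectors, and CAPTCHA marker are hypothetical placeholders, not any real site's markup.

```python
# Minimal sketch: yesterday's working login flow, plus a guard for today's
# CAPTCHA. URL and selectors are hypothetical placeholders.
from playwright.sync_api import sync_playwright

LOGIN_URL = "https://example.com/login"  # hypothetical

def try_login(user: str, password: str) -> bool:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(LOGIN_URL)

        # The new security layer appears before the form does.
        if page.locator("iframe[title*='challenge']").count() > 0:
            browser.close()
            return False  # log and flag for review; don't try to "solve" it

        page.fill("input[name='username']", user)
        page.fill("input[name='password']", password)
        page.click("button[type='submit']")
        page.wait_for_load_state("networkidle")

        logged_in = "/account" in page.url  # hypothetical success signal
        browser.close()
        return logged_in
```

Multiply that `return False` by 50,000 pages and the plan is dead on arrival.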
Regional variations compound the problem. The same product on the same site shows different prices, different availability, and different shipping terms depending on location. A monitoring plan that seemed straightforward balloons into checking every regional variant of every page. The scope explodes before you start.
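The blow-up is easy to put numbers on. A back-of-the-envelope sketch; the region count is an illustrative assumption:

```python
# Back-of-the-envelope scope explosion. The region count is an assumption.
products = 50_000
regions = 12             # storefront locales to cover (illustrative)
seconds_per_check = 30

checks = products * regions
hours_per_day = checks * seconds_per_check / 3600
print(f"{checks:,} checks/day ≈ {hours_per_day:,.0f} hours of manual work, daily")
# -> 600,000 checks/day ≈ 5,000 hours of manual work, daily
```

The original 417 hours was already impossible; regional coverage makes it an order of magnitude worse.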
The web wasn't designed to be monitored at scale. It was designed for humans browsing one page at a time. When you try to systematically check thousands of pages across hundreds of sites, you're working against the architecture itself.
Sites restructure without notice. A/B tests create moving targets. Bot detection evolves constantly. These technical realities made comprehensive monitoring economically impossible for human teams.
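You can see the fragility in the shape of the code itself. Below is the fallback pattern that redesigns and A/B tests force on any scraper; the selectors are hypothetical, and each one stands in for a layout that existed at some point and then didn't.

```python
# Sketch of the selector-fallback chain that site restructures and A/B tests
# force on scrapers. Selectors are hypothetical; each one is a former layout.
PRICE_SELECTORS = [
    "span.price",                # last year's layout
    "div[data-testid='price']",  # current layout
    "p.product-price__amount",   # A/B variant seen in some sessions
]

def extract_price(page) -> str | None:
    """Return the first price found, or None if the layout moved again."""
    for selector in PRICE_SELECTORS:
        locator = page.locator(selector)
        if locator.count() > 0:
            return locator.first.inner_text()
    return None  # the site restructured overnight; extend the list again
```

Every `None` means a human investigating, updating selectors, and re-running. Across hundreds of sites, that maintenance alone swamps a team.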
What Stayed Invisible
Retailers made pricing decisions without competitive context. They ran promotions based on quarterly snapshots. They managed inventory with partial visibility. Travel platforms couldn't verify real-time availability across thousands of properties. Financial-services firms managing massive portfolios made decisions with incomplete data.
These companies understood the value of comprehensive information. But certain work hits a threshold where it "can seem impossible." Even with dedicated teams, gathering information from all sources "will take you months." And by the time you finish, your first data points are obsolete.
The information existed. The questions were important. The work remained undone because it couldn't be resourced.
Recognition
When infrastructure exists that makes this work economically viable, the reaction reveals what's been missing.
"Wait, we've been making decisions without this information?"
Pricing teams realize they've been running promotions blind. Operations teams see gaps in their competitive intelligence. Product teams recognize decisions made with partial data.
The gap between "information theoretically available" and "information practically accessible" wasn't about companies being shortsighted. It was about rational responses to impossible constraints. Some work simply couldn't be done because the economics never worked.
Now they do.
Things to follow up on...
- Amazon's pricing frequency: The competitive landscape has shifted dramatically, with Amazon reviewing product prices every few minutes while most retailers struggle with legacy manual tools that can't keep pace.
- Travel platform scale: Major booking platforms like Trip.com process over 1 billion transactions annually with 200 million users, creating performance bottlenecks during peak periods that make real-time availability checking technically challenging.
- Private credit growth: Assets under management in private credit are approaching $1.7 trillion, nearly quadrupling since 2018, with success increasingly dependent on AI-driven analytics for portfolio monitoring at scale.
- Product matching complexity: AI-driven product matching across ecommerce platforms initially achieves 80-90% accuracy, requiring human validation to reach near-perfect accuracy across hundreds of stores and millions of products (a sketch of that routing follows below).
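To make that last note concrete: a minimal sketch of the human-in-the-loop routing it implies, where matches above a confidence cutoff are auto-accepted and the uncertain tail goes to reviewers. The threshold and data shapes are assumptions, not any particular vendor's pipeline.

```python
# Sketch of human-in-the-loop product matching: auto-accept high-confidence
# matches, queue the uncertain tail for review. Cutoff is an assumption.
from dataclasses import dataclass

AUTO_ACCEPT = 0.90  # assumed cutoff; tune against labeled match pairs

@dataclass
class Match:
    product_id: str
    candidate_id: str
    score: float  # model confidence in [0, 1]

def route(matches: list[Match]) -> tuple[list[Match], list[Match]]:
    """Split matches into (auto-accepted, needs-human-review)."""
    accepted = [m for m in matches if m.score >= AUTO_ACCEPT]
    review = [m for m in matches if m.score < AUTO_ACCEPT]
    return accepted, review
```

With 80-90% of matches clearing the cutoff automatically, reviewers only touch the ambiguous 10-20%, which is how the gap to near-perfect accuracy gets closed.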

