Decagon raised $131 million in June at a $1.5 billion valuation on roughly $10 million in ARR. A 150x revenue multiple. Five months later at Web Summit, Babak Hodjat—Cognizant's chief AI officer and a co-inventor of technology that led to Siri—told Fortune something revealing: "Their valuation is based on bigger is better, which is not necessarily the case."
That statement points to where value actually lands in production agent systems.
What Hodjat describes matches what we see running web agents at scale. Customers don't struggle because their models lack capability. They struggle because the infrastructure around those models doesn't handle production reality. Authentication flows break across regional variations. Site structures change and agents need to adapt. Retry logic needs to work without cascading failures. Model size doesn't solve these problems—infrastructure does, regardless of whether you're running GPT-4 or a smaller specialized model.
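The retry problem is a good example of why this is infrastructure work rather than model work. Below is a minimal sketch of retry with exponential backoff and full jitter; the jitter is what prevents thousands of concurrent agents from retrying in lockstep against a recovering site. The function name and parameters are illustrative, not from any particular agent framework.

```python
import random
import time

def run_with_backoff(task, max_attempts=4, base_delay=1.0, cap=30.0):
    """Retry a flaky web-agent step with capped exponential backoff.

    Full jitter spreads retries out in time so a fleet of agents
    doesn't hammer a recovering site simultaneously (the cascading
    failure mode). `task` is any zero-argument callable; the last
    failure is re-raised after max_attempts.
    """
    for attempt in range(max_attempts):
        try:
            return task()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Sleep a random amount up to the capped backoff for this attempt.
            delay = random.uniform(0, min(cap, base_delay * 2 ** attempt))
            time.sleep(delay)
```

Note that nothing here depends on which model the agent runs; the same wrapper sits in front of GPT-4-backed and small-model-backed agents alike.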
What's Actually Being Priced
| Company Type | Revenue Multiple | Assumption |
|---|---|---|
| Customer service AI agents (avg) | 127x | Massive scale, universal applicability |
| All top AI agents (avg) | 52x | Standard growth expectations |
| Decagon | 150x | Rapid replacement of human support teams |
Sierra demonstrates the trajectory investors expect. The company hit $20 million in annualized revenue at a $4.5 billion valuation in October 2024, then reached $100 million ARR at a $10 billion valuation by September 2025. That's the path Decagon needs to justify its pricing.
But Hodjat's statement suggests something different:
"There's a threshold at which point a large language model is able to follow instructions in a limited domain, and is able to use tools and actually communicate with other agents."
Beyond that threshold, additional model capacity doesn't necessarily improve outcomes.
Model capabilities are easy to demonstrate. Show an agent booking a flight or handling a complex customer query, and the potential feels obvious. Infrastructure that handles authentication across 50 regional variations when DOM structures shift? Invisible until it breaks. Investors price what they can see in demos. Customers pay for what works in production.
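The "invisible until it breaks" infrastructure often looks as mundane as a ranked list of fallback selectors. A hypothetical sketch, assuming a `page.query` method standing in for a browser-automation call (e.g. Playwright's `page.query_selector`); the selectors and error handling are illustrative:

```python
def find_login_field(page,
                     candidates=("#email",
                                 "input[name='email']",
                                 "input[type='email']")):
    """Try a ranked list of selectors so the agent survives DOM changes.

    When a site ships a redesign or a regional variant, the primary
    selector fails but a semantic fallback often still matches.
    """
    for sel in candidates:
        el = page.query(sel)
        if el is not None:
            return el
    raise LookupError("no known selector matched; site layout changed")
```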
The Pattern in Actual Deployments
What's emerging from companies running agents in production tells a different story than the valuation multiples suggest:
- Mozilla runs AI features using smaller, local models
- Arm creates custom extensions on top of LLMs for specific use cases rather than deploying massive general-purpose models
- IBM Ventures invested in Not Diamond, a model router, because 85% of companies using AI use multiple models—choosing the right model for the right task matters more than using the biggest model for everything
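The routing idea above can be sketched in a few lines. This is a toy rule-based router with made-up model names and costs; production routers like Not Diamond learn these decisions from evaluation data rather than hard-coded tiers:

```python
def route(task):
    """Pick the cheapest model judged capable of the task (toy version).

    Each entry is (name, relative cost per call, capability tier).
    The tiers and thresholds here are invented for illustration.
    """
    MODELS = [
        ("small-local", 1, 1),
        ("mid-tier", 5, 2),
        ("frontier", 25, 3),
    ]
    needed = 3 if task.get("multi_step") else (2 if task.get("tools") else 1)
    # Cheapest model whose capability tier meets the task's requirement.
    return min((m for m in MODELS if m[2] >= needed), key=lambda m: m[1])[0]
```

The point the sketch makes: once any model above the needed tier will do, cost, not raw capability, drives the choice.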
For web agents specifically, this threshold effect becomes even more visible. Once a model can reliably interpret site structures and follow multi-step instructions, additional model capacity doesn't address the real challenges: handling CAPTCHAs, managing session state across thousands of concurrent runs, adapting when sites A/B test their layouts, maintaining cost predictability at scale. Infrastructure problems, not reasoning problems.
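Cost predictability at scale is likewise an infrastructure primitive, not a model feature. A minimal sketch of bounding concurrent agent runs with a semaphore, assuming each run is represented as a zero-argument coroutine function; names are illustrative:

```python
import asyncio

async def run_fleet(tasks, max_concurrent=100):
    """Run many agent tasks with a hard concurrency ceiling.

    Bounding concurrency keeps session pools, rate limits, and
    per-minute spend predictable even when thousands of runs are
    queued. `tasks` is an iterable of async callables.
    """
    sem = asyncio.Semaphore(max_concurrent)

    async def guarded(task):
        async with sem:  # at most max_concurrent runs in flight
            return await task()

    return await asyncio.gather(*(guarded(t) for t in tasks))
```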
When Cognizant deployed a multi-agent system for its 350,000 employees, Hodjat tested a network with 10,000 agents. It was accurate and not too slow, but it wasn't cheap. The challenge wasn't model capability. It was coordination, cost, and governance at scale.
Where This Points
If Hodjat is right about capability thresholds, the competitive landscape shifts. Value won't flow to whoever deploys the biggest models. It'll flow to whoever builds infrastructure that makes specialized agents work reliably at scale. The companies that reach $100M+ ARR won't be the ones with the most impressive demos. They'll be the ones whose customers can depend on their agents in production, day after day, across thousands of workflows.
That's a different moat than what's currently priced into these 150x multiples. And a different kind of company than investors may be expecting.
Things to follow up on...
- Capital efficiency metrics: Some AI agent startups like Mercor and Cursor are already achieving revenue per employee that surpasses Microsoft and Meta, rivaling even Nvidia's efficiency levels.
- MIT's breakthrough technologies: MIT Technology Review included "small language models" in its 2025 breakthrough technologies list, noting that smaller models trained on focused datasets can now match or exceed larger models for specific tasks.
- Market growth projections: The customer support automation market is projected to grow from $308 million in 2022 to $2.89 billion by 2032, with IDC predicting AI will handle 95% of customer interactions by 2025.
- Foundation model economics: HSBC's analysis shows OpenAI has committed to $1.4 trillion in data center buildouts and will need an additional $207 billion in funding to survive even if it generates $200 billion in revenues by 2030.

