The composability wave is real—swap models, connect tools, build from best-of-breed components. Forty-five enterprise providers committed to interoperability in October. OpenAI adopted MCP in March, Google followed in April.
But here's what contradicts the narrative: the components that look most commoditizable often resist loose coupling. The pieces that seem proprietary turn out to be the ones you should compose.
When you're operating web agents that monitor competitor pricing across hundreds of sites or verify inventory at scale, you discover which architectural choices actually work. The market is sorting itself. Over the next six months, that sorting will reveal where composability works in web automation and where it doesn't.
The reasoning layer—deciding which prices matter, identifying patterns—composes easily. The execution layer—keeping browser sessions alive across adversarial sites—is where systems break.
What Operating at Scale Reveals
Consider a retail team monitoring pricing across 500 competitor sites. The AI reasoning part? Deciding which prices matter, identifying patterns. That's straightforward. Swap models, adjust prompts, compose from frameworks. This layer works exactly as the composability story promises.
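That swappability can be sketched as a thin interface the reasoning layer sits behind. Everything here is an illustrative assumption, not a real provider API: the registry, `register_model`, and `summarize_pricing` are hypothetical names, and the lambdas stand in for actual model calls.

```python
from typing import Callable, Dict

# Hypothetical sketch: reasoning-layer backends registered behind one
# interface, so swapping models is a configuration change, not a rewrite.
ModelFn = Callable[[str], str]
_MODELS: Dict[str, ModelFn] = {}

def register_model(name: str, fn: ModelFn) -> None:
    """Register a model backend under a name callers can select."""
    _MODELS[name] = fn

def summarize_pricing(model: str, observations: str) -> str:
    """Route a pricing-analysis prompt to whichever model is configured."""
    return _MODELS[model](observations)

# Stubs stand in for real providers; the caller never changes.
register_model("model-a", lambda prompt: f"[A] {prompt}")
register_model("model-b", lambda prompt: f"[B] {prompt}")
```

Because the caller only knows the name, moving the workload from one provider to another is a one-line change, which is exactly the property the composability story promises at this layer.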
But keeping those 500 browser sessions alive is where systems break. Sites change login flows without notice. Bot detection evolves specifically to break automation patterns. A/B tests make yesterday's navigation fail today. What looked like commodity browser infrastructure turns out to be the hard part.
Research shows 88% of enterprises need six months just to get a single agent solution operating. Most teams spend that time building custom orchestration, the layer that's commoditizing. Then they discover the real problem: reliable execution across sites that actively resist automation.
At TinyFish, we operate enterprise web agent infrastructure across thousands of sites. That operational reality shows which components can be loosely coupled and which need tight integration. Three execution-layer challenges don't respond to plug-and-play approaches:
- Authentication handling that adapts to site changes
- Browser management that maintains session state across concurrent agents
- Observability that reveals why workflows failed, not just that they failed
These challenges require understanding how sites defend against automation, knowledge that compounds through operational experience.
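The third item on that list can be made concrete: observability at the execution layer means recording a classified reason for every failure, not a bare success flag. This is a minimal sketch under stated assumptions; the failure categories, field names, and `classify` heuristics are illustrative, not TinyFish's actual schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional

class FailureKind(Enum):
    LOGIN_FLOW_CHANGED = "login_flow_changed"
    BOT_DETECTION = "bot_detection"
    NAVIGATION_CHANGED = "navigation_changed"  # e.g. an A/B test moved a selector
    TIMEOUT = "timeout"

@dataclass
class StepResult:
    site: str
    ok: bool
    failure: Optional[FailureKind] = None
    detail: str = ""
    attempts: int = 1

def classify(exc: Exception) -> FailureKind:
    """Toy heuristic mapping an exception to a failure category."""
    msg = str(exc).lower()
    if "captcha" in msg or "blocked" in msg:
        return FailureKind.BOT_DETECTION
    if "login" in msg:
        return FailureKind.LOGIN_FLOW_CHANGED
    if "timeout" in msg:
        return FailureKind.TIMEOUT
    return FailureKind.NAVIGATION_CHANGED

def run_with_retries(step: Callable[[str], None], site: str,
                     max_attempts: int = 3) -> StepResult:
    """Run one workflow step, recording *why* it failed, not just that it failed."""
    last = StepResult(site=site, ok=False)
    for attempt in range(1, max_attempts + 1):
        try:
            step(site)
            return StepResult(site=site, ok=True, attempts=attempt)
        except Exception as exc:  # classify, don't swallow
            kind = classify(exc)
            last = StepResult(site=site, ok=False, failure=kind,
                              detail=str(exc), attempts=attempt)
            if kind == FailureKind.BOT_DETECTION:
                break  # retrying the same pattern just burns the session
    return last
```

The design point is the early break: when the failure is bot detection, retrying the same navigation pattern makes things worse, which is the kind of judgment that only accumulates from operating against sites that fight back.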
Where Value Concentrates
Model access commoditizes. Generic orchestration frameworks proliferate. Tool-calling patterns become interchangeable. The reasoning layer is where composability delivers on its promise.
But infrastructure that handles adversarial environments at scale? That's where defensibility forms. Not because it's proprietary by design, but because reliability at scale requires understanding how sites defend against automation. That knowledge compounds through operational experience.
When the MACH Alliance announced its Agent Ecosystem initiative, the emphasis was on interoperability and composable architecture. But the harder infrastructure questions remain unaddressed in the broader composability conversation: standards for browser management, authentication handling, and anti-bot navigation.
The Next Six Months
Model interoperability will dominate headlines, along with agent marketplaces and reasoning frameworks. That matters for the composable layers.
But watch what happens more quietly: teams operating at scale will figure out which components can be safely modularized when the environment actively resists automation. How do you maintain reliability when sites change specifically to break your workflows? That's the harder question.
Teams that scale own their execution layer (or partner with infrastructure specialists), then compose everything else. Reasoning, orchestration, model selection: modular. Browser management, authentication handling, anti-bot navigation: tightly integrated.
The composability conversation will bifurcate. The generic story (swap any component, build from best-of-breed pieces) will apply to reasoning layers. But execution infrastructure in adversarial environments will follow different rules. Not because vendors want lock-in. Because reliability at scale requires operational depth that doesn't transfer through APIs.
Teams evaluating agent infrastructure need to understand which components are actually swappable in web automation. The generic-looking components often turn out to be the hardest to get right. The proprietary-feeling components are often the ones you should compose from existing tools.
The teams that understand this sorting will have infrastructure that scales. The ones that don't will spend 2026 rebuilding execution layers they thought they could outsource.
Things to follow up on...
- MCP's November updates: The next version of the Model Context Protocol specification releases November 25th with improvements to async operations, scalability, and server discovery that address current enterprise deployment challenges.
- Memory as infrastructure moat: While LLMs commoditize, Mem0's memory API processed 186 million calls in Q3 2025, growing 30% month-over-month as memory systems become a key differentiation layer.
- Enterprise adoption velocity: Gartner predicts 40% of enterprise applications will include integrated task-specific agents by 2026, up from less than 5% today, marking one of the fastest enterprise technology adoption curves on record.
- Open source production deployment: Eclipse LMOS is already powering Deutsche Telekom's customer-facing AI systems in one of Europe's largest enterprise agentic deployments, testing whether open platforms can compete with proprietary infrastructure at scale.

