Three weeks before Microsoft launched its governance platform, 45 enterprise technology providers made a different bet. The MACH Alliance—including Accenture, Deloitte, commercetools, Stripe, and Workato—announced they're building an "Agent Ecosystem" focused not on tracking agents, but on making them work together across vendor boundaries.
The architectural difference reveals competing assumptions about what agent proliferation actually requires. Microsoft's approach assumes agents need centralized governance to remain manageable. The MACH Alliance assumes agents need standardized communication to become useful. Both are responding to the same proliferation. They're building opposite infrastructure to handle it.
The Alliance is developing protocols—Model Context Protocol, Agent2Agent Protocol, Agentic Commerce Protocol—that let agents from different vendors collaborate in production. Not just within a single platform's control plane, but across the fragmented landscape enterprises actually operate in. Their goal is what they call the "Internet of Agents": hundreds of specialized agents coordinating workflows that span multiple platforms, multiple vendors, multiple authentication systems.
This shifts the scaling problem fundamentally. Instead of tracking every agent centrally, you make agents self-describing. Instead of monitoring behavior from above, you build observability into how agents communicate. Instead of governance as surveillance, you get governance as coordination.
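One way to read "self-describing" concretely: each agent publishes a manifest of its capabilities, and peers discover one another by querying those manifests rather than reporting to a central registry of record. The sketch below is illustrative only; the field names and capability strings are assumptions, not drawn from any Alliance specification.

```python
from dataclasses import dataclass, field

@dataclass
class AgentManifest:
    """Self-description an agent publishes so peers can discover it."""
    name: str
    vendor: str
    capabilities: list            # e.g. ["inventory.read", "pricing.update"]
    protocol_versions: dict = field(default_factory=dict)

class PeerDirectory:
    """Agents register themselves; discovery is by capability,
    so no central tracker needs to know every agent in advance."""
    def __init__(self):
        self._agents = []

    def register(self, manifest: AgentManifest):
        self._agents.append(manifest)

    def find(self, capability: str):
        return [a for a in self._agents if capability in a.capabilities]

directory = PeerDirectory()
directory.register(AgentManifest("stock-watcher", "vendor-a", ["inventory.read"]))
directory.register(AgentManifest("price-bot", "vendor-b", ["pricing.update"]))
print([a.name for a in directory.find("pricing.update")])  # ['price-bot']
```

The point of the sketch is the inversion: governance data (who can do what) lives in what agents say about themselves, not in a platform's central ledger.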
When agents need to collaborate across vendor boundaries, though, they're not just exchanging data. They're passing authenticated sessions, handling errors across systems they don't control, coordinating workflows where failure in one component cascades unpredictably. The Alliance's protocols don't fully address this operational complexity.
Consider what happens when a MACH-compliant agent from Vendor A needs to pass authenticated session data to a MACH-compliant agent from Vendor B, but the target site uses device fingerprinting that breaks when the session transfers. The protocols define how agents should communicate. They don't define how agents handle the adversarial web's resistance to exactly this kind of coordination.
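A defensive pattern for exactly this failure mode: carry the fingerprint-relevant context along with the session, and have the receiving agent verify the match before reusing it, falling back to re-authentication on a mismatch rather than tripping bot detection with a silently transplanted session. Everything below is hypothetical; real sites compute fingerprints from far more signals than the two stand-ins used here.

```python
import hashlib

def fingerprint(user_agent: str, tls_version: str) -> str:
    """Stand-in for the device fingerprint a site might compute."""
    return hashlib.sha256(f"{user_agent}|{tls_version}".encode()).hexdigest()

def export_session(cookies: dict, user_agent: str, tls_version: str) -> dict:
    """Agent A packages its session together with the context that produced it."""
    return {"cookies": cookies, "fingerprint": fingerprint(user_agent, tls_version)}

def import_session(envelope: dict, user_agent: str, tls_version: str):
    """Agent B reuses the session only if its own context matches.
    On mismatch it returns None, signalling 're-authenticate' instead
    of risking a fingerprint-mismatch flag on the target site."""
    if fingerprint(user_agent, tls_version) != envelope["fingerprint"]:
        return None
    return envelope["cookies"]
```

This doesn't solve fingerprinting; it makes the handoff fail loudly and locally instead of three steps later on someone else's infrastructure.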
Or when an agent monitoring inventory across regional e-commerce sites needs to coordinate with an agent handling pricing updates, and both need to authenticate to sites that deploy bot detection specifically designed to catch this pattern. The interoperability layer can standardize how agents describe their capabilities. It can't standardize how sites resist being automated, or how authentication flows vary by region, or how rate limits trigger differently based on behavior patterns that span multiple agents.
Operating web agents at scale teaches you that distributed coordination sounds elegant until you hit production realities. When you're running agents that need to authenticate across different systems, handle data from various sources, and coordinate workflows that span multiple platforms, the protocol layer is the easy part. The infrastructure layer underneath—error handling that works across vendor boundaries, observability that traces failures through systems you don't control, session management that survives handoffs between agents with different authentication contexts—that's where things get hard.
The MACH Alliance's bet is that composable architecture handles agent proliferation better than centralized governance. Their data suggests companies with composable systems see a 77% success rate deploying AI, versus 36% for those without. The pattern they see: organizations that learned to make modular systems work together are better positioned for agent proliferation.
Distributed coordination requires acknowledging what happens when agents operate on the open web. Agents need to speak a common language, yes. But the web actively resists coordination. Sites deploy CAPTCHAs that break workflows. Authentication flows change without notice. Rate limits trigger based on patterns that span multiple agents. A/B tests show different structures to different agents, making coordination unpredictable.
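Rate limits that trigger on patterns spanning multiple agents suggest one mitigation: agents targeting the same site draw from a shared request budget instead of each enforcing its own limit in isolation. A minimal sliding-window sketch, with illustrative limits and no claim about what any real site enforces:

```python
import time

class SharedRateBudget:
    """A request budget shared across agents, so their combined
    traffic to one site stays under a single limit."""
    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps = []

    def try_acquire(self, now=None) -> bool:
        """Returns True and records the request if budget remains;
        the caller backs off when it returns False."""
        now = time.monotonic() if now is None else now
        # Drop requests that have aged out of the window.
        self.timestamps = [t for t in self.timestamps if now - t < self.window]
        if len(self.timestamps) >= self.max_requests:
            return False
        self.timestamps.append(now)
        return True
```

In production the budget would live in shared state (a cache or coordination service) rather than one process, but the structure is the same: the limit belongs to the site, so the accounting has to span the agents.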
When agents from different vendors need to collaborate, error handling crosses vendor boundaries in ways that make debugging nearly impossible. If an agent from Vendor A passes data to an agent from Vendor B, and something breaks three steps later in a workflow that spans five vendors, how do you trace the failure? What does observability look like when the execution path crosses systems you don't control? Who's responsible when the workflow fails because a site changed its structure, or authentication broke on a regional variant, or bot detection triggered on an interaction pattern that only emerges when multiple agents coordinate?
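One concrete answer to the tracing question is context propagation: the first agent mints a trace id, and every hop, whatever the vendor, appends a span recording what it did and whether it succeeded. The sketch below borrows the shared-trace-id idea from the W3C Trace Context standard but simplifies the format; the vendor names and steps are illustrative.

```python
import uuid

def new_trace() -> dict:
    """Start a trace at the first agent. The trace_id is shared by
    every hop, so spans from different vendors can be correlated."""
    return {"trace_id": uuid.uuid4().hex, "spans": []}

def run_step(trace: dict, vendor: str, step, payload):
    """Each agent records its own span, success or failure, so a break
    three steps in still points at the hop that failed."""
    span = {"vendor": vendor, "step": step.__name__, "ok": True}
    try:
        result = step(payload)
    except Exception as exc:
        span["ok"] = False
        span["error"] = repr(exc)
        trace["spans"].append(span)
        raise
    trace["spans"].append(span)
    return result
```

This only works if every vendor's agent agrees to carry and append to the trace, which is exactly the kind of obligation an interoperability protocol could standardize but currently leaves open.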
Running web agents at scale means facing these questions in production. Centralized governance assumes you can track everything from above. Distributed coordination assumes agents can work together through protocols. Both assumptions face the same challenge: the web is adversarial by design. Sites don't want to be automated. They change specifically to break automation. They deploy detection that adapts to patterns.
The MACH Alliance's protocols can standardize how agents communicate. They can't standardize how the web resists being automated, or how authentication complexity multiplies across vendor boundaries, or how failures cascade through distributed workflows in ways that make root cause analysis nearly impossible.
Both approaches—Microsoft's centralized governance and the MACH Alliance's distributed coordination—acknowledge agent proliferation isn't stopping. Microsoft projects 1.3 billion agents by 2028. The Alliance sees hundreds of specialized agents working together as inevitable. They're building different infrastructure for the same future, and the tension between the two approaches points at what's actually required.
Neither approach fully accounts for what operating agents on the adversarial web demands. Governance that doesn't understand web complexity can track agents perfectly while missing why they fail. Coordination protocols that don't account for how sites resist automation can enable agent collaboration while making failures impossible to debug.
We're building the infrastructure layer that handles both: observability that works across vendor boundaries, error handling that traces failures through distributed workflows, session management that survives the web's resistance to automation. Whether tracking is centralized or coordination is distributed, both need that layer underneath to handle what web agents actually encounter in production. When it exists, agent proliferation becomes reliable instead of just trackable.

