Cloud infrastructure has followed the same sequence for nearly two decades. Provision compute, then secure it. Launch an instance, configure a security group. The shared responsibility model codified this order: the provider handles the physical layer, you handle everything running on top. Security was always the second step, layered around compute that existed first.
The logic held because the code running on that compute was written by humans with known intent. You could reason about what it would do before it did it. The security question was narrow: how do you prevent unauthorized access to a system whose behavior you already understand?
Agent infrastructure breaks that assumption at the root. Model-generated code is produced by a non-deterministic system. It might be correct. It might be subtly wrong. It might do something nobody anticipated. You genuinely don't know until it runs. When the code itself is uncertain, starting with an open environment and trying to constrain it afterward is working the problem from the wrong end.
Four organizations reached this conclusion in the same week. Google launched its Gemini Enterprise Agent Platform with secure sandboxes explicitly designed for "model-generated code" and a "secure-by-design architecture" for agent fleets. AWS shipped AgentCore with session isolation and natural-language policy boundaries. CIS published its MCP Companion Guide noting that agent-driven tool execution "differ[s] significantly from traditional integration models." And Cloudflare's Dynamic Workers articulated the design principle most directly: instead of starting with a general-purpose machine and constraining it, you start with almost no ambient authority and grant capabilities explicitly, resource by resource. No network access by default. No file system. Nothing until you say so.
"What exactly do we want this thing to be able to do?" That's the design question now. Everything follows from there.
Four different organizations. Four different business models. The same architectural primitive.
The MCP remote code execution story shows what happens when that primitive is absent. OX Security found that MCP's standard transport defaults exposed roughly 200,000 servers to arbitrary code execution. Anthropic's response was that this was "expected behavior." In a narrow sense, they were right: the protocol was designed with classical trust assumptions, where code runs in a trusted environment authored by known actors. But that response is also a claim about where responsibility sits. The protocol layer is saying containment is the deployer's problem, not ours. That is precisely the assumption the four announcements reject. When code is generated at runtime by a model you don't fully control, the open-unless-restricted default is where the exposure lives. No amount of deployer-side configuration fixes a starting posture that assumes trust.
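Setting MCP's specific transport details aside, the exposure mechanism is generic and worth seeing concretely. A hedged illustration using Python's standard `http.server`: the only difference between an internet-exposed service and a contained one is the bind address the default hands you.

```python
from http.server import HTTPServer, BaseHTTPRequestHandler


class Handler(BaseHTTPRequestHandler):
    """Throwaway handler; the point is the bind address, not the responses."""
    pass


# Open unless restricted: "0.0.0.0" listens on every network interface,
# so any host that can route to this machine can connect. This is the
# posture that turns a local dev tool into one of ~200,000 exposed servers.
exposed = HTTPServer(("0.0.0.0", 0), Handler)

# Closed unless opened: loopback only. Remote hosts cannot reach it,
# regardless of what the code behind it would do if they could.
contained = HTTPServer(("127.0.0.1", 0), Handler)

print(exposed.server_address[0], contained.server_address[0])

exposed.server_close()
contained.server_close()
```

One string constant separates the two postures, which is why "expected behavior" and "exposed by default" can both be true at once: the default is doing the security work, whether or not anyone chose it.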
The convergence here reflects something the industry is learning from actual deployment: for agent workloads, the relationship between compute and security has reversed. You start with an empty room and decide, deliberately, what to bring in.
A security improvement, yes. But the reorganization runs deeper than the threat model. Once authorization is the starting point, everything downstream shifts around it. Governance, cost structure, the boundary between platform and deployer. The empty room changes the building.
Things to follow up on...
- Google's protocol-aware gateway: Agent Gateway now enforces security policies across MCP and A2A interactions, though authorization based on protocol attributes currently works only for MCP.
- MCP governance matures fast: The protocol now has 97 million monthly SDK downloads and a new lead maintainer, with the 2026 roadmap prioritizing exactly the enterprise gaps containment-first architecture exposes: audit trails, SSO-integrated auth, and gateway behavior.
- Flowise exploitation in the wild: Separately from the MCP SDK vulnerability, hackers are already exploiting a critical Flowise MCP node flaw across nearly 15,000 publicly exposed instances.
- AWS policy as containment: AgentCore's new Policy feature lets teams define agent boundaries in natural language, specifying which tools, APIs, and conditions each agent can access before it runs.

