A traditional web service runs for days or weeks. The cost of spinning up its container gets paid once and amortized across millions of requests. Nobody loses sleep over container startup time when the container lives for a month.
Agent workloads don't get that luxury. An agent checking a price, running a calculation, or querying an API on behalf of a user generates a discrete execution context for each action. For security, each context needs its own sandbox. The code runs, returns a result, gets thrown away. Then the next action needs a fresh sandbox. Cloudflare's back-of-envelope math: over 100 million US knowledge workers, each running several agentic assistants at roughly 15% concurrency, works out to approximately 24 million simultaneous sessions, each generating streams of these tiny, isolated tasks. And that's before counting the rest of the world's billion-plus knowledge workers.
When each of those tasks needs its own sandbox, startup cost is no longer a rounding error. A container takes hundreds of milliseconds to start and hundreds of megabytes of memory to keep resident. Multiply that across millions of concurrent sessions, each producing a dozen tool calls, and you're burning thousands of CPU-hours on nothing but spinning up and tearing down execution environments. The work inside each sandbox might take a few milliseconds. The envelope costs more than the letter.
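The arithmetic is easy to sketch. The figures below are the orders of magnitude the text cites (24 million sessions, a dozen tool calls each, hundreds of milliseconds per container start, a few milliseconds per isolate), not measured values:

```javascript
// Back-of-envelope: CPU time spent purely on sandbox startup,
// using the rough orders of magnitude cited above.
const sessions = 24e6;           // concurrent agent sessions
const toolCallsPerSession = 12;  // "a dozen tool calls"
const containerStartupSec = 0.3; // hundreds of milliseconds
const isolateStartupSec = 0.003; // a few milliseconds

// Total CPU-hours burned on startup alone, before any real work runs.
function startupCpuHours(startupSec) {
  return (sessions * toolCallsPerSession * startupSec) / 3600;
}

console.log(startupCpuHours(containerStartupSec).toFixed(0)); // → "24000"
console.log(startupCpuHours(isolateStartupSec).toFixed(0));   // → "240"
```

Tens of thousands of CPU-hours of pure overhead per "generation" of tool calls at container startup speeds, versus hundreds at isolate speeds: same two-orders-of-magnitude gap the table below shows.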
A V8 isolate takes a different approach. It's an execution context within a shared process, using the same JavaScript engine that powers Chrome. Thousands of isolates share a single process and a single runtime instance, so the marginal cost of adding one more is small.
| | V8 Isolate | Container |
|---|---|---|
| Startup time | Few milliseconds | Hundreds of milliseconds |
| Memory per instance | Few megabytes | Hundreds of megabytes |
| Concurrency limits | None | Per-cluster (e.g., ~300/sec) |
Cloudflare states those figures represent roughly 100x faster startup and 10–100x better memory efficiency. At agent scale, that gap is your cost structure.
The efficiency comes from sharing. And sharing is where the trade-off gets interesting. Isolates within the same process don't get OS-level separation. Cloudflare knows this. They run multiple defensive layers beneath the isolate boundary, including process-level containment and hardware memory protection. But the boundary is still lighter than a container's, and lighter means it can't do everything. No filesystem. No arbitrary binaries. Your agent needs to write JavaScript. If it needs to clone a repo, install packages, and run a test suite, it needs a heavier environment.
Google's GKE Agent Sandbox occupies that heavier end. It uses gVisor's user-space kernel to intercept system calls and provide full OS-level isolation, launching 300 sandboxes per second per cluster at sub-second latency. Fast, for containers; still a different order of magnitude from isolates. But when the workload genuinely needs a filesystem and arbitrary binaries, the heavier overhead pays for capabilities the workload actually uses.
Both approaches have a workload shape where their overhead fades into the background. Containers amortize beautifully over long-lived services. Isolates amortize beautifully over billions of ephemeral micro-tasks. Agent infrastructure is pulling compute hard toward the ephemeral end.
Per-unit costs get locked in at the sandboxing layer, before a single line of agent-generated code runs.
Cloudflare makes this argument explicitly: running each agent in its own container is expensive enough that agentic tools today are mostly limited to users who can justify the cost. For agents to move beyond early adopters, they have to be affordable. The sandbox is where that gets decided.
Things to follow up on...
- AWS AgentCore Browser expands: Amazon Bedrock AgentCore Browser now supports OS-level interaction capabilities beyond Chrome DevTools Protocol, addressing automation scenarios where browser-only sandboxing isn't enough.
- Google's agent identity layer: The Gemini Enterprise Agent Platform assigns every agent a unique cryptographic ID with defined authorization policies, creating an auditable trail that pairs with sandbox isolation as a governance surface.
- MCP's unsafe-by-default problem: OX Security found that Anthropic's MCP SDKs ship with unsafe defaults in the STDIO transport interface, producing 10 critical CVEs across popular frameworks and raising questions about what "expected behavior" means when your protocol is a supply chain.
- The architectural divide hardens: InfoQ's assessment of Cloudflare's bet notes an emerging split between long-lived agent environments and ephemeral execution layers, with the open question of whether these converge or become distinct market segments.

