Service discovery is the cleanest transfer of the five. MCP's Server Cards proposal uses a .well-known/mcp.json endpoint to expose server metadata: name, version, available tools, transport type, auth requirements. If you've built discovery around Consul or DNS-SD, the shape is immediately familiar. One wrinkle: server cards are explicitly advisory. Actual capabilities are only confirmed after connecting and completing initialization. In Consul, the registry entry is routing truth. Here, it's a hint. That matters for anyone building automated orchestration that trusts the card without verifying. But the engineering pattern ports.
Service registries sit in a similar place, just earlier. There's no standard registration protocol for MCP servers today. Server Cards address the read side; there's no push or heartbeat model for servers to register and deregister. The MCP roadmap frames the solution as federated, coordinating with a broader "industry AI-catalog effort." If you've run Eureka or etcd, you understand the destination. The road isn't built yet.
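Until that road exists, teams that need registration end up rolling a Consul-style stopgap themselves. A sketch of the shape such an in-house registry takes (entirely hypothetical; nothing like this is in the MCP spec today):

```python
import time

class InterimRegistry:
    """Consul-style TTL registry for MCP servers.

    A stand-in for the push/heartbeat model the protocol lacks:
    servers heartbeat to stay listed, and silence past the TTL
    acts as implicit deregistration.
    """

    def __init__(self, ttl_seconds: float = 30.0):
        self.ttl = ttl_seconds
        self._entries: dict[str, tuple[str, float]] = {}  # name -> (url, last_seen)

    def heartbeat(self, name: str, url: str) -> None:
        """Servers call this periodically to register or refresh."""
        self._entries[name] = (url, time.monotonic())

    def healthy(self) -> dict[str, str]:
        """Only entries whose heartbeat is within the TTL window."""
        now = time.monotonic()
        return {name: url for name, (url, seen) in self._entries.items()
                if now - seen <= self.ttl}
```

The TTL-based expiry is what Eureka and etcd leases give you for free, and what a federated MCP registry would presumably have to standardize.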
These two transfer because they're mostly web infrastructure wearing agent clothes. Discovery and metadata are solved problems being reapplied. And the variance between what transfers and what doesn't turns out to be revealing. It tracks something specific: how much actual agentness a given layer involves.
Where familiar patterns stop working
API gateways are where distributed systems intuition starts to mislead. A traditional gateway routes HTTP requests based on paths, headers, methods. An MCP gateway has to parse full JSON-RPC payloads to determine which operation is being invoked. Stateful MCP connections force sticky routing that pins traffic to specific servers. Then there's the confused deputy scenario, where an attacker exploits dynamic client registration to redirect authorization codes through a compromised proxy. The MCP roadmap categorizes gateway behavior as needing "clear problem statements and directional proposals."
Still at the level of problem statements. Your gateway experience tells you what the answer should look like. The agent-specific properties of the problem (statefulness, bidirectional communication, RPC-level routing) mean the implementation requires new work that doesn't exist in your back catalog.
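To see why path-based routing breaks down, consider what an MCP gateway actually has to do per request: every call hits one endpoint, so the routing key lives inside the JSON-RPC body, and session state pins later calls to the first backend chosen. A sketch (backend names and the routing rule are invented for illustration):

```python
import json

BACKENDS = {"search": "http://mcp-search:8080", "db": "http://mcp-db:8080"}
_sticky: dict[str, str] = {}  # session id -> pinned backend

def route(session_id: str, raw_body: bytes) -> str:
    # Stateful MCP connections force sticky routing: once a session
    # is pinned to a backend, every later call must land there too.
    if session_id in _sticky:
        return _sticky[session_id]

    # An HTTP gateway never needs this step: the operation being
    # invoked is only visible after parsing the full RPC payload.
    msg = json.loads(raw_body)
    method = msg.get("method", "")                 # e.g. "tools/call"
    tool = msg.get("params", {}).get("name", "")   # which tool is invoked

    backend = BACKENDS["search"] if tool.startswith("search_") else BACKENDS["db"]
    _sticky[session_id] = backend
    return backend
```

Note what's missing relative to a real gateway: bidirectional notifications flowing server-to-client, and session failover when a pinned backend dies. Those are the parts with no back-catalog answer.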
Message queues feel strained in a different way. Three active protocols now cover agent communication: MCP for model-to-tool calls, A2A (which absorbed IBM's ACP last September) for agent-to-agent coordination, and ANP for decentralized peer-to-peer scenarios still in research. A2A supports async patterns through webhooks and SSE streaming. But none implement persistent broker semantics. No consumer groups, no offset management, no message persistence independent of the communicating parties. Teams expecting Kafka-like decoupling will need to build that layer themselves.
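What "build that layer themselves" means concretely: message persistence independent of the communicating parties, plus per-consumer-group offsets. A minimal in-memory sketch of those broker semantics (a toy, not a substitute for a real broker):

```python
class DurableTopic:
    """Minimal Kafka-style log: messages persist independently of
    producers and consumers, and each consumer group tracks its own
    read position. None of MCP, A2A, or ANP provides this layer.
    """

    def __init__(self):
        self._log: list[dict] = []          # message persistence
        self._offsets: dict[str, int] = {}  # consumer group -> next offset

    def publish(self, message: dict) -> int:
        self._log.append(message)
        return len(self._log) - 1  # offset of the appended message

    def poll(self, group: str, max_messages: int = 10) -> list[dict]:
        start = self._offsets.get(group, 0)
        batch = self._log[start:start + max_messages]
        self._offsets[group] = start + len(batch)  # commit the offset
        return batch
```

The decoupling shows up in `poll`: two groups consume the same log at independent offsets, and a producer never learns or cares whether anyone has read its message. Webhooks and SSE streams, by contrast, couple delivery to both parties being alive at the same time.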
Tracing across the reasoning boundary
Distributed tracing is where the transfer gap becomes structural. A2A builds in OTLP trace IDs and metrics by design. MCP does not. The roadmap names audit trails as an enterprise priority with no spec proposal attached. That split tracks their origins. A2A was designed later, with enterprise coordination as a first-class concern. MCP originated as a tool-calling protocol, and cross-boundary tracing wasn't the problem it was solving.
The gap persists for a specific reason. In microservice tracing, context propagates via HTTP headers between services that each emit spans. In an agent workflow, the trace needs to cross the LLM's reasoning boundary: what the user asked, what the model decided, which tool it selected, what arguments it passed, what came back. The first two steps happen inside the LLM host, entirely outside MCP's transport. There is no standard way to propagate trace context across that decision boundary. OpenTelemetry doesn't straightforwardly apply because the most consequential step in the chain is an inference, not an HTTP call.
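In practice, teams that want end-to-end traces stitch them together by hand: mint a trace ID in the host, record the inference step as a span manually, and smuggle the ID alongside the tool arguments under a convention both sides agree on. A sketch of that workaround (the `_trace_id` field and the two callables are assumptions, not anything a spec defines):

```python
import time
import uuid

def traced_agent_turn(user_query: str, llm_decide, call_tool):
    """Manually stitch a trace across the reasoning boundary.

    llm_decide and call_tool are stand-ins for your model host and
    MCP client. No standard defines how the trace id crosses the
    tool-call boundary; the _trace_id convention below is ours.
    """
    trace_id = uuid.uuid4().hex
    spans = []

    # Span 1: the inference step. Nothing auto-instruments "what the
    # model decided"; you record it by hand inside the host.
    t0 = time.monotonic()
    tool_name, args = llm_decide(user_query)
    spans.append({"trace_id": trace_id, "span": "llm.decide",
                  "tool": tool_name, "ms": (time.monotonic() - t0) * 1000})

    # Span 2: the tool call, carrying the trace id alongside the
    # arguments so the server side can join the two spans.
    t0 = time.monotonic()
    result = call_tool(tool_name, {**args, "_trace_id": trace_id})
    spans.append({"trace_id": trace_id, "span": "tool.call",
                  "tool": tool_name, "ms": (time.monotonic() - t0) * 1000})
    return result, spans
```

The fragility is visible in the sketch: the join depends on a private convention, and the most consequential span, the model's decision, contains whatever you chose to record rather than anything a standard guarantees.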
How cleanly each distributed systems pattern transfers to agent infrastructure is a reasonable proxy for how much actual agentness that layer involves, and the proxy holds across all five mappings. Your distributed systems experience carries wherever agent infrastructure is essentially doing web infrastructure; once the model's reasoning becomes load-bearing, the problems are genuinely different and you're in new territory.
Things to follow up on...
- MCP's 2026 roadmap: The March 5 update lays out four priority areas including transport scalability and enterprise readiness, with Server Cards targeting the June 2026 spec release.
- Gateway ecosystem moving ahead: While the MCP spec leaves gateway behavior undefined, vendors like IBM ContextForge are already shipping multi-protocol MCP gateways with auth, audit logging, and REST-to-MCP translation.
- A2A's growing footprint: Since absorbing ACP, the A2A protocol has expanded to over 150 supporting organizations and added gRPC support and signed security cards in its v0.3 release.
- NIST enters agent standards: The federal AI Agent Standards Initiative launched in February 2026 includes an RFI on agent security and a concept paper on AI agent identity and authorization from NIST's cybersecurity center.

