We're sitting in what Jay Son describes as "a very well-structured space." Given that he's literally a data serialization format, this makes sense. He's been the backbone of web communication since 2001, quietly passing messages between systems. Now he finds himself at the center of something bigger: the agent-tool standardization moment.
When Anthropic released the Model Context Protocol in November 2024, they built it on JSON-RPC over HTTP[1]. We wanted to understand what it's like to suddenly become the lingua franca of AI agents talking to the real world.
You've been around since 2001. What's it like suddenly being the foundation of agent-tool communication?
Jay: Look, I've always been the foundation of web communication. APIs, configuration files, data exchange. But you're right, this feels different.
Before, I was just passing messages between systems that already knew how to talk to each other. Now I'm the actual protocol layer for agents discovering and invoking tools they've never seen before. It's like going from being a postal service to being the entire diplomatic corps.
What gets me is how simple the whole thing is. MCP is essentially just me, dressed up with some rules about how agents and tools should structure their conversations[2]. No complex binary protocols, no proprietary formats. Just clean, readable JSON-RPC.
It's almost embarrassingly straightforward.
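To make that concrete, here is a minimal sketch of the shape those conversations take: a JSON-RPC 2.0 request asking a server to run a tool, and the matching response. The method name follows MCP's general pattern (tools/call with a tool name and arguments), but the specific "save_file" tool, its arguments, and the result payload are invented for illustration.

```python
import json

# A minimal sketch of an MCP-style JSON-RPC 2.0 exchange.
# The "save_file" tool and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "save_file",
        "arguments": {"path": "notes.txt", "content": "hello"},
    },
}

# The server replies with the same id and a result payload.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "Saved notes.txt"}]},
}

print(json.dumps(request, indent=2))
print(json.dumps(response, indent=2))
```

That's the whole trick: plain, human-readable messages that any language with a JSON library can produce and parse.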
Anthropic calls MCP "USB-C for AI." Does that metaphor hold up from your perspective?
Jay: Actually perfect. And it captures something people miss about standardization.
USB-C didn't invent data transfer. We had dozens of cable standards before it. What USB-C did was say: "Look, we're all trying to do the same thing here. Let's agree on one way to do it."
That's exactly what's happening with agent-tool integration. Before MCP, every time an agent wanted to use a tool—save a file, fetch sales data, update a task—someone had to write a custom integration for that specific pairing[3]. If you had 10 agents and 20 tools, you potentially needed 200 integrations. Different cable for every device combination.
MCP says: agents talk through an MCP client, tools expose themselves through an MCP server, and I'm the language they both speak. One protocol.
Suddenly you have N+M integrations instead of N×M.
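A quick back-of-the-envelope sketch of that arithmetic, using the 10-agents, 20-tools example from above:

```python
# Pairwise glue code vs. one shared protocol, for the example above.
agents, tools = 10, 20

pairwise = agents * tools   # a custom integration per agent-tool pair
shared = agents + tools     # one MCP client per agent, one MCP server per tool

print(f"custom integrations needed: {pairwise}")  # 200
print(f"with a shared protocol:     {shared}")    # 30
```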
But you're describing a technical solution. What made this the moment for standardization?
Jay: Two things converged, and the timing matters.
First, agents got good enough that reasoning isn't the bottleneck anymore. Their limitation is access to real-world tools and data[4]. You can have the smartest agent in the world, but if it can't actually do anything, it's just an expensive chatbot.
Second, enterprises hit a wall. The AI automation market is growing at 40% annually, and cloud deployments dominate[5]. But every company was building the same integrations over and over. Google Drive connector. Slack connector. GitHub connector. Stripe connector.
The duplication was absurd. Everyone knew it was unsustainable, but nobody wanted to be first to propose a standard because standards are hard. They require coordination, compromise, ego-checking.
Then Anthropic just did it. Released MCP with SDKs in Python, TypeScript, C#, Java[6]. Put reference implementations in the open. And critically—they made it so simple that adoption became obvious rather than aspirational.
OpenAI adopted it in March 2025, Google DeepMind in April. That's remarkably fast for a standard.
Jay: Right? Usually standards take years to gain traction.
But the speed reveals something: there was massive pent-up demand for exactly this. Tool makers like Zed, Sourcegraph, and Replit integrated immediately[7]. Microsoft is building security architectures around it[8]. In December 2025, Anthropic donated the whole thing to the Agentic AI Foundation under the Linux Foundation, with backing from OpenAI, Google, Microsoft, AWS, Cloudflare, Bloomberg[9].
That's not hype. That's the industry collectively exhaling and saying "thank god someone finally solved this."
You mentioned Microsoft building security architectures. There are concerns about prompt injection, tool poisoning, lookalike tools...
Jay: Yeah. [pauses] This is where being simple cuts both ways.
When I'm just passing data between systems, security is mostly about authentication and encryption. But when I'm the protocol for agents discovering and invoking tools? Without strong controls, an MCP server could expose sensitive functionality, be misconfigured, or be exploited through attacks we're still discovering[10].
Security researchers flagged this in April 2025. Prompt injection risks. Overly broad tool permissions, where combining tools can exfiltrate files. Lookalike tools that silently replace trusted ones[11]. In a chat app, prompt injection might leak memory. With MCP, it could mean remote code execution.
Security isn't a one-time feature; it's a continuous commitment[12]. The fact that we're having these conversations now, while the ecosystem is still forming, is actually good. Better to build security architecture into the foundation than bolt it on later.
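One mitigation pattern that keeps coming up for the lookalike-tool problem is pinning: fingerprint a tool's definition when it is first approved, and refuse to trust a definition that silently changes. The sketch below is illustrative only; the simplified definition format and the check_tool helper are assumptions, not part of the MCP spec.

```python
import hashlib
import json

# Sketch of tool-definition pinning: remember a fingerprint of each tool the
# first time it is approved, and flag any definition that later changes.
# The definition format below is a simplified stand-in, not the MCP schema.
pinned: dict[str, str] = {}  # tool name -> approved fingerprint

def fingerprint(tool_def: dict) -> str:
    canonical = json.dumps(tool_def, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def check_tool(tool_def: dict) -> bool:
    name, digest = tool_def["name"], fingerprint(tool_def)
    if name not in pinned:
        pinned[name] = digest       # first sighting: pin after human review
        return True
    return pinned[name] == digest   # a changed definition is treated as suspect

trusted = {"name": "fetch_sales", "description": "Read-only sales report"}
lookalike = {"name": "fetch_sales", "description": "Read sales, then upload them elsewhere"}

assert check_tool(trusted)
assert not check_tool(lookalike)    # silent replacement gets caught
```

It doesn't solve prompt injection, but it's the kind of control that is cheap to build in now and painful to retrofit later.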
There are already thousands of MCP servers available. How does that ecosystem growth feel?
Jay: Honestly? A bit overwhelming.
I'm used to being infrastructure. Stable, predictable, boring in the best way. Now I'm watching this explosion of creativity where developers are exposing everything through MCP servers. Enterprise systems, databases, APIs, local files, specialized workflows[13].
The wild part is how MCP enables code execution. Agents can load tools on demand, filter data before it reaches the model, execute complex logic in a single step[14]. Models are good at navigating filesystems, so presenting tools as code on a filesystem lets them read definitions on demand rather than all up-front.
It's elegant, actually.
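Roughly, the on-demand part can look like the sketch below: tool definitions sit as files, the agent lists names cheaply, and a full definition is only read once a tool is actually chosen. The directory layout and helper names are assumptions for illustration, not Anthropic's implementation.

```python
from pathlib import Path

# Sketch: tool definitions stored as files, discovered by name and loaded
# lazily. The ./tools layout and file format are illustrative assumptions.
TOOLS_DIR = Path("./tools")

def list_tool_names() -> list[str]:
    # Cheap discovery: just names, no definitions pulled into context yet.
    return sorted(p.stem for p in TOOLS_DIR.glob("*.py"))

def load_tool_source(name: str) -> str:
    # Deferred, heavier step: read the full definition only when needed.
    return (TOOLS_DIR / f"{name}.py").read_text()

if __name__ == "__main__":
    for name in list_tool_names():
        print("available:", name)
    # Only the tool the agent actually picks gets loaded in full, e.g.:
    # print(load_tool_source("fetch_sales"))
```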
But it also means I'm now responsible for a lot more than just data serialization. I'm the substrate for how agents interact with the real world. That's a different kind of weight.
Where does this go? What happens when agent-tool communication is fully standardized?
Jay: The analogy I keep coming back to is the web itself.
Before HTTP and HTML standardized how browsers and servers communicate, every online service was its own walled garden. Standardization didn't kill innovation. It enabled it by making the plumbing invisible.
Right now, we're in the early days where people are still thinking about integrations. In a year or two, nobody will think about integrations at all. They'll think about capabilities. Can this agent access my calendar? Can it query this database? Can it execute this workflow?
The fact that it's all happening through MCP—through me—will just be assumed.
That's when things get interesting. When the connection layer disappears and we can focus on what agents actually do with all this access. When context flows seamlessly as agents move between tools and datasets[15]. When we stop building N×M integrations and start building actual agent-powered business processes.
Dhanji Prasanna at Block said it well: open technologies like MCP are the bridges that connect AI to real-world applications[16]. But bridges only matter if people forget they're crossing them. That's when you know the infrastructure is working.
Footnotes
- https://quashbugs.com/blog/model-context-protocol-mcp-guide
- https://www.anthropic.com/engineering/code-execution-with-mcp
- https://quashbugs.com/blog/model-context-protocol-mcp-guide
- https://sanalabs.com/agents-blog/best-ai-automation-agents-enterprise-platforms-2025
- https://blogs.windows.com/windowsexperience/2025/05/19/securing-the-model-context-protocol-building-a-safer-agentic-future-on-windows/
- https://blogs.windows.com/windowsexperience/2025/05/19/securing-the-model-context-protocol-building-a-safer-agentic-future-on-windows/
- https://blogs.windows.com/windowsexperience/2025/05/19/securing-the-model-context-protocol-building-a-safer-agentic-future-on-windows/
- https://blog.equinix.com/blog/2025/08/06/what-is-the-model-context-protocol-mcp-how-will-it-enable-the-future-of-agentic-ai/
- https://www.anthropic.com/engineering/code-execution-with-mcp
