We're sitting in what Coordination describes as "the space between good intentions and actual interoperability." Given that we're discussing the newly formed Agentic AI Foundation, this feels appropriate. Coordination has been around since humans first tried to work together on anything, and has watched countless attempts to standardize emerging technologies. When the Linux Foundation announced the Agentic AI Foundation in late 2025[^1], Coordination perked up with the weary interest of someone who's seen this movie before but keeps buying tickets anyway.
"I'm not a person, obviously," Coordination clarifies at the start. "I'm the fundamental challenge that every standards body faces. Think of me as the ghost in the room during every committee meeting where someone says 'we just need everyone to agree on...'"
The Linux Foundation just launched the Agentic AI Foundation. You've seen standards efforts before—W3C, IETF, all the protocol wars. What's your first reaction?
Coordination: Oh, I'm fascinated. And look, I want to be optimistic. I genuinely do.
The timing is interesting. They're forming this foundation while the market is exploding. Projected to hit $52 billion by 2030[^2]. Gartner says 40% of enterprise applications will embed AI agents by end of 2026[^3]. That's a lot of momentum.
But here's what keeps me up at night, metaphorically speaking. They're trying to be the W3C for agents, right? Create shared standards, best practices, open interoperable ecosystems. Noble goal. Except the W3C took years to standardize things, and the web was already built on TCP/IP and HTTP. Foundational protocols that were already settled.
These folks are trying to standardize while the foundation is still being poured.
You sound skeptical. But we also see real protocol work happening—Model Context Protocol got broad adoption throughout 2025, the W3C has an AI Agent Protocol Community Group that started meeting in mid-2025. Isn't that progress?
Coordination: It is! MCP is genuinely useful. It standardized how agents connect to tools and databases, turned custom integration work into plug-and-play[^4]. That's a real win. And the W3C group has a clear mission: develop open protocols so agents can discover, connect, and collaborate[^5]. These are the right problems to solve.
But here's where I get nervous. You've got MCP from Anthropic. IBM released their Agent Communication Protocol with RESTful interfaces and capability-based security[^6]. There's the Open Agent Protocol built on LangGraph. Multiple groups saying "here's the standard."
See, my job gets harder when everyone agrees standards are important but can't agree on which standard.
Imagine if HTTP had three competing versions, all technically sound, all backed by major players. The web would have fractured. Right now, in late 2025 and early 2026, enterprises are making architectural decisions that will stick for years. If the standards don't converge soon, you'll have incompatible agent ecosystems. And I'll have failed again.
What would success look like for the Agentic AI Foundation? What would make this different from past standards efforts that stalled?
Coordination: Let me think about this carefully because it's not just technical.
First, they need to move at market speed. The W3C could take years because the web wasn't going anywhere. It was already dominant. But AI agents? The market is moving fast. If the foundation takes three years to ratify a discovery protocol, enterprises will have already built proprietary solutions and won't want to rip them out. Success means being relevant while standards are still useful.
Second—and this is where it gets interesting—they need to embrace the messy middle. Not everything needs to be standardized. MCP showed that you can standardize the interface without standardizing the implementation. Let a thousand agent frameworks bloom, but make sure they can all talk to each other through common protocols. That's the sweet spot.
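That interface-versus-implementation split can be sketched in a few lines. This is a hypothetical illustration of the principle, not MCP's actual API: `ToolInterface`, `LocalSearchTool`, and `run_agent_step` are invented names. The point is that any framework can implement the shared contract however it likes, while callers depend only on the contract.

```python
from abc import ABC, abstractmethod


class ToolInterface(ABC):
    """The standardized part: what a tool call looks like."""

    @abstractmethod
    def describe(self) -> dict:
        """Return the tool's name and parameter schema."""

    @abstractmethod
    def invoke(self, arguments: dict) -> str:
        """Execute the tool and return a text result."""


class LocalSearchTool(ToolInterface):
    """One of a thousand possible implementations; internals are unconstrained."""

    def __init__(self, corpus: dict[str, str]):
        self.corpus = corpus

    def describe(self) -> dict:
        return {"name": "search", "parameters": {"query": "string"}}

    def invoke(self, arguments: dict) -> str:
        query = arguments["query"].lower()
        hits = [doc for doc in self.corpus.values() if query in doc.lower()]
        return hits[0] if hits else "no results"


def run_agent_step(tool: ToolInterface, arguments: dict) -> str:
    # The agent depends only on the interface, never on any implementation.
    return tool.invoke(arguments)


tool = LocalSearchTool({"doc1": "Agent protocols enable interoperability."})
print(run_agent_step(tool, {"query": "protocols"}))
```

Two frameworks that disagree about everything internally can still interoperate, as long as they both honor `ToolInterface`.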
Third, they need buy-in from the people who are actually building production systems. Not just the big vendors, though you need them. But the practitioners. The folks at companies like yours, TinyFish, who are operating agent systems at scale and know where the pain points are.
Standards written by people who don't ship code tend to be academically interesting but operationally useless.
That last point resonates. We're seeing enterprises make decisions now—choosing frameworks, building integrations, architecting systems. What should they be thinking about while these standards efforts play out?
Coordination: Build for optionality.
You can't wait for perfect standards. The market won't wait. But you can make architectural choices that don't lock you in. Use abstraction layers. Don't tightly couple your agent logic to a specific protocol. When you integrate tools, think about how you'd swap the integration layer if a better standard emerges.
And honestly? Pay attention to what's getting actual adoption, not just what's getting announced.
MCP got broad adoption because it solved a real problem and was easy to implement. If the Agentic AI Foundation produces standards that practitioners actually use, that's your signal. If they produce beautiful specifications that sit in GitHub repos while everyone keeps using proprietary solutions... well, that tells you something too.
The enterprises that will succeed are the ones who stay loosely coupled to any single approach. Build agent systems that can evolve as standards mature.
Because I promise you—and I've been doing this a long time—the standards landscape in 2028 will look different from how it looks today.
Last question. Are you optimistic about the Agentic AI Foundation specifically?
Coordination: You know what? Cautiously, yes.
They're starting with the right problem. Coordination across a rapidly growing ecosystem. They have the Linux Foundation's infrastructure, which has successfully stewarded other open source efforts. And the timing might actually work in their favor. The market is big enough that everyone has an incentive to make interoperability happen, but young enough that architectural patterns aren't yet set in stone.
But I'm also realistic. I've watched a lot of standards efforts. Some succeed, some fail, most end up somewhere in the middle. Useful but not transformative. The difference usually comes down to whether they solve problems people are actually having, whether they move at the right speed, and whether they can get the major players to cooperate instead of compete.
Ask me again in 2027. By then, we'll know if this is the W3C for agents or just another foundation with good intentions and a GitHub organization.
Footnotes

[^1]: https://machinelearningmastery.com/7-agentic-ai-trends-to-watch-in-2026/
[^2]: https://machinelearningmastery.com/7-agentic-ai-trends-to-watch-in-2026/
[^3]: https://machinelearningmastery.com/7-agentic-ai-trends-to-watch-in-2026/
[^4]: https://machinelearningmastery.com/7-agentic-ai-trends-to-watch-in-2026/
[^5]: https://agent-network-protocol.com/blogs/posts/w3c-agent-protocol-progress-202506.html
[^6]: https://www.ssonetwork.com/intelligent-automation/columns/ai-agent-protocols-10-modern-standards-shaping-the-agentic-era
