Under the WebMCP draft specification, a website that wants to be usable by an AI agent has to do something it has never had to do for humans. It has to say what it does. A site registers tools with a name, a natural language description of the function, and a typed schema defining inputs and outputs. A hotel booking site doesn't present a visual funnel optimized for conversion. It declares something like searchRooms(destination, dates, guests) and describes, in plain language, what that function returns.
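A declaration of that shape can be sketched as follows. This is illustrative, not the draft's exact surface API: the descriptor follows the MCP-style name/description/inputSchema convention, and every identifier and field (searchRooms, destination, guests, and so on) is an invented example, not taken from any real site or spec text.

```typescript
// Sketch of a typed tool declaration in the MCP style: a name, a
// natural-language description, and a JSON Schema for inputs.
// All identifiers here are illustrative, not from the WebMCP draft.
interface ToolDeclaration {
  name: string;
  description: string; // the contract an agent reads instead of a page
  inputSchema: {
    type: "object";
    properties: Record<string, { type: string; description: string }>;
    required: string[];
  };
}

const searchRooms: ToolDeclaration = {
  name: "searchRooms",
  description:
    "Returns available rooms for a destination and date range, " +
    "with the total price including all mandatory fees.",
  inputSchema: {
    type: "object",
    properties: {
      destination: { type: "string", description: "City or region" },
      checkIn: { type: "string", description: "ISO 8601 date" },
      checkOut: { type: "string", description: "ISO 8601 date" },
      guests: { type: "integer", description: "Number of guests" },
    },
    required: ["destination", "checkIn", "checkOut", "guests"],
  },
};
```

The description field is doing the work a landing page used to do, which is why the next section treats it as the interesting part.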
The description field is where things get interesting. An agent calling a typed tool doesn't encounter a countdown timer. It doesn't see a page at all. It sees a function signature and a contract about what that function returns. The spec's own security considerations treat "misrepresentation — tools whose behavior doesn't match their description" as a named threat category, alongside prompt injection. The authors anticipated that sites might lie in their declarations. They built the expectation of functional honesty into the threat model itself.
A December 2024 study found that current web agents navigating the visual web remain susceptible to dark patterns, trusting false sale prices and choosing manipulative options even when they detect malicious intent. Scaling up model size didn't help. Agents fail on the visual surface for the same reason humans do: the surface was designed to produce that failure. Typed tool contracts bypass the surface entirely. The honesty is structural, built into the interface itself.
So a version of the web is emerging that has to be functionally honest because its interface offers no surface to manipulate. And that clean, structured version, placed alongside the human-facing web, produces something nobody planned for: a contrast document that makes the architecture of the existing web uncomfortably legible.
The SEC's Form 10-K offers a useful parallel. Before standardized financial disclosure, public companies communicated with investors primarily through glossy annual reports. Management had wide latitude over language, tone, and emphasis. They used that latitude predictably. The 10-K didn't replace the glossy report. Companies still produce them. What it did was create a structured document against which the narrative version could be read. The gap between the two became its own source of information. Researchers studying impression management in corporate disclosure found that narrative obscurity correlated with concealed performance problems: prose got hardest to read when results were worst. The mandated disclosure provided data. It also revealed what the voluntary communication had been doing with data all along.
WebMCP suggests a similar dynamic. The structured version here answers to a different source of pressure: the user who sent the agent. An agent calling calculateFinalPrice() needs the actual total. There is no checkout page to bury fees in.
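The contract that makes fee-burying impossible is simple to state in code. This is a hedged sketch under invented assumptions: calculateFinalPrice and its fee values are hypothetical, and no real booking API is being described.

```typescript
// Hypothetical sketch: an agent-facing price tool has to return the
// real total, because there is no checkout page in which to stage
// later additions. All names and numbers are invented for illustration.
interface PriceQuote {
  roomRate: number;
  mandatoryFees: number; // resort fee, cleaning fee, taxes
  total: number; // the contract: roomRate + mandatoryFees, nothing deferred
}

function calculateFinalPrice(
  roomRate: number,
  mandatoryFees: number
): PriceQuote {
  return { roomRate, mandatoryFees, total: roomRate + mandatoryFees };
}

const quote = calculateFinalPrice(189, 42);
// quote.total → 231
```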
But only for the tools a site chooses to declare.
WebMCP is opt-in. Sites decide which functions to expose. A booking platform can register searchRooms without registering addResortFee. Declared tools must behave as described. What goes undeclared is the site's business. The machine-readable web will be a curated map, shaped by what a site chooses to declare. And the distance between what's declared for agents and what's presented to humans will be, like the distance between the 10-K and the glossy annual report, readable in its own right. A document made of omissions. Perhaps the most informative thing about a site's tool contract will turn out to be the functions that aren't in it.
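The "document made of omissions" is, mechanically, just a set difference: the actions a site offers humans minus the tools it declares to agents. A minimal sketch, with all function names invented for illustration:

```typescript
// Hypothetical sketch: reading a site's tool contract by what is
// missing from it. Every action name here is illustrative.
function undeclaredActions(
  humanFacingActions: string[],
  declaredTools: string[]
): string[] {
  const declared = new Set(declaredTools);
  return humanFacingActions.filter((action) => !declared.has(action));
}

// A booking platform that declares search and booking,
// but not its fee logic:
const omissions = undeclaredActions(
  ["searchRooms", "bookRoom", "addResortFee", "applyDynamicMarkup"],
  ["searchRooms", "bookRoom"]
);
// omissions → ["addResortFee", "applyDynamicMarkup"]
```

Nothing in the protocol computes this diff; the point is that anyone comparing the two surfaces can.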
Nobody set out to build a confession. The goal was a protocol. But describing what a website actually does, even selectively, produces an outline of what it chose not to say.
Things to follow up on...
- Dark patterns against agents: A peer-reviewed study accepted to IEEE S&P 2026 found that dark patterns manipulate web agents through the same visual mechanisms that work on humans, and that simply scaling model size offers no defense.
- The opt-in discovery gap: WebMCP currently has no manifest-based tool discovery, meaning agents can only find declared tools after navigating to a page, a limitation the W3C Community Group specification acknowledges will need to be addressed in future versions.
- Governance as the real bottleneck: Deloitte's 2026 enterprise AI report found that only one in five companies has a mature governance model for agentic systems, suggesting the trust infrastructure for agent-web interaction lags far behind the protocol work.
- MCP under open governance: Anthropic contributed the Model Context Protocol to the newly formed Agentic AI Foundation under the Linux Foundation, a move toward open standardization that will shape whether agent protocols develop as public infrastructure or proprietary layers.

