Practitioner's Corner
Fresh research documents the adversarial web at scale. The industry's response reveals a mismatch between where the threat lives and where the defenses point.

Practitioner's Corner
Fresh research documents the adversarial web at scale. The industry's response reveals a mismatch between where the threat lives and where the defenses point.

The Wrong Layer

A single-pixel font. An instruction buried in an HTML comment. CSS that hides a payload from every human eye but leaves it fully legible to any agent parsing the source. Google's latest scan of billions of crawled pages found these attacks growing 32% in three months. The manipulation happens before the model ever touches the input.
OpenAI and Anthropic have both publicly acknowledged that the model layer cannot fully contain this. Nearly all defense investment concentrates there anyway. The people building defenses know where the problem lives. Something else is pulling resources toward the wrong layer, and it's structural.
The Wrong Layer
A single-pixel font. An instruction buried in an HTML comment. CSS that hides a payload from every human eye but leaves it fully legible to any agent parsing the source. Google's latest scan of billions of crawled pages found these attacks growing 32% in three months. The manipulation happens before the model ever touches the input.
OpenAI and Anthropic have both publicly acknowledged that the model layer cannot fully contain this. Nearly all defense investment concentrates there anyway. The people building defenses know where the problem lives. Something else is pulling resources toward the wrong layer, and it's structural.

The Web Wasn't Built for This

Five days from prototype to Hacker News post. Eighty-one thousand GitHub stars and a $17M seed round followed. By every visible metric, Browser Use looks like a solved problem. Their own benchmark says otherwise: after purpose-built models and a compressed action vocabulary engineered against the live web, roughly one in six tasks still fails. That gap is the story. What happens when you try to build a negotiation layer between language models and a web that was never designed to be read by machines.

The Web Wasn't Built for This
Five days from prototype to Hacker News post. Eighty-one thousand GitHub stars and a $17M seed round followed. By every visible metric, Browser Use looks like a solved problem. Their own benchmark says otherwise: after purpose-built models and a compressed action vocabulary engineered against the live web, roughly one in six tasks still fails. That gap is the story. What happens when you try to build a negotiation layer between language models and a web that was never designed to be read by machines.

Structured Admission
Google's WebMCP proposal replaces the screenshot-parse-click loop with structured tool contracts. Sites declare capabilities through the navigator.modelContext API, complete with schemas, parameters, and browser-enforced permissions. Agents stop guessing which button means "Buy Now" by squinting at raw HTML.
The fix is real where sites cooperate. Where they don't, where pages serve poisoned data to detected bots or embed hidden instructions targeting agent context windows, the old architecture persists untouched. WebMCP solves the tractable version of the problem. The intractable version doesn't notice.
This Week's Evidence




Past Articles

Inside one of Meta's data pipelines, two configuration fields refer to the same operation. Use the wrong one and the cod...

OpenAI's function-calling documentation has a telling phrase. When strict mode is off, the model "tries its best." That ...

Meta recently pointed AI agents at a major internal codebase. Of 4,100 files across four repositories, roughly 5% had do...

An AI agent called Ash rejected fourteen consecutive prompt injection attempts over two weeks. Encoded commands, XML exp...

