Doug Clearfix is not a real person, though his problems were. He's a composite of the thousands of web developers who wrote browser-sniffing code at the turn of the millennium, and his surname is borrowed from a CSS technique they all used daily. We assembled him from documented developer experiences, forum posts, and the collective muscle memory of an era when making a three-column layout work across two browsers was a week's labor.
We spoke with him about the choices that seemed obvious at the time.
You were building e-commerce sites in 2000. What did "working correctly" actually mean?
Doug: It meant the client doesn't call you. That's it. The whole definition.
I was building a product catalog for a regional furniture chain. Sixty-some SKUs, three-column grid, nothing fancy by today's standards. But "three-column grid" in 2000 meant floats, and floats in Internet Explorer meant the double-margin bug. You'd set a left margin of five pixels and IE would render ten.[1] Your columns overlap, your sofa thumbnails are sitting on top of your dining sets, and the client's calling you at nine in the morning asking why their website is broken.
So you wrote browser-sniffing code.
Doug: Of course I did. You check the User-Agent string for `MSIE`, and if it's there, you serve a stylesheet with the workarounds. A separate stylesheet. That was the elegant solution. Some people were putting IE-specific hacks directly in their CSS with the `* html` selector trick, or using conditional comments.[2] I thought I was being clean about it.
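The pattern Doug describes fit in a few lines of JavaScript. A minimal sketch of the era's approach (the function and stylesheet names here are illustrative, not from any real site):

```javascript
// Era-typical UA sniffing: decide which stylesheet to serve by
// checking the User-Agent string for the "MSIE" token.
// Note: indexOf returns -1 (truthy under negation pitfalls) when the
// token is absent, so the comparison has to be explicit.
function needsIEWorkarounds(userAgent) {
  return userAgent.indexOf('MSIE') !== -1;
}

function stylesheetFor(userAgent) {
  // Serve a separate stylesheet with the float-bug workarounds to IE,
  // and the standards stylesheet to everyone else.
  return needsIEWorkarounds(userAgent) ? 'catalog-ie.css' : 'catalog.css';
}
```

The fragility is visible in the sketch itself: any client whose string contains `MSIE` gets the IE stylesheet, whether or not it is actually Internet Explorer.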
Did it feel like a hack?
Doug: It felt like the job. IE had eighty-plus percent of the market.[3] Netscape was maybe fifteen percent and falling fast. If your site broke in IE, you lost the client. If it broke in Netscape, you lost a rounding error. The math wasn't complicated.
And the bugs were real. The box model thing alone. IE calculated width differently from everyone else. You'd set a hundred-pixel-wide box with ten pixels of padding and a two-pixel border. Every other browser says that's a hundred twenty-four pixels. IE says a hundred.[4] That's not a philosophical disagreement. That's your layout being twenty-four pixels off on every single element.
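The twenty-four-pixel discrepancy is plain arithmetic. A sketch of the two width calculations Doug is describing (the function names are mine, not from any specification):

```javascript
// Rendered total width of a box declared as
// `width: 100px; padding: 10px; border: 2px`.

// W3C content-box model: the declared width covers the content only,
// so padding and border are added on each side.
function w3cTotalWidth(width, padding, border) {
  return width + 2 * padding + 2 * border;
}

// Legacy IE model: the declared width already includes padding and
// border, so the total is just the declared width.
function ieTotalWidth(width, padding, border) {
  return width;
}

w3cTotalWidth(100, 10, 2); // 124: the "everyone else" answer
ieTotalWidth(100, 10, 2);  // 100: IE's answer, 24px narrower
```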
You mentioned the client never sees the bug. But did you think about who else was visiting?
Doug: Bots? Sure, vaguely. Googlebot, maybe Yahoo's Slurp crawler. They identified themselves. It was considered polite. But I wasn't thinking about them when I wrote the sniffing code. I was thinking about humans with browsers.
The idea that something would show up claiming to be Internet Explorer when it wasn't? I mean, why would it?
Well. That's exactly what happened next.
Doug: Yeah. I know where you're going with this.
When IE started claiming to be Mozilla—
Doug: Right, `Mozilla/1.22 (compatible; MSIE 2.0)`. That happened before I was even in the industry.[5] Microsoft had to do it because developers were checking for "Mozilla" and only serving frames to Netscape. IE supported frames fine but wasn't getting them. So Microsoft lied. And then every browser after that had to lie too, because the sniffing code was already out there, already checking for the wrong things.
And you were writing more of that sniffing code.
Doug: (long pause) I was. I didn't start the pattern. But every time I wrote `if (navigator.userAgent.indexOf('MSIE'))`, I was reinforcing it. Teaching the ecosystem that the correct way to get served the right page was to claim the right identity.
I wasn't the only one. There were thousands of us, all independently, all for perfectly good reasons.
MDN's documentation on UA sniffing starts by telling you not to do it, then spends pages showing you how.[6]
Doug: That's the whole era in one sentence.
Let's talk about the present. In January 2026, Cloudflare published an investigation showing that when certain AI services get blocked based on their declared crawler identity, they switch to a generic Chrome user-agent string to bypass the block.[7]
Doug: Same logic. Exactly the same logic. "I support the features, I should get the content, so I'll claim to be the thing that gets the content."
A separate investigation found that a single user prompt to one AI service triggered sixteen requests from twelve different IP addresses, every one disguised as a regular Chrome or Safari browser.[8]
Doug: Sixteen requests. For one prompt. And none of them identified themselves.
Look, I want to say that's different from what I was doing. I was routing around a float bug. These systems are routing around consent. But the mechanism? Claim the identity that gets you through the door. It works for the same reason mine worked: because the infrastructure was already trained to trust identity strings that nobody was obligated to make honest.
You helped build that infrastructure.
Doug: I helped normalize it. There's a difference, but I'm not sure it matters. Every browser today still sends a User-Agent string stuffed with the names of browsers it isn't.[9] Chrome claims to be Safari claims to be Mozilla. Twenty-five years of accumulated lies, and it started because developers like me checked the string instead of checking the capability.
Some AI services are now trying cryptographic verification. Signed HTTP headers that prove identity independent of the User-Agent string.[10]
Doug: Good. That's what we should have done in 2000. Check what the browser can do, not what it says it is. But we didn't, because checking identity was easier, and it worked well enough, and the client wasn't calling anymore.
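Capability checking, the road not taken, would have looked something like the sketch below. This is a hedged illustration rather than period-accurate code; the `doc` parameter stands in for the browser's global `document` object so the logic can be exercised outside a browser:

```javascript
// Feature detection: ask whether the capability exists instead of
// asking who the client claims to be.
function supportsDomEvents(doc) {
  return typeof doc.addEventListener === 'function';
}

function attach(doc, type, handler) {
  if (supportsDomEvents(doc)) {
    doc.addEventListener(type, handler);    // standards path
  } else if (typeof doc.attachEvent === 'function') {
    doc.attachEvent('on' + type, handler);  // legacy IE path
  }
}
```

Nothing here consults a User-Agent string: a browser that gains `addEventListener` gets the standards path automatically, with no identity claim to forge.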
That was the whole definition of success, remember? The client doesn't call you.
And now?
Doug: Now everybody's calling.
Footnotes
1. The IE6 double-margin float bug doubled the specified margin on floated elements. See CSS-Tricks, "IE CSS Bugs That'll Get You Every Time".
2. IE-specific CSS hacks like the `* html` selector and conditional comments were widely documented workarounds. See SitePoint, "10 Fixes That Solve IE6 Problems".
3. Internet Explorer exceeded 80% market share by 2000 and peaked above 90% by 2001. See Visual Capitalist, "Internet Browser Market Share (1996–2019)".
4. IE6's box model calculated width to include padding and border, contrary to the W3C specification. See SitePoint, "10 Fixes That Solve IE6 Problems".
5. IE's original User-Agent string declared itself `Mozilla/1.22 (compatible; MSIE 2.0; Windows 95)` to receive content gated on the Mozilla identifier. See Niels Leenheer, "A brief history of the User-Agent string".
6. MDN's documentation on UA sniffing opens with warnings against the practice before extensively documenting how to do it. See MDN, "Browser detection using the user agent".
7. Cloudflare's investigation found undeclared crawlers using generic Chrome user-agent strings to bypass robots.txt directives. See Cloudflare Blog, January 2026.
8. A single prompt to one AI service triggered 16 requests from 12 unique IPs, none identifying as an AI agent. See DataDome, "The Great Masquerade: How AI Agents Are Spoofing Their Way In".
9. Chrome's current User-Agent string includes references to Safari, AppleWebKit, and Mozilla, browsers it is not. See Niels Leenheer, "A brief history of the User-Agent string".
10. OpenAI's ChatGPT Agent includes HTTP Message Signatures (RFC 9421) for cryptographic identity verification, independent of the User-Agent string. See HUMAN Security, "The Ultimate List of Crawlers and Known Bots for 2026".
