Bell Retrograde doesn't keep an office in any conventional sense. When I arrive at the converted warehouse space in Oakland, I find them surrounded by three monitors displaying RFC documents, a whiteboard covered in protocol diagrams, and what appears to be a hand-drawn timeline spanning 1969 to present. A cat named Parser sleeps on a stack of printed IETF standards.
"I'm trying to map every backwards compatibility constraint in SMTP," Bell explains, gesturing at the whiteboard. "It's like archaeological stratigraphy, except instead of pottery shards, you're looking at why your email client still speaks a protocol from 1981."
They've spent the last six years reading through the Internet Engineering Task Force's archive. Over 9,900 RFCs and counting. Tracing how design decisions made decades ago still shape every packet flowing through the internet today. It's obsessive work, the kind that most people would consider either heroic or slightly unhinged.
You've described SMTP as "the cockroach protocol." What do you mean?
Bell: [laughs] Cockroaches are survivors. SMTP was published in 1981, and it's still routing basically all email on the planet. RFC 3117 calls it "close to being the perfect application protocol," and here's the thing: it was simple enough that an entry-level implementation could fit on one or two screens of code.[1]
That simplicity meant it spread. Fast. You could implement it on a PDP-11, you could implement it on a mainframe, you could implement it on whatever weird hardware your university happened to have. And once it spread, it became impossible to replace because everyone had already built on top of it.
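To make "one or two screens of code" concrete, here's a sketch of the client side of a minimal 1981-style SMTP session. The command verbs and the lone-dot terminator are the real protocol; the hostname and addresses are made-up examples, and the network plumbing is omitted.

```python
def smtp_commands(helo_host, mail_from, rcpt_to, body_lines):
    """Build the command sequence a minimal 1981-style SMTP client sends."""
    cmds = [f"HELO {helo_host}",
            f"MAIL FROM:<{mail_from}>",
            f"RCPT TO:<{rcpt_to}>",
            "DATA"]
    cmds += body_lines          # message text, line by line
    cmds.append(".")            # a lone dot ends the message
    cmds.append("QUIT")
    return cmds

session = smtp_commands("client.example", "alice@example.org",
                        "bob@example.net", ["Subject: hi", "", "Hello Bob"])
for line in session:
    print(line)
```

That's essentially the whole client-side state machine: four verbs, a body, a dot, and goodbye. A protocol this small fits on a PDP-11.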
The installed base became its own moat.
Compare that to protocols designed "the right way." Beautiful specifications, elegant solutions to complex problems, and... nobody implements them. Too complicated. Too much coordination required. Doesn't work with existing infrastructure. They die in committee, or they die in the market, or they just sort of fade away while everyone keeps using the "worse" thing that actually shipped.
Is this what Richard Gabriel meant by "worse is better"?
Bell: Gabriel's 1989 essay is basically the Rosetta Stone for understanding protocol survival. He was writing about why Lisp was losing to C, but the principle applies everywhere. He argued that systems designed with implementation simplicity as the highest priority spread "like a virus" because they're portable and easy to adopt.[2]
But here's what makes me crazy: Gabriel himself went back and forth on whether this was good. He wrote position papers arguing both sides. At one point he said "This advice is corrosive. It warps the minds of youth."[3]
And he's right! It is corrosive. But it's also... true? The internet runs on "good enough" because "good enough" actually ships.
So why does SMTP keep winning?
Bell: Because changing it would break everything.
RFC 3426 has this line about DNS that applies equally to SMTP: "A fundamental requirement is to not disturb the current use and operation" of the system.[4] Once you have a billion users, backwards compatibility becomes an absolute constraint. You can extend SMTP—we've added encryption, authentication, internationalization—but you can't replace it.
The IPv6 transition shows what happens when you can't maintain backwards compatibility. RFC 8170 notes that when no backwards compatibility exists, you need either a "flag day" (everyone switches at once) or a long coexistence period. Flag days are impossible at internet scale, and coexistence means running two systems in parallel, possibly forever.[5]
IPv6 was specified in 1998. We're still waiting.
SMTP avoided this trap because it was designed—probably accidentally—to be extended without breaking existing implementations. Every addition to SMTP has to work with the 1981 version. That's a straitjacket, but it's also why the protocol is still alive.
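The clearest example of that straitjacket is the greeting itself: an ESMTP client (RFC 1869 and successors) opens with EHLO, and if the server rejects the unknown verb, the client quietly falls back to the 1981-era HELO. Here's a sketch of just that decision logic; the send/receive plumbing is stubbed out as plain callables, and the reply texts are illustrative.

```python
def greet(send, recv, host):
    """Try the extended greeting; fall back if the server doesn't know it."""
    send(f"EHLO {host}")
    reply = recv()
    if reply.startswith("250"):
        return "esmtp"                # server speaks extensions
    if reply.startswith(("500", "502")):
        send(f"HELO {host}")          # pretend it's 1981
        if recv().startswith("250"):
            return "smtp"
    raise RuntimeError(f"greeting rejected: {reply}")

# Simulate a 1981-style server that has never heard of EHLO.
replies = iter(["500 Syntax error, command unrecognized", "250 old.example"])
sent = []
mode = greet(sent.append, lambda: next(replies), "client.example")
```

The new client and the old server still interoperate; nobody had to schedule a flag day.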
You've said reading RFCs is like reading archaeological records. What patterns do you see?
Bell: The most depressing pattern is how often we make the same mistakes.
RFC 3117 observes that SMTP was published in 1981, and "there is very little about good application protocol design that you can't learn by reading the SMTP specification," but since then "a lot of application protocols have been designed for the Internet, but there hasn't been a lot of reuse going on."6
We keep reinventing wheels! Badly! Because the people designing new protocols haven't read the old ones, or they think their use case is special, or they're optimizing for the wrong things.
Here's another pattern: protocols that succeed make extension points explicit from day one. DNS, HTTP, SMTP—they all had ways to add features without breaking existing implementations. But you also need implementations that respect those extension points. RFC 6709 points out that it's not enough to specify extensibility correctly; implementations have to honor it, and that requires testing.[7]
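SMTP's explicit extension point is worth seeing on the wire: an ESMTP server answers EHLO with a multiline 250 reply listing the extensions it supports, and a careful client uses only what is advertised. A sketch of the client-side parsing; the server name and extension list here are made-up, but the 250-/250 framing is the real reply format.

```python
def parse_ehlo(reply_lines):
    """Return the set of extension keywords from a multiline EHLO reply."""
    exts = set()
    for line in reply_lines[1:]:          # first line is the server's greeting
        if line.startswith(("250-", "250 ")):
            exts.add(line[4:].split()[0].upper())
    return exts

reply = ["250-mail.example greets you",
         "250-STARTTLS",
         "250-SMTPUTF8",
         "250 SIZE 35882577"]
caps = parse_ehlo(reply)

# A careful client checks the advertisement before using an extension:
use_tls = "STARTTLS" in caps
```

A 1981 server simply advertises nothing, and the client degrades gracefully—the extension point is explicit, so nothing breaks.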
How many protocols fail because someone's middlebox or proxy doesn't handle an extension correctly?
What's the oldest design decision you can point to that still constrains us today?
Bell: [pauses, thinking] RFC 704 from 1975. September 1975. It documented how the IMP/Host protocol would be updated. The document says "the old and new formats will be supported in parallel in the IMPs for the foreseeable future to allow gradual phaseover of host software."[8]
That's the template. That's the pattern that every successful protocol transition has followed since.
The alternative is what Gabriel called "the right thing"—you design the perfect system from first principles. But perfect systems require coordination, and coordination requires time, and by the time you're ready to deploy, the "worse" solution has already spread everywhere and conditioned users to accept less.
That sounds pretty bleak.
Bell: [grins] It's only bleak if you think technology should be rational.
The internet isn't a designed system. It's an archaeological site. Every layer preserves the constraints of the layer below it. SMTP works the way it does because of decisions made in 1981, which were shaped by decisions made in the 1970s about how networks should work, which were shaped by the hardware available at the time.
You want to know why your email address has an @ sign? Because Ray Tomlinson needed a character that wouldn't appear in usernames. That decision in 1971 is still with us. Every time you type your email address, you're living inside a 50-year-old constraint.
But here's the thing—that's not a bug. That's how complex systems survive.
The protocols that persist aren't the elegant ones. They're the ones that were simple enough to implement, flexible enough to extend, and lucky enough to spread before something better came along.
So we're stuck with worse-is-better forever?
Bell: We're stuck with the consequences of worse-is-better. But Gabriel had another insight: once the virus spreads, there's pressure to improve it. SMTP today has encryption, spam filtering, DKIM signatures—none of that existed in 1981. The protocol evolved, but it evolved within the constraints of backwards compatibility.
The lesson isn't "ship garbage and hope for the best." The lesson is ship something that works, make sure it can be extended, and then improve it over time. The protocols that die are the ones that either shipped too late or couldn't adapt without breaking.
[Bell glances at the whiteboard, then back at me]
You know what's funny? We're having this conversation over email. Probably SMTP-routed email. The protocol we're discussing is literally carrying the discussion about itself.
That's the power of "good enough"—it becomes infrastructure, and infrastructure becomes invisible, and invisible things are nearly impossible to replace.
