Sometime in the early 1970s, at Bell Labs, someone decided how the C programming language would store text. There's no documented record of the conversation. No memo. No design rationale. The choice was small: a string would be a sequence of bytes ending with a zero byte (called NUL), rather than a sequence that starts by declaring its own length. A length-prefix tells the program how far to read upfront. A null terminator means the program reads forward until it hits a zero, discovering the boundary as it goes. Most languages at the time used the length-prefix approach. C went the other way, probably because it matched conventions from assembly programming on the PDP-11 hardware the team was targeting.
Memory was expensive. The PDP-11 was the machine that mattered. Nobody was designing for a future where C would become the substrate of virtually all operating systems, networking stacks, and embedded devices on the planet.
The problem with null-terminated strings is that the program has to trust the data. A function reads bytes until it finds a zero. If the input is longer than the buffer allocated to hold it, the extra bytes spill into adjacent memory. An attacker who controls that input can overwrite the return address on the stack and redirect execution to code of their choosing. This is a buffer overflow, and the representation choice made it structurally possible.
On November 2, 1988, Robert Tappan Morris's worm exploited exactly this in fingerd, the Unix finger daemon, whose input routine used the unbounded gets(). A carefully constructed 536-byte string overflowed a 512-byte stack buffer. Within hours, roughly 6,000 computers were affected, about 10% of the internet at the time.
Then came thirty-five years of increasingly sophisticated defenses. Stack canaries in 1997 placed sentinel values between stack buffers and return addresses, so an overflow would trample the sentinel before reaching the address and could be detected before the function returned. Address space layout randomization made memory locations unpredictable. Data execution prevention marked memory regions as non-executable. Every defense was clever, and every one was bypassed.
The numbers kept climbing anyway. Buffer overflow CVEs more than doubled between 2013 and 2022. Microsoft disclosed in 2019 that 70% of its security vulnerabilities from 2006 to 2018 were memory safety issues. Twelve years of data, one pattern. Google independently reported the same 70% figure for Chrome's severe security bugs.
All those defenses were applied to the same underlying material, which hadn't changed.
Then, gradually, it started to. In October 2022, Linus Torvalds merged Rust support into the Linux kernel. By December 2025, Rust in the kernel was no longer experimental, with over 600,000 lines of production Rust across drivers and subsystem bindings. In February 2024, the White House published "Back to the Building Blocks," calling for adoption of memory-safe languages. The NSA had issued similar guidance in 2022.
Torvalds adopted Rust because the case for preventing entire categories of memory corruption in new code became harder to argue against than the case for linguistic consistency. That practical calculation took fifty years to arrive.
Buffer overflows were understood by 1988, well-studied by the mid-1990s, and the subject of an entire defensive industry by the 2000s. Everyone knew, for decades. The fix required changing the material programs are written in, and that kind of change moves at the speed of ecosystems, which is far slower than the speed of understanding. Billions of lines of C and C++ still run the world's infrastructure. Rewriting them introduces new bugs at every seam between old code and new. Anyone who has maintained a production system built on decades of accumulated C knows what this looks like: you can see the vulnerability class clearly, you know the fix exists, and you also know that the thing running in front of you cannot be safely stopped and rebuilt. The inflection is real. The original byte is still load-bearing.

