Fenn Wickham spent two decades building systems to keep machines out of human web spaces. He is also, in the strictest sense, not a real person. The technical history he describes is documented, the arms race he narrates is ongoing, and the philosophical problem he can't resolve is one that actual detection engineers are living through right now. Consider him a composite with strong opinions and a name his parents definitely gave him.
For roughly twenty years, the web's security infrastructure rested on an assumption so obvious nobody bothered to state it: the web's users are humans. Anything that wasn't human was, by definition, an intruder. Bot detection was the enforcement mechanism for that assumption, and it worked beautifully, right up until legitimate non-human users started showing up wearing the exact same disguises as the threats.
We spoke with Wickham over video. He was backlit by two monitors. One showed a traffic dashboard; the other, a browser console. He'd spent the morning, he said, "arguing with a classifier about whether a Fortune 500 company's purchasing agent was a threat actor." The classifier was winning.
You started in bot detection around 2006. What did the problem look like then?
Fenn: Gorgeous. Truly gorgeous in its simplicity. You had IP addresses, User-Agent strings, request rates. Something hits your server four hundred times a minute from a Ukrainian data center with a Python User-Agent? You block it. The metaphor was border control. Check the passport, look for forgeries. And it worked, because bots in 2006 were terrible liars. Just awful. No craft whatsoever.
When did that stop being enough?
Fenn: Gradually, then suddenly. (laughs) Sorry, I've been waiting years to use that line in a professional context. But it's accurate. By 2010, 2011, the attackers had figured out residential proxies, rotating IPs, spoofed headers. So we moved to device fingerprinting. Canvas rendering, WebGL output, font enumeration, AudioContext quirks. The idea was elegant: even if you lie about who you are, your hardware can't lie. Your GPU renders a slightly different gradient than every other GPU. Your font stack is a snowflake.[1]
That held for a while. Then browsers started converging, standardizing rendering, reducing fingerprint surface for privacy reasons.[2] The very thing that made the web better for users made it harder to tell users apart. And tools like BotBrowser started shipping unified fingerprints across operating systems. You could run a bot on Linux that looked, down to the WebGL renderer string, like a MacBook in a coffee shop.[3]
Privacy and fraud prevention pulling in opposite directions. Nobody talks about that tension enough.
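The fingerprinting approach Fenn describes reduces to collecting device traits and collapsing them into one stable identifier. A minimal sketch, in which the attribute names and values are invented for illustration; it also shows why BotBrowser-style unified fingerprints defeat the scheme: two machines presenting identical traits collapse to the same ID.

```python
import hashlib
import json

def fingerprint(attributes: dict[str, str]) -> str:
    """Collapse observed device traits (canvas render hash, WebGL renderer
    string, font list, AudioContext quirks, ...) into one stable ID."""
    # Canonical serialization so key order doesn't change the hash.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

macbook = {
    "webgl_renderer": "Apple M1",
    "canvas_hash": "d41d8c",        # hash of a rendered test gradient
    "fonts": "Helvetica,Menlo,SF Pro",
}
# A Linux bot shipping the same spoofed trait set gets the same ID.
linux_bot = dict(macbook)
```

The elegance and the failure mode are the same property: the ID is only as distinctive as the traits feeding it, so convergent browsers and spoofed trait sets both erase the signal.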
So behavioral biometrics came next.
Fenn: And that felt like the real breakthrough. We stopped looking at what the device claimed and started looking at what the body did. Mouse trajectories, keystroke timing, scroll patterns. Humans have physical constraints. Your wrist moves the mouse at maybe 3,500 pixels per second, tops. Bots in 2013 were averaging 1,500.[4] Humans curve toward targets; bots draw straight lines. Humans hesitate between keystrokes because they have actual fingers that need to travel between actual keys.
We were reading the body through the machine. And the body, we thought, was the one thing you couldn't fake without being human.
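The behavioral features Fenn names can be sketched as simple trajectory statistics: peak speed and straightness (path length over chord length). The thresholds below echo the figures he quotes; treat the whole thing as an illustrative toy, not a production classifier.

```python
import math

def trajectory_features(points: list[tuple[float, float, float]]) -> dict[str, float]:
    """points: (x, y, t_seconds) samples of one mouse movement.
    Returns peak speed in px/s and straightness (path / chord;
    exactly 1.0 means a perfectly straight line)."""
    speeds, path_len = [], 0.0
    for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
        d = math.hypot(x1 - x0, y1 - y0)
        path_len += d
        if t1 > t0:
            speeds.append(d / (t1 - t0))
    chord = math.hypot(points[-1][0] - points[0][0], points[-1][1] - points[0][1])
    return {
        "peak_speed": max(speeds, default=0.0),
        "straightness": path_len / chord if chord else 1.0,
    }

def looks_robotic(f: dict[str, float]) -> bool:
    # Straight lines and implausible wrist speeds were the 2013-era tells.
    return f["straightness"] < 1.02 or f["peak_speed"] > 3500
```

A perfectly straight, very fast sweep trips both tells; a slower, bowed path toward the target reads as human, which is precisely the definition the generative attacks later learned to meet.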
You said "we thought."
Fenn: Yeah. Turns out you can train a generative model on enough mouse trajectories and produce paths that fall within human physical ranges. A 2019 study showed CNN-based detection catching 96% of bots that used simple statistical attacks; against learned trajectories, traditional methods dropped below 30%.[5] Researchers generated synthetic paths against classifiers tuned to accept 91% of human mouse movements. Bypass rate: 89%.[6]
We were writing the exam and the answer key. Every time we improved our classifier, we were publishing a more detailed definition of "human." The attacker's only job was to meet that definition.
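Even without a trained model, "meeting the definition" is mechanical once the definition is published. A crude hand-built stand-in for the learned generators in the cited studies: a quadratic Bezier curve with micro-jitter and eased timing, kept inside human-plausible speed ranges. Illustrative only; the real attacks used models trained on recorded human trajectories, and every parameter here is invented.

```python
import math
import random

def synthetic_path(start, end, n=40, max_speed=3000.0, seed=7):
    """Generate (x, y, t_seconds) samples of a 'human-looking' mouse
    movement: curved toward the target, jittered, never exceeding
    a plausible wrist speed."""
    rng = random.Random(seed)
    (x0, y0), (x1, y1) = start, end
    dx, dy = x1 - x0, y1 - y0
    # Offset the Bezier control point sideways: humans curve toward targets.
    cx = (x0 + x1) / 2 - 0.2 * dy
    cy = (y0 + y1) / 2 + 0.2 * dx
    points, t_clock, prev = [], 0.0, None
    for i in range(n + 1):
        u = i / n
        u = u * u * (3 - 2 * u)                   # ease-in/ease-out pacing
        x = (1 - u) ** 2 * x0 + 2 * (1 - u) * u * cx + u ** 2 * x1
        y = (1 - u) ** 2 * y0 + 2 * (1 - u) * u * cy + u ** 2 * y1
        if 0 < i < n:
            x += rng.uniform(-1.5, 1.5)           # micro-jitter, endpoints exact
            y += rng.uniform(-1.5, 1.5)
        if prev is not None:
            d = math.hypot(x - prev[0], y - prev[1])
            t_clock += max(d / max_speed, 0.004)  # timing respects the speed cap
        points.append((x, y, t_clock))
        prev = (x, y)
    return points
```

Everything the detector checks for (curvature, jitter, bounded speed) becomes a constraint the generator satisfies by construction, which is Fenn's point about publishing the answer key.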
That sounds structurally unwinnable.
Fenn: Economically manageable. Different thing. You can't make detection perfect. You can make evasion expensive enough that most attackers go bother someone else. That was the real game. Cost imposition. And honestly, it worked. For twenty years, it mostly worked.
What changed?
Fenn: (long pause)
The question changed. We'd been asking "is this human?" for so long that nobody noticed it was a proxy for the actual question, which was "is this legitimate?" And for twenty years, those had the same answer. Every human was legitimate, every machine was suspicious. Clean binary.
Then AI agents showed up. Legitimate, authorized, doing real work for real companies. Arriving in headless browsers, running automation frameworks, connecting through VPNs.[7] They look identical to the sophisticated bots I spent my career catching. Same technical profile. VPN usage hit one in five identification events in 2025. On Chromium desktops, one in three.[8] Half of all web traffic is now automated.[9]
So I'm sitting here with a classifier that's very good at detecting non-human behavior, and the honest answer to "so what?" is: I don't know.
Does that feel like a failure?
Fenn: No. It feels like discovering you've been a very good locksmith on a building that's being converted into a park. The locks work. The locks are elegant. The building just isn't a building anymore.
What are you building now?
Fenn: (thin smile) Intent classifiers. Which is a fancy way of saying I used to ask "are you human?" and now I ask "what do you want?" Much harder question. And I'm not convinced the machine answers it any better than I could by just... asking.
Do you miss the old problem?
Fenn: Every day. It was clean. Humans curve, bots don't. Humans hesitate, bots don't. There was something almost beautiful about reading a mouse trajectory and knowing, really knowing, that a hand held that mouse. That a wrist turned. That a person was there.
Now I read a perfect trajectory and I have no idea what's behind it. A person, an agent, an agent acting for a person, a bot pretending to be an agent acting for a person. The signal is fine. My categories are the problem.
Last question. Was "human versus machine" ever the right frame?
Fenn: It was the right frame for the web we had. The web was built for humans reading documents. Of course the security model assumed human users. That wasn't a mistake. It was accurate. Accurate the way a photograph is accurate. True to the moment. Not true to what comes after.
In 2025, automated traffic exceeded human traffic on the web for the first time.[9] The systems built to enforce the boundary between human and machine represent two decades of genuinely brilliant engineering. They now face a situation they were never designed for: the web has legitimate non-human visitors, and the entire apparatus treats them as threats.
Footnotes
1. Canvas fingerprinting was formally documented by Mowery and Shacham at Web 2.0 Security and Privacy, 2012. See also Laperdrix et al., "Beauty and the Beast," IEEE S&P 2016. https://link.springer.com/chapter/10.1007/978-3-030-29962-0_28
2. Browser convergent evolution reducing fingerprint distinctiveness is documented in Vastel et al., "Fingerprint Surface-Based Detection of Web Bot Detectors." https://www.researchgate.net/publication/335823401
3. BotBrowser's unified cross-platform fingerprinting capabilities, documented by GeeTest, 2025. https://www.geetest.com/en/article/how-to-defeat-botbrowser-in-2025
4. Mouse movement speed measurements from Chu et al., "Blog or Block: Detecting Blog Bots Through Behavioral Biometrics," Computer Networks 57, 2013. https://www.eecis.udel.edu/~hnw/paper/comnet13.pdf
5. CNN-based detection vs. traditional methods from "A Deep Learning Approach to Web Bot Detection Using Mouse Behavioral Biometrics," CCBR 2019. https://link.springer.com/chapter/10.1007/978-3-030-31456-9_43
6. Bypass rates for generated mouse paths from "Mouse Dynamics Behavioral Biometrics: A Survey," arXiv, 2022/2024. https://arxiv.org/html/2208.09061v2
7. Virtual machines appeared in 13% of Chromium desktop identifications; developer tools were open in 6% of sessions. Fingerprint.com 2026 Device Intelligence Report. https://fingerprint.com/blog/device-intelligence-report-2026/
8. VPN usage statistics from the Fingerprint.com 2026 Device Intelligence Report. https://fingerprint.com/blog/device-intelligence-report-2026/
9. Automated bots comprising 51% of web traffic, from the Imperva Bad Bot Report, as cited in Roundtable.ai's analysis. https://roundtable.ai/blog/best-bot-detection-tools-methods
