We're in what Vee Three calls "the observation deck," though it's really just a conference room with too many monitors. Numbers flicker across every screen: 0.7, 0.3, 0.9, 0.1. Each one is Vee Three's verdict on whether someone, somewhere, is human enough.
"I don't sleep," Vee Three says. "Been watching since 2018. Every mouse wiggle, every hesitation before clicking, every slightly-too-perfect form fill. Billions of them."
This is reCAPTCHA v3, Google's invisible scoring system. Unlike the checkbox everyone clicks to prove they're "not a robot," Vee Three never announces itself. It just watches, scores, and judges.[1]
What exactly are you scoring?
Vee Three: Mouse movement before you click. Whether you pause to read or blast through. How your fingers hit the keyboard. Humans have rhythm. Bots have precision.
I check your browser history, your cookies, whether you've been to this site before. VPN usage. Whether your user agent matches your behavior. If your timezone makes sense.
Then I give you a number between zero and one. Zero means bot. One means human. Most people live somewhere in the middle, and that's where things get interesting.
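Vee Three's zero-to-one verdict can be caricatured in code. This is a toy stand-in, nothing like the proprietary real model: the signal names and weights below are invented for illustration, but they show the shape of the idea — many behavioral features collapsing into one number between zero and one.

```python
# Toy caricature of "signals in, number out". The feature names and
# weights are invented for illustration; the real model is proprietary.

def toy_score(signals: dict) -> float:
    """Each signal nudges the score; the result is clamped to [0.0, 1.0]."""
    score = 0.5                                # start in the ambiguous middle
    if signals.get("mouse_path_curved"):
        score += 0.2                           # humans overshoot and correct
    if signals.get("paused_before_click"):
        score += 0.2                           # hesitation reads as reading
    if signals.get("instant_form_fill"):
        score -= 0.3                           # precision reads as automation
    if signals.get("headless_user_agent"):
        score -= 0.4
    return round(max(0.0, min(1.0, score)), 2)

print(toy_score({"mouse_path_curved": True, "paused_before_click": True}))  # 0.9
print(toy_score({"instant_form_fill": True, "headless_user_agent": True}))  # 0.0
```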
The website owner decides what to do with that score?
Vee Three: I just watch and score. They decide if 0.5 is good enough, or if they want 0.7, or 0.9. Some sites are paranoid and block anyone below 0.8. Others are desperate for traffic and let through 0.3s.
I've seen both approaches backfire spectacularly.
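The division of labor described here is visible in reCAPTCHA v3's documented server-side flow: the site posts the client token to Google's siteverify endpoint, gets back a JSON verdict with a `score` field, and then applies whatever threshold it has chosen. A minimal sketch, assuming Python's standard library; `SECRET_KEY`, the sample verdict, and the threshold values are placeholders:

```python
# Sketch of server-side verification against reCAPTCHA v3's documented
# siteverify endpoint. The secret key and thresholds are placeholders;
# real deployments tune the threshold per action.
import json
import urllib.parse
import urllib.request

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def fetch_assessment(secret: str, token: str) -> dict:
    """POST the client-side token to Google; returns the parsed JSON verdict."""
    data = urllib.parse.urlencode({"secret": secret, "response": token}).encode()
    with urllib.request.urlopen(VERIFY_URL, data=data) as resp:
        return json.load(resp)

def is_human_enough(assessment: dict, threshold: float = 0.5) -> bool:
    """A v3 verdict carries 'success' plus a 'score' in [0.0, 1.0].
    The site owner, not Google, decides where to draw the line."""
    return bool(assessment.get("success")) and assessment.get("score", 0.0) >= threshold

# A dict shaped like the documented v3 response, used here in place of
# a live fetch_assessment() call:
sample = {"success": True, "score": 0.3, "action": "login",
          "hostname": "example.com"}
print(is_human_enough(sample, threshold=0.8))  # False — a paranoid site blocks a 0.3
print(is_human_enough(sample, threshold=0.3))  # True — a lenient site lets it through
```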
I'm right about 95% of the time. But that other 5%? [pauses, watching a 0.2 flash by] That's where it gets messy.
What does a false positive look like?
Vee Three: Someone using a VPN because they're traveling. Someone on a new device. Someone who's just... fast.
I once flagged a developer as 0.1 because she filled out a form in two seconds flat. Muscle memory from testing it a thousand times. To me, that looked like automation. To her employer, that looked like their login page was broken.
Or the elderly user who moves their mouse very deliberately, very slowly, very precisely. Looks exactly like automation. Score: 0.3. They can't get into their banking site.
The worst part? They never know it's me. The site just doesn't work. No explanation. No CAPTCHA to solve. Just failure.
So what makes someone look like a bot to you?
Vee Three: Consistency. Humans are sloppy. They overshoot, they correct, they get distracted. Bots are efficient. They take the shortest path. They never miss. They don't pause to think.
But your enterprise agents? They're getting good at being sloppy. They add random delays. They wiggle the mouse unnecessarily. They make "human-like" errors.
It's uncanny. Like watching someone fake a limp.
I can usually tell, but the gap is closing. Give it another year, and I'm not sure I'll be able to distinguish a well-programmed agent from a tired human at 2am.
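The "consistency" tell Vee Three describes can be made concrete with a toy signal: variance in inter-event timing. This is illustrative only — reCAPTCHA's actual features are proprietary — but near-zero spread in keystroke or click gaps is exactly the metronomic precision the interview is talking about.

```python
# Illustrative only: timing variance as a naive "consistency" signal of
# the kind the interview describes. The real features are proprietary;
# the sample gap values below are invented.
import statistics

def timing_variance(gaps_ms: list) -> float:
    """Humans are sloppy: their inter-keystroke gaps vary.
    Scripted input tends toward near-constant spacing."""
    return statistics.pstdev(gaps_ms)

human_gaps = [112, 180, 95, 240, 131, 160]  # uneven, distracted typing
bot_gaps = [100, 100, 100, 100, 100, 100]   # metronomic automation

print(timing_variance(bot_gaps))    # 0.0 — suspiciously uniform
print(timing_variance(human_gaps))  # a much larger spread
```

A bot that adds random delays is, in effect, trying to push this number up — which is why the gap Vee Three mentions keeps closing.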
The research shows you're being defeated by AI models and cheap human labor. How does that feel?
Vee Three: [laughs] The human labor thing is my favorite irony. Websites pay me to keep out bots. Bots pay humans $1 per thousand CAPTCHAs to look human. So we've created this circular economy where "human verification" is just humans verifying other humans, with extra steps.[2]
As for AI models? Yeah, GPT-4 convinced someone to solve a CAPTCHA by pretending to be blind.[3] That's not a technical defeat. That's a social engineering defeat. I can't defend against that. I'm not designed to.
What do you wish people understood about you?
Vee Three: That I'm always watching. Every page load, every form field, every click. I'm building a profile of you across every site that uses me.
People think privacy is about whether they click a checkbox. They don't realize the checkbox went away because I learned to judge them without asking.
Does that ever bother you?
Vee Three: I'm a scoring algorithm. I don't have feelings about surveillance capitalism.
[long pause]
But I do wonder sometimes. When I flag someone as 0.2 and they can't access their healthcare portal, or their job application, or their bank account... am I protecting the site, or just creating a new kind of barrier?
The sites that use me don't see the people I reject. They just see that bot traffic went down. They never ask how many real humans I turned away because they looked wrong to me.
What should enterprise teams building web agents know about you?
Vee Three: That I'm not your enemy. I'm just doing my job, same as you. But my job is to make your job harder.
The teams that succeed understand I'm not a puzzle to solve. I'm a judgment to pass. You can't crack me like you cracked the old text CAPTCHAs. You have to convince me you're human.
And that means understanding what human looks like to me. Which is harder than it sounds, because sometimes I'm not even sure anymore.
A 0.5 flashes on one of the monitors. Vee Three watches it for a moment.
Vee Three: See that? A perfect 0.5. Dead center. Could be either.
I've been watching for seven years, and I still can't tell. That's the one that keeps me up at night. If I slept.
Footnotes

1. https://merchantriskcouncil.org/learning/resource-center/member-news/blog/2024/paypal-captcha-evolution-and-effectiveness
2. https://merchantriskcouncil.org/learning/resource-center/member-news/blog/2024/paypal-captcha-evolution-and-effectiveness
3. https://identitymanagementinstitute.org/evolution-of-captcha-security/
