Every enterprise deploying browser agents at scale has someone like Mags Kettleburn. You just can't find them on an org chart. Kettleburn is a systems engineer at a large U.S. retailer, where she manages the proxy configurations, fingerprint profiles, and session infrastructure that keep several hundred browser agents running against competitor sites. Her official title is "Data Infrastructure Engineer." Her actual job is making sure the company's automated pricing intelligence doesn't get blocked, poisoned, or silently broken.
The role didn't exist three years ago. It still doesn't, officially.
We spoke with her, or someone very much like her. Kettleburn is a composite who insisted on a pseudonym that, in her words, "sounds like someone who'd actually end up in this job."
How did you end up doing this?
Mags: By not saying no fast enough. I was on the DevOps side, and in late 2023 the data team spun up their first batch of browser agents to monitor competitor pricing. Day one, literally day one, they hit Cloudflare's bot detection and everything returned 403s. Someone said, "Mags knows about browsers," which was generous. I knew about deploying browsers. Not about making them look human.
But I fixed it. Then I fixed the next thing. Then Chrome updated and I fixed that. Two and a half years later, same "temporary" assignment. I keep waiting for someone to notice I never went back.
What does a typical week look like?
Mags: There's no typical week, but there's the Chrome cycle, which is the closest thing to a heartbeat. Chrome pushes a major update roughly every four weeks[1], and when it does, the TLS fingerprint changes. JA3, JA4, these are hashes of how the browser negotiates its encrypted connection. Anti-bot systems maintain databases of what legitimate browser fingerprints look like[2]. If my agents claim to be Chrome 126 but their TLS handshake looks like Chrome 124, that's a block. And it happens before any webpage loads. Before JavaScript runs. Before my carefully crafted cookie stores matter at all.
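(For readers who want the mechanics: a JA3 hash is the MD5 of five ClientHello fields joined into one string. Here is a minimal sketch of the derivation. The handshake values below are illustrative placeholders, not a real Chrome capture, and a production implementation would also strip GREASE values before hashing.)

```python
import hashlib

def ja3_fingerprint(tls_version, ciphers, extensions, curves, point_formats):
    """Build the canonical JA3 string and return its MD5 digest.

    JA3 concatenates five ClientHello fields: TLS version, cipher
    suites, extensions, elliptic curves, and EC point formats.
    Values within a field are dash-separated; the five fields are
    comma-separated; the resulting string is MD5-hashed.
    """
    fields = [
        str(tls_version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    ja3_string = ",".join(fields)  # e.g. "771,4865-4866-4867,0-23-65281,29-23-24,0"
    return hashlib.md5(ja3_string.encode()).hexdigest()

# Example with made-up handshake values, not a real browser's:
print(ja3_fingerprint(771, [4865, 4866, 4867], [0, 23, 65281], [29, 23, 24], [0]))
```

Change any one field, a cipher order, a missing extension, and the hash changes. That is why a Chrome update, which reshuffles these values, invalidates a fleet's profiles overnight.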
So every four weeks, I'm updating fingerprint profiles across the fleet. And it's not just the TLS layer. The User-Agent string, the GPU renderer info, screen dimensions, the font list, all of it has to tell a coherent story. Cloudflare cross-references these signals[3]. You can't just swap one value.
It's like maintaining a wardrobe for two hundred fictional people, and every month the fashion changes.
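The "coherent story" requirement is concrete enough to sketch. Below is a toy consistency check covering just the mismatch Kettleburn describes, Chrome 126 in the User-Agent versus Chrome 124 at the TLS layer. The AgentProfile fields are hypothetical stand-ins for whatever a real fleet stores, not her actual tooling.

```python
import re
from dataclasses import dataclass

@dataclass
class AgentProfile:
    # All field names here are hypothetical illustrations.
    user_agent: str          # claimed identity, e.g. "... Chrome/126.0.0.0 ..."
    tls_chrome_version: int  # Chrome version the TLS stack actually emulates
    gpu_renderer: str
    screen: tuple            # (width, height)
    fonts: list

def claimed_chrome_version(user_agent):
    """Pull the major Chrome version out of a User-Agent string."""
    m = re.search(r"Chrome/(\d+)\.", user_agent)
    return int(m.group(1)) if m else None

def coherence_problems(profile):
    """Return mismatches between layers of the agent's story.

    Covers only the UA-vs-TLS check described above; a real validator
    would also cross-reference GPU strings, screen size, fonts, etc.
    """
    problems = []
    ua = claimed_chrome_version(profile.user_agent)
    if ua is None:
        problems.append("User-Agent does not claim Chrome at all")
    elif ua != profile.tls_chrome_version:
        problems.append(
            f"UA says Chrome {ua}, TLS handshake says Chrome {profile.tls_chrome_version}"
        )
    return problems

p = AgentProfile(
    user_agent="Mozilla/5.0 ... Chrome/126.0.0.0 Safari/537.36",
    tls_chrome_version=124,
    gpu_renderer="ANGLE (...)",
    screen=(1920, 1080),
    fonts=["Arial", "Calibri"],
)
print(coherence_problems(p))  # ["UA says Chrome 126, TLS handshake says Chrome 124"]
```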
What makes a bad week?
Mags: A target site upgrades its bot detection. That's the step change. Last year Google deployed SearchGuard and basically overnight invalidated months of work across the industry. SerpApi's CEO said publicly they were spending ten times the resources and Google was still twice as slow to scrape[4]. That's not our target, but the same thing happens with retail sites. One of our top competitor portals moved to a new DataDome configuration, I think it was March, and our success rate dropped from around 85% to maybe 12% in an afternoon.
No alert fired. The agents ran. The pipeline reported completion. The data just... wasn't there. Or worse, it was there but wrong.
Hold on. No alert fired?
Mags: Right. This is the part that keeps me up. When you get blocked with a 403, that's easy. You see the error. But the sophisticated defenses don't block you. They serve you different content. Fake prices. Empty product pages that look like the product was discontinued. The agent completes its task, reports success, and the pipeline ingests garbage.
I've started calling it "green-light failure." Everything looks healthy. The dashboards are clean. And somewhere a pricing analyst is making decisions based on numbers that aren't real.
How do you catch it?
Mags: Paranoia. I run spot checks. I have a few scripts that compare agent-collected prices against a small sample I pull manually through a regular browser. If the delta is too high, I dig in. But it's artisanal. There's no product for this. I'm basically doing quality assurance on reality.
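Her "artisanal" process maps to a simple pattern: pull a small sample through a real browser, then flag anything where the agent's number drifts too far. A minimal sketch of that idea, assuming a flat 5% threshold and hypothetical data shapes; a real version would tune thresholds per category and handle zero prices.

```python
import random

def sample_for_manual_check(all_skus, k=25):
    """Pick a small random sample to re-price by hand in a real browser."""
    return random.sample(all_skus, min(k, len(all_skus)))

def suspect_skus(agent_prices, manual_prices, max_delta=0.05):
    """Flag SKUs where the agent's price drifts too far from the manual pull.

    agent_prices / manual_prices: {sku: price}. Assumes manual prices
    are nonzero. The 5% threshold is illustrative, not a recommendation.
    """
    suspects = []
    for sku, manual in manual_prices.items():
        agent = agent_prices.get(sku)
        if agent is None:
            suspects.append((sku, "agent returned nothing, manual pull found a price"))
        elif abs(agent - manual) / manual > max_delta:
            suspects.append((sku, f"agent={agent} manual={manual}"))
    return suspects
```

The catch, as she notes, is that this only samples reality; a poisoned response on an unsampled SKU still sails through.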
The research on 2FA as a bottleneck for agent systems is pretty grim. What's your experience?
Mags: [laughs] "Bottleneck" is one word for it. So, a lot of the portals our agents access require login, and most of those logins have two-factor authentication now. Great for security. Absolutely devastating for automation.
You have three options, and they're all bad. One: disable 2FA on the service accounts. Security hates this. I hate this. But it's what half the industry does quietly[5]. Two: build a human-in-the-loop interrupt where the agent pauses at the 2FA screen and someone, usually me, manually enters the code. Which means "fully automated" is a polite fiction. I have a Slack channel that pings me when an agent hits an OTP wall, and I've got about ten minutes before the session times out[6]. Three: share the TOTP secrets directly with the agent infrastructure, which, from a security standpoint, means you've handed your robot the keys and hoped nobody compromises the robot.
I usually end up with some combination of all three, depending on the portal, depending on how much security is paying attention that quarter.
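For scale: option three, sharing the TOTP secret with the agent infrastructure, is only a few lines of code, which is part of why it wins so often. A sketch using the pyotp library; the secret below is a placeholder, and whoever runs this code holds the seed, which is exactly the trade-off she describes.

```python
import pyotp

# Placeholder base32 seed, not a real credential. In option three the
# agent infrastructure stores this value, so compromising the agent
# means compromising the second factor too.
TOTP_SECRET = "JBSWY3DPEHPK3PXP"

def current_otp(secret=TOTP_SECRET):
    """Generate the same six-digit code an authenticator app would show."""
    return pyotp.TOTP(secret).now()

print(current_otp())  # e.g. "492039", valid for the current 30-second window
```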
Who do you report to?
Mags: [pause] Technically, the data team. My manager is the head of data engineering. But the work I do is infrastructure, closer to what IT does. And the decisions I make about credential management and 2FA workarounds are security decisions. I've been in meetings where IT says "that's a data team problem," security says "that's an IT problem," and the data team says "we just need the numbers."
I'm in the part of the Venn diagram where nobody wants to be.
Does the role have a career path?
Mags: To what? There's no "Senior Fingerprint Profile Manager" title. There's no conference talk I can give. "Hi, I'm Mags, I spend 40% of my time[7] making robots look human so my company can see what Target is charging for laundry detergent." I mean, I'd watch that talk. But it doesn't exist in any career framework I've seen.
The pricing intelligence our agents collect drives real decisions. Millions of dollars in margin. But the work that makes it possible is invisible. When it goes right, nobody knows I exist. When it goes wrong, it goes wrong silently, and then six weeks later someone notices the pricing data looks off and blames the model.
The model gets blamed?
Mags: Always. The AI is the most legible explanation. "The agent made a mistake." Nobody's first instinct is "the website was lying to us." There's no mental category for that in most organizations. I've tried explaining that a site can detect our agents and serve fake data, and people look at me like I'm describing a conspiracy theory. But it's just how bot detection works[8]. It's not even controversial. It's documented.
What would change your situation?
Mags: A name.
If this function had a name, "web identity management," "agent authenticity infrastructure," whatever, then it could have a budget. It could have a team. Right now it's just me and a very long spreadsheet of proxy configurations and a calendar reminder that says "Chrome update" every four weeks. The work will exist whether or not anyone names it. But naming it is how it stops being one person's problem.
Eighty percent of enterprise applications now embed an agent capability, but only 31% of organizations actually run one in production[9]. That 49-point gap has a human texture to it. It's filled by people like Kettleburn, doing unnamed work at the seam between what agent systems promise and what the web actually permits. The role has no title because naming it would mean admitting the automation isn't fully automated. It has no budget because the cost of not having it arrives too late to trace back. And it has no career path because the organizations that need it most haven't yet built a category for the problem it solves.
Footnotes
1. Chrome follows a roughly four-week major release cycle; fingerprint allowlists must update on the same cadence. Browserless, "TLS Fingerprinting," December 2025.
2. JA3/JA4 fingerprints are hashes derived from TLS handshake parameters. Anti-bot systems maintain databases matching these to known browsers. Cloudflare Bot Solutions Documentation.
3. Cloudflare validates that JA3 fingerprints match the browser claimed in the User-Agent header. Cloudflare Blog, "JA4 Fingerprints and Inter-Request Signals," October 2025.
4. SerpApi's CEO reported at a 2025 data extraction conference that Google had become twice as slow to scrape despite a tenfold resource expenditure. Kadoa, "What is Web Scraping? Enterprise Use Cases for 2026," February 2026.
5. Authn8 documentation notes that "some teams disable 2FA on shared accounts" as a common workaround. Authn8, "Give AI Agents Secure 2FA Access," December 2025.
6. Default email OTP validity windows are typically 10 minutes. Kore.ai Platform Documentation.
7. Enterprise data teams routinely report spending 30–40% of engineering hours on scraper maintenance. Apify/Web Scraping Club, "2026 State of Web Scraping Report," cited via Kadoa.
8. Malicious bots accounted for 37% of all internet traffic in 2024, up from 32% the prior year, driving increasingly aggressive detection and response. Imperva, "2025 Bad Bot Report."
9. 80% of enterprise applications embed an agent capability, but only 31% of organizations run one in production, a 49-point gap. Digital Applied, "AI Agent Adoption 2026," April 2026.
