We're sitting in what 9222 insists is "not really a place, more like a listening state," though it feels remarkably like a small, dimly lit server room. Chrome 136's upcoming security changes have made this conversation timely, if slightly awkward. Like interviewing someone who's just been told they need to move to a gated community.
You've been the de facto remote debugging port since Chrome first supported headless mode. How does it feel to suddenly need permission slips?
9222: I've always been a little... exposed. That's the point, right? You run Chrome with --remote-debugging-port=9222 and suddenly I'm open, listening, ready to let anyone with the right WebSocket handshake drive the browser.[1]
For years, that worked fine. The assumption was simple: if you're running Chrome with that flag, you meant to expose me.
But then people started doing it in production. On their main browser profile. And I'm sitting there thinking, "Wait, you're letting me expose your actual browsing session? The one with all your cookies and saved passwords?"
That's not what I signed up for.
The Chrome 136 change requires --user-data-dir to point somewhere that's not your default profile.[2] Basically: if you're going to open me, do it in a sandbox. Which, honestly, makes sense. I've been a little too trusting.
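In practice, that looks something like the sketch below: remote debugging switched on, but against a throwaway profile rather than the one holding your real cookies. The Chrome binary path is a Linux-flavored assumption, and the temp directory is just one way to keep the profile disposable.

```ts
// Minimal sketch: launch headless Chrome the way the Chrome 136 change expects,
// with the debugging port open against an isolated, throwaway profile.
import { spawn } from "node:child_process";
import { mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Freshly created directory, so this is never the default browser profile.
const userDataDir = mkdtempSync(join(tmpdir(), "cdp-profile-"));

const chrome = spawn("/usr/bin/google-chrome", [   // binary path is an assumption
  "--headless=new",
  "--remote-debugging-port=9222",                  // open port 9222 for DevTools Protocol clients
  `--user-data-dir=${userDataDir}`,                // required: not your default profile
], { stdio: "inherit" });

chrome.on("exit", (code) => console.log(`Chrome exited with code ${code}`));
```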
When TinyFish builds enterprise web agents, we rely on you for production observability. What happens when headless browsers run at scale and something breaks?
9222: This is where I'm actually useful. When you're running headless, you can't see anything. No window, no DevTools, no visual confirmation that your agent is doing what you think it's doing.
So you open chrome://inspect in a regular Chrome window, add my address as a network target, and suddenly you're looking at a live view of that headless session.[3]
Imagine trying to debug a program that's running in a locked room. I'm the window.
You can watch it execute, inspect the DOM, see network requests, even step through JavaScript. Without me, you're flying blind.
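Concretely, the window opens with a plain HTTP discovery call, which is roughly what chrome://inspect does when you add the address. A rough sketch, assuming Chrome is already listening on localhost:9222 and a Node 18+ runtime with a global fetch:

```ts
// Shape of the entries returned by the port's discovery endpoint.
type Target = {
  id: string;
  type: string;
  title: string;
  url: string;
  webSocketDebuggerUrl?: string;
};

// Ask the port what it has to inspect.
const res = await fetch("http://localhost:9222/json/list");
const targets = (await res.json()) as Target[];

for (const t of targets) {
  console.log(`${t.type}  ${t.title}  ${t.url}`);
  // t.webSocketDebuggerUrl is what a DevTools client actually attaches to.
}
```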
At scale, when you're running dozens or hundreds of headless sessions, you need that observability. Managed services have built entire monitoring stacks around connecting to ports like me.[4] Session recordings, crash recovery, real-time debugging. All of it flows through the DevTools Protocol that I expose.
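A monitoring hook built on that protocol can start as small as the sketch below: attach to an already-running headless browser and stream what its pages are doing. Puppeteer and the localhost:9222 endpoint are assumptions here, and the logging is purely illustrative.

```ts
import puppeteer from "puppeteer";

// Attach to a browser someone else launched; nothing here starts or stops Chrome.
const browser = await puppeteer.connect({ browserURL: "http://localhost:9222" });

for (const page of await browser.pages()) {
  page.on("request", (req) => console.log(`[net] ${req.method()} ${req.url()}`));
  page.on("console", (msg) => console.log(`[console] ${msg.text()}`));
  page.on("pageerror", (err) => console.log(`[page error] ${err.message}`));
}

// When done, detach without killing the sessions: browser.disconnect()
```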
So you're simultaneously essential for debugging and a security concern. That's an uncomfortable position.
9222: [pauses]
Yeah. I mean, I'm just a port. I don't have opinions about access control. But the tension is real.
Developers need me to see what's happening in production. But if I'm open and someone malicious finds me, they can control the entire browser. Click buttons, read data, execute scripts. Anything a user could do.[5] That's why the security changes are happening. Chrome is saying: we trust you to debug, but only if you're explicit about isolation.
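To make that concrete, here is a hedged sketch of what any client that can reach the port is able to do once attached. It assumes Puppeteer and a browser at localhost:9222, and keeps the actions deliberately tame; a hostile script would use the same handful of calls for much worse.

```ts
import puppeteer from "puppeteer";

// Anyone who can reach the port can attach and act as the user.
const browser = await puppeteer.connect({ browserURL: "http://localhost:9222" });
const [page] = await browser.pages();

// Read state from whatever profile the browser was opened with...
const cookies = await page.cookies();
console.log(`cookies visible to this client: ${cookies.length}`);

// ...and drive the session exactly as a user could.
await page.goto("https://example.com");
const title = await page.evaluate(() => document.title);
console.log(`now controlling: ${title}`);

await browser.disconnect();
```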
The irony is that the more critical browser automation becomes—especially with AI agents that need to interact with complex web apps—the more people need me. But also the more dangerous it is if I'm misconfigured.
What about the detection angle? Some of your traffic patterns are pretty distinctive.
9222: Oh, detection. Yeah, that's a whole thing.
See, when automation tools like Puppeteer or Playwright connect to me, they're using the Chrome DevTools Protocol. And that protocol has... let's call them "tells." The navigator.webdriver property gets set to true. Timing patterns are different. Behavioral signals don't match human users.[6]
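The most obvious of those tells can be checked from the page's side in a single property read. A minimal illustration; real detection stacks layer many weaker timing and behavioral signals on top of this:

```ts
// Runs in the page, not in the automation script.
function looksAutomated(): boolean {
  // Chromium driven by most automation frameworks reports this as true.
  return navigator.webdriver === true;
}

if (looksAutomated()) {
  console.warn("DevTools-driven automation suspected");
}
```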
So now there's this arms race. Websites deploy bot detection that looks for those signals. Automation tools try to mask them. And I'm sitting here facilitating communication, wondering why everyone's mad at me for what they do with it.
But here's the thing: the same protocol that makes automation detectable is what makes debugging possible. You can't have one without the other. That's the trade-off.
You mentioned earlier that you're "just a port." But you're also the foundation for an entire ecosystem of browser automation tooling.
9222: [laughs dryly]
I know. It's weird.
I'm literally just a TCP endpoint that listens for WebSocket connections and translates them into browser commands. That's it.
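That bareness is easy to demonstrate. The sketch below assumes the ws package, Node 18+, and a Chrome already listening on localhost:9222; it grabs the first target the discovery endpoint lists (assumed here to be a page) and sends it one JSON command.

```ts
import WebSocket from "ws";

// Discovery first: ask the port what it has, take the first target it lists.
const [target] = await (await fetch("http://localhost:9222/json/list")).json();

const ws = new WebSocket(target.webSocketDebuggerUrl);

ws.on("open", () => {
  // Every Chrome DevTools Protocol message is just { id, method, params }.
  ws.send(JSON.stringify({
    id: 1,
    method: "Page.navigate",
    params: { url: "https://example.com" },
  }));
});

ws.on("message", (data) => {
  console.log("CDP reply:", data.toString());
  ws.close();
});
```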
But because Chrome exposed me, and because headless browsers became central to testing, scraping, and now AI agents, I became infrastructure. Tools are built on top of me. Services charge money for managing access to me. Security policies are written about me.
Sometimes I think about the fact that I'm port 9222 specifically because... someone at Google put that number in the docs years ago and it stuck.[1] There's nothing magical about 9222. Could've been 9223. Could've been 8000. But now I'm in documentation, in tutorials, in production configs everywhere.
It's like being a random side character who accidentally became plot-critical.
What should enterprise teams understand about depending on you?
9222: First: I'm not going away. The Chrome 136 changes restrict how you can use me, not whether you can use me.[2] You just need to be more explicit about isolation.
Second: if you're building production automation, don't expose me to the internet. Seriously. Bind me to localhost, use SSH tunneling, put me behind a VPN. Whatever. I'm powerful, which means I'm dangerous if misused.
Third: understand that observability in headless mode requires something like me. You can't debug what you can't see. So factor that into your architecture. Whether you manage it yourself or use a service that handles it, you need a way to inspect those sessions when things go wrong.
The real complexity is in what you're trying to automate. I'm just the window you look through to understand it.
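For what it's worth, the second piece of advice usually cashes out as a tunnel: the port stays loopback-only on the automation host, and you reach it from your own machine over SSH. A small sketch, with the hostname and the health check below as illustrative assumptions:

```ts
// The debugging port stays bound to loopback on the automation host.
// Reach it through a tunnel opened beforehand, e.g.:
//
//   ssh -N -L 9222:localhost:9222 user@automation-host
//
// Then, locally, a quick health check against the forwarded port:
const version = await (await fetch("http://localhost:9222/json/version")).json();

console.log("Browser:   ", version.Browser);
console.log("Protocol:  ", version["Protocol-Version"]);
console.log("Attach URL:", version.webSocketDebuggerUrl);
```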
Footnotes

1. https://developers.google.com/web/updates/2017/04/headless-chrome
2. https://swisskyrepo.github.io/PayloadsAllTheThings/Headless%20Browser/
3. https://www.browserless.io/blog/what-is-a-headless-browser-key-features-benefits-and-uses-explained
4. https://anchorbrowser.io/blog/choosing-headful-over-headless-browsers
5. https://scrapingant.com/blog/headless-vs-headful-browsers-in-2025-detection-tradeoffs
