You click a button. Nothing happens. You click again. Still nothing. Then, three seconds later, both clicks register at once and you've accidentally submitted a form twice. The page looked ready—button visible, styled, perfectly clickable. But it wasn't actually connected to anything yet.
This is the hydration gap. Modern websites send you a complete page, but none of it works until JavaScript finishes wiring everything up. The page exists in two states simultaneously: the HTML you can see, and the interactive version JavaScript is still building. Between those states lies a gap that's invisible when you're browsing one site, but becomes an operational problem when you're automating thousands.
The Two-State Page
Hydration is how JavaScript frameworks take server-rendered HTML and make it interactive. The server sends complete HTML. You see everything. But buttons have no event handlers attached yet. Forms won't submit. The JavaScript bundle needs to download, execute, and "hydrate" the static HTML by connecting all the interactive behavior.
Developers call this period the "uncanny valley". Users think they can interact, but nothing responds until JavaScript finishes. On fast connections, this might take 200 milliseconds. On mobile devices with slow networks, it can easily take 10 seconds. The user sees everything, assumes it's ready, tries to interact. Nothing happens.
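To make the two states concrete, here's a minimal client-entry sketch using React 18's hydrateRoot API; the component name and markup are illustrative, not any particular site's code. Until this bundle has downloaded and executed, the server-rendered button is visible and styled but has no onClick handler.

```tsx
// Client entry point: hydrate the server-rendered HTML instead of re-creating it.
// Until this bundle downloads and runs, the markup below is visible but inert.
import { useState } from "react";
import { hydrateRoot } from "react-dom/client";

function BuyButton() {
  const [submitted, setSubmitted] = useState(false);
  // This handler only exists on the page after hydration completes.
  return (
    <button onClick={() => setSubmitted(true)} disabled={submitted}>
      {submitted ? "Order placed" : "Buy now"}
    </button>
  );
}

// Attach event handlers to the existing DOM nodes inside #root
// rather than throwing the server's markup away and re-rendering.
hydrateRoot(document.getElementById("root")!, <BuyButton />);
```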
Modern websites send content twice: once as HTML, once as the JavaScript that recreates it. The client downloads the bundle, parses it, executes it, and rebuilds the exact same DOM tree the server already created, just to attach event handlers. This hydration phase is pure overhead: the client duplicates work the server already did.
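Here's roughly what sending content twice looks like from the server's side, sketched with React's renderToString; the ProductList component, the __STATE__ global, and the /client.js bundle path are placeholders, not a specific framework's conventions.

```tsx
// Server render sketch: the same content ships twice, once as HTML the browser
// paints immediately and once as serialized state the client bundle uses to
// rebuild the identical tree and attach handlers.
import { renderToString } from "react-dom/server";

type Product = { id: number; name: string };

function ProductList({ products }: { products: Product[] }) {
  return (
    <ul>
      {products.map((p) => (
        <li key={p.id}>{p.name}</li>
      ))}
    </ul>
  );
}

export function renderPage(products: Product[]): string {
  const html = renderToString(<ProductList products={products} />);
  return `<!doctype html>
<div id="root">${html}</div>
<script>window.__STATE__ = ${JSON.stringify(products)}</script>
<script src="/client.js"></script>`; // the bundle re-renders this exact tree to hydrate it
}
```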
When Thousands of Sites Lie Differently
At TinyFish, we run web agents across thousands of sites simultaneously. The two-state reality compounds in ways that break standard automation approaches.
Site A hydrates in 200 milliseconds on desktop, 8 seconds on mobile. Site B uses progressive hydration. Forms work immediately, but navigation doesn't. Site C has a hydration mismatch that only surfaces under load, silently breaking components. When you're running 10,000 concurrent sessions, these aren't edge cases. They're your operational baseline.
Here's what happens:
- You can't predict when pages are ready, so you wait. Arbitrary timeouts waste resources. Wait too short, you interact before handlers attach and trigger retries. Wait too long, you burn CPU cycles on nothing.
- You can't rely on browser events. DOMContentLoaded fires when the HTML is parsed and load fires when the page's resources finish downloading, but neither tells you when JavaScript has finished hydrating (see the sketch after this list).
- You need to wait for framework initialization, event handler attachment, and state recovery, none of which have standard signals.
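Here's a sketch of what bridging that gap can look like in practice, using Playwright. The waitForLoadState and waitForFunction calls are real Playwright APIs; the window.__APP_READY__ marker and the #submit selector are hypothetical, because every site exposes a different readiness signal, and many expose none at all.

```typescript
// Readiness probe sketch with Playwright. DOMContentLoaded and load both fire
// long before hydration finishes, so we also poll an app-specific marker.
// window.__APP_READY__ and #submit are hypothetical; every site differs.
import { chromium } from "playwright";

async function clickWhenHydrated(url: string): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  await page.goto(url, { waitUntil: "domcontentloaded" }); // HTML parsed; handlers may not exist yet
  await page.waitForLoadState("load"); // resources loaded; still says nothing about hydration

  // Poll for a framework-set "hydrated" flag, with a hard timeout so a site
  // that never sets it doesn't hold the session open forever.
  await page.waitForFunction(
    () => (window as unknown as { __APP_READY__?: boolean }).__APP_READY__ === true,
    undefined,
    { timeout: 15_000 },
  );

  await page.click("#submit"); // only now is the click likely to be wired up
  await browser.close();
}
```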
Then frameworks disable the checks that would help. In development mode, Vue verifies that the client render matches the server render; a mismatch aborts hydration and triggers a full client-side re-render. In production mode, that check is disabled for performance, so the framework assumes everything matches and proceeds even when it shouldn't.
Hydration errors that would be caught in development slip through to production. Sometimes there's a console error. Sometimes there isn't. Components just stop working. One site's hydration failure consumes extra CPU re-rendering everything. Another leaves handlers unattached, breaking automation. A third enters an error state that requires a full page reload. Each failure mode requires different handling, different retry logic, different resource management.
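For illustration, here's a contrived version of the kind of mismatch those development-mode checks catch, written as a React component for brevity; Vue's check described above targets the same class of bug.

```tsx
// Contrived mismatch: the server renders one timestamp, the client another,
// so the hydrated tree can never match the HTML that was shipped. Development
// builds flag this; production builds assume the trees match and quietly proceed.
import { useState } from "react";

export function LastUpdated() {
  // Evaluated once on the server and again on the client: two different values.
  const [renderedAt] = useState(() => new Date().toISOString());
  return <time dateTime={renderedAt}>Updated {renderedAt}</time>;
}
```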
The Browser's Broken Mental Model
The hydration gap exists because the browser's mental model broke. Browsers were designed for a world where "loaded" meant "ready." JavaScript frameworks introduced this gap, a period where everything looks ready but isn't, and there's no standard way to bridge it.
Consumer browser automation breaks down at enterprise scale because it assumes "page load" means "ready to interact." It assumes resource consumption is predictable. It assumes failures are detectable. None of these assumptions hold across thousands of modern JavaScript applications.
The page looks ready but isn't. This gap between what users see and what actually works used to be a minor UX annoyance. At enterprise scale, it's an infrastructure problem requiring you to treat "page load" as a multi-phase process with no reliable completion signal.
Your automation either handles one site's hydration timing or it doesn't. But infrastructure that works across thousands of sites needs to handle each one lying differently about when it's actually ready.
Things to follow up on...
- React's improved diagnostics: React now provides detailed hydration error messages with component stack traces showing exactly where in the hierarchy mismatches occur, making production debugging significantly easier.
- Qwik's resumability approach: Instead of traditional hydration, Qwik serializes application state on the server and resumes execution only on user interaction, achieving near-instant time to interactive by skipping the hydration step entirely.
- Twitter's 2020 migration: When Twitter re-engineered its mobile site from SPA to server-rendered application, they achieved a 30% reduction in Time to First Byte and improved performance on slower networks.
- Google's rendering guidance shift: As of 2024, Google no longer recommends dynamic rendering for SPAs, instead suggesting server-side rendering, static rendering, or hydration for SEO compatibility.

