In 2025, AI usage among workers jumped 13%, but confidence in the technology plummeted 18%. Workers were handed tools without training, context, or support—yet for those who pushed through, a different threshold appeared.
Six months ago, the ritual was non-negotiable. Open the monitoring dashboard, scan for red indicators, spot-check extracted records against source websites. Fifteen minutes every morning, sometimes longer when something broke. The knowledge accumulated slowly: which hotel booking sites needed residential proxy rotation, which authentication patterns failed on Japanese properties, which DOM structures shifted without warning. Knowing that Rakuten Travel required different header configurations than Booking.com was expertise. Debugging why login.example.com sessions wouldn't transfer to app.example.com was a specialized skill.
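To make the texture of that expertise concrete, here is a minimal, hypothetical sketch of the kind of per-site configuration an analyst might have maintained by hand. The site names echo the ones above, but the header values, proxy pool names, and selectors are illustrative assumptions, not the actual setup.

```python
from urllib.parse import urlparse

# Hypothetical per-site extraction settings of the kind described above.
# Every value is illustrative; the real configurations lived in scripts
# tuned over months of morning spot-checks.
SITE_CONFIGS = {
    "travel.rakuten.co.jp": {
        "headers": {"Accept-Language": "ja-JP", "User-Agent": "Mozilla/5.0"},
        "proxy_pool": "residential-jp",          # Japanese properties needed residential exits
        "price_selector": "span.price--total",    # DOM hook that shifted without warning
    },
    "www.booking.com": {
        "headers": {"Accept-Language": "en-US", "User-Agent": "Mozilla/5.0"},
        "proxy_pool": "datacenter-eu",
        "price_selector": "div[data-testid='price']",
    },
}

def config_for(url: str) -> dict:
    """Pick the hand-tuned settings for a booking site, or fall back to defaults."""
    host = urlparse(url).netloc
    return SITE_CONFIGS.get(
        host,
        {"headers": {}, "proxy_pool": "default", "price_selector": None},
    )
```

The point is not the code; it is that every entry encoded a lesson learned from a failure someone had to notice by hand.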
Now the dashboard shows green. It showed green yesterday. It showed green last week. The analyst can't remember the last authentication failure that required manual intervention. The extraction jobs complete overnight across personalized booking flows and region-specific pricing displays, the data flows into the model, and the morning check has quietly become vestigial. No policy eliminated it. It became unnecessary, one day at a time.
The disorientation arrives during project planning. Someone asks how quickly they can scale data collection to three thousand properties across five new markets. The analyst starts calculating based on old constraints—configuration time per site, debugging cycles for authentication edge cases, validation overhead—then stops. Those constraints don't bind anymore. Human attention used to gate everything: hours spent handling site-specific quirks, adapting to layout changes, verifying output quality across different user sessions. Now infrastructure adapts without requiring intervention.
The estimate that comes out—"couple of days"—feels both accurate and unsettling. It used to take weeks. The operational pace has shifted without conscious updating of what's possible.
The space where tool management used to live gets filled with different work: more time evaluating AI-generated synthesis, checking whether competitive intelligence summaries capture the right nuances. The mental effort has relocated. Extracting data from sites with personalized pricing required thinking about configuration, proxy rotation, authentication edge cases. Now the analyst thinks about trust: whether the automated extraction captured what matters, whether the synthesis missed subtle patterns, whether the summaries reflect reality.
Competence looks different too. Mastery through configuration—knowing which sites need residential proxies, which authentication flows break, which DOM structures shift—has given ground to judgment through evaluation. Knowing how to extract matters less than knowing what the extraction means.
The language shifts. "Running the analysis" now means reviewing what the system produced. "Checking the data" means spot-validation, not comprehensive verification. The phrases haven't changed. Their referents have drifted. Colleagues perform competence in a domain where competence itself has transformed.
Late afternoon. The analyst opens the monitoring dashboard—first time in days, maybe longer—and realizes the system has been running without oversight. If authentication failures happened across those thousands of hotel booking sessions, they resolved before anyone noticed. The infrastructure has become solid enough to depend on, opaque enough that the internal mechanics have faded from daily awareness.
The analyst never decided to stop checking the verification script. The checking just stopped. Gradually enough that there was no single moment to mark. Trust calcifying into habit. The question "should I verify?" dissolving through accumulated irrelevance.
The day ends with the dashboard still open, green indicators glowing. Tomorrow will bring the same work, the same decisions, the same deliverables. The substrate has changed. Configuring authentication flows, handling site-specific edge cases, knowing which properties need residential proxies—these tasks that used to prove competence have become invisible infrastructure.
And it already feels normal.
Things to follow up on...
- The friction paradox: While automation promises to free workers for high-value tasks, research shows tools that relieve drudgery can also remove the challenging friction that gives work meaning and builds crucial skills.
- Work intensification reality: Despite automation's promise of freed time, employees are working at a faster pace, taking on broader scope, and extending work into more hours of the day without being asked to do so.
- The black box trust problem: Even the creators of advanced AI systems don't fully understand how their models work internally, creating an unprecedented challenge for workers who must depend on opaque systems.
- Entry-level identity disruption: Foundational early-career tasks are increasingly handled by AI tools, eliminating the "learn by doing" experiences that once taught workers how to do their jobs.

