Dolores "Dee" Bridgewater spent fourteen years in hotel operations before moving to enterprise IT at a large Midwestern insurance company. She is now, by her own description, "the person who onboards the robots." Her actual title is Senior Process Analyst. Her actual job, documenting the unwritten rules of claims processing so that AI agents stop making expensive mistakes, has no title, no budget line, and no entry on the org chart. We spoke with her over video call from a conference room she'd booked under the name "Agent Training — DO NOT CANCEL," a trick she learned after facilities deleted her recurring hold three times. Dee Bridgewater is not a real person, but the role she describes is one that a growing number of enterprises have accidentally invented.
You came from hotels. How does someone go from managing a Marriott to training AI agents at an insurance company?
Dee: I was really good at onboarding people who didn't know anything. At a hotel, every new front desk hire shows up thinking the job is checking people in. The job is knowing that Mr. Alderman in 412 will call about the thermostat within forty minutes of arrival, and that you say "I'll send someone right up" instead of "Have you tried adjusting it?" Because he has. And he's furious about it.
So I moved into process optimization at the insurance company about seven years ago. I was the person who documented how claims actually get processed. When the AI initiative started, someone said, "Dee knows how the sausage gets made," and suddenly I'm in a room with engineers explaining that no, you cannot route every water damage claim the same way because there are fourteen county-level exceptions in Ohio alone.
What does a typical day look like?
Dee: Three hours reading agent outputs. Not code. Outputs. What did the agent actually do with this claim? Did it flag the right things? Did it miss the thing that Brenda in Toledo would have caught in thirty seconds?
Then I write what I call "the unwritten manual." It's 340 pages now, all contextual knowledge that prompt templates don't capture. Things like: if a policyholder mentions a sump pump in the same sentence as a finished basement, that's a different risk profile than a sump pump mentioned alone. Nobody programmed that. Brenda just knew it.
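Rules like that sump pump example are concrete enough to live outside the prompt as explicit checks. The sketch below is a guess at what one entry in the "unwritten manual" might look like once translated into code; the field names and the rule itself are illustrative, not her company's actual system.

```python
# Hypothetical sketch: one piece of "Brenda knowledge" expressed as an
# explicit check over an agent's proposed claim classification.
# Field names and the rule itself are illustrative assumptions.

def basement_sump_pump_rule(claim_text: str, risk_flags: set[str]) -> set[str]:
    """Escalate risk when a sump pump is mentioned alongside a finished basement."""
    text = claim_text.lower()
    mentions_sump = "sump pump" in text
    mentions_finished_basement = "finished basement" in text
    if mentions_sump and mentions_finished_basement:
        # A sump pump next to a finished basement implies more water-damage
        # exposure than a sump pump mentioned on its own.
        return risk_flags | {"elevated_water_damage_exposure"}
    return risk_flags


flags = basement_sump_pump_rule(
    "Policyholder reports the sump pump failed in the finished basement.",
    {"water_damage"},
)
print(flags)  # {'water_damage', 'elevated_water_damage_exposure'}
```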
The rest of my day is sitting at what G2 calls the "approval gate" [1], the point where the agent proposes an action and a human validates it. Except I'm not just validating. I'm watching for patterns. If the agent keeps misclassifying the same type of claim, I need to figure out why and translate that into something the engineering team can act on. Going from "this feels wrong" to "here's the specific contextual rule it's missing." That translation work is the actual job.
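The gate itself is a simple pattern: the agent proposes, a human approves or rejects, and rejections are tallied so recurring misses surface as something engineering can act on. A minimal sketch follows, assuming a hypothetical `ProposedAction` record and an in-memory tally; a real deployment would log decisions to a queue or database.

```python
# Minimal sketch of an approval gate: the agent proposes, a human decides,
# and rejection reasons are tallied so recurring misclassifications surface.
# All names here (ProposedAction, approval_gate, etc.) are hypothetical.
from collections import Counter
from dataclasses import dataclass


@dataclass
class ProposedAction:
    claim_id: str
    action: str   # e.g. "route_to_fast_track"
    agent: str    # which agent proposed it


def approval_gate(proposal: ProposedAction, approved: bool, reason: str,
                  rejection_reasons: Counter) -> bool:
    """Record the human decision; keep a running tally of why proposals fail."""
    if not approved:
        rejection_reasons[(proposal.agent, reason)] += 1
    return approved


rejections: Counter = Counter()
approval_gate(ProposedAction("CLM-1042", "route_to_fast_track", "claims_intake"),
              approved=False, reason="missed county-level exception",
              rejection_reasons=rejections)
# The top of this tally is the "pattern" that gets translated into a contextual rule.
print(rejections.most_common(3))
```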
What's surprised you most?
Dee: I assumed the AI was the problem, that the model is dumb and doesn't understand insurance. But the model is fine. Honestly, it's pretty good at individual tasks. What kills us is coordination.
We have six agents. Claims intake, document processing, coverage verification, liability assessment, payment routing, customer communication. For the first four months, none of them talked to each other. The coverage verification agent would flag something and the liability agent had no idea. Six people working in separate rooms with the doors closed.
About half of deployed agents operate in complete isolation, no shared context, no awareness of what other agents are doing [2]. That matched exactly what I was seeing. UC Berkeley published a taxonomy that found coordination breakdowns account for roughly 37% of all multi-agent failures [3]. Honestly? I'm surprised it's that low.
Here's the thing that really got me. A single agent doing the whole job would probably outperform our six-agent setup. We split the work for organizational reasons. The org chart wanted six agents. The work didn't.
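The coordination failures Dee describes are usually attacked with some form of shared context: a store every agent reads before acting and writes to afterward. The sketch below assumes an in-memory store keyed by claim ID; production systems would back this with a database or message bus, and the function names are hypothetical.

```python
# Minimal sketch of shared context between agents, keyed by claim ID.
# The in-memory dict is an assumption; production systems would persist this.
from collections import defaultdict

shared_context: dict[str, list[dict]] = defaultdict(list)


def post_finding(claim_id: str, agent: str, finding: str) -> None:
    """An agent publishes what it learned so downstream agents can see it."""
    shared_context[claim_id].append({"agent": agent, "finding": finding})


def read_findings(claim_id: str) -> list[dict]:
    """Any agent can read everything already known about the claim."""
    return shared_context[claim_id]


post_finding("CLM-1042", "coverage_verification", "policy excludes sewer backup")
# The liability agent now sees the coverage flag instead of working blind.
print(read_findings("CLM-1042"))
```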
You mentioned Brenda. Do you think about your work as replacing people like her?
Dee: God, no. I'm trying to bottle Brenda. She retired two years ago and took forty years of institutional knowledge with her. The agent replaces the version of Brenda's job that was already written down. My job is capturing everything that wasn't.
Why doesn't this role have a title?
Dee: (laughs) Because it doesn't fit anywhere. I don't write code, so I'm not engineering. I don't process claims, so I'm not operations. I'm not testing software. I'm not teaching humans. I'm translating institutional knowledge into a format that a language model can use. Tell me what department that belongs to. I'll wait.
MIT Sloan published a piece recommending that companies create an "HR for agents" function. Recruiting, onboarding, evaluating, retraining, retiring agents [4]. I read that and thought, someone finally described my job. But it's a recommendation. What actually exists is me, in a conference room I have to defend from facilities, writing page 341 of a document that three engineers read and nobody budgets for.
A survey of 650 VPs found that unclear organizational ownership is one of the top reasons agent deployments fail at scale [5]. I believe it. The ownership vacuum is where I live. I own the agent's contextual knowledge. I don't own a budget line.
What happens if you leave?
Dee: (long pause)
The document exists. But it only captures maybe 60% of what I know, the explicit rules I can write down. The other 40% is pattern recognition I haven't figured out how to articulate yet. I know when an agent output looks wrong. I can't always tell you why. It's the same thing Brenda had. I'm becoming my own Brenda problem.
Amazon's engineering team published something about how agents in production require continuous monitoring to catch what they called "agent decay," where performance degrades slowly, no alert fires, and the outputs just get quietly worse over time [6]. I catch that. I catch it because I've been reading these outputs every day for eighteen months and I can feel the drift. The way a hotel GM can walk through a lobby and know something's off before they can name what.
If I leave, the document stays. The pattern recognition walks out the door. And nobody will know it's gone until the claims start going sideways, and by then they'll blame the model, or the data, or bad luck. Anything but the missing person whose job never officially existed.
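The "agent decay" she describes is exactly the kind of slow slide that never trips a single-day alert. One common hedge is to compare a recent window of human-approval rates against a longer baseline; the sketch below is illustrative only, with made-up window sizes and threshold.

```python
# Hedged sketch of catching slow "agent decay": compare a recent window of
# human-approval rates against a longer baseline and flag the quiet drift.
# Window sizes and tolerance are illustrative assumptions, not tuned values.

def drift_alert(daily_approval_rates: list[float],
                recent_days: int = 14, baseline_days: int = 90,
                tolerance: float = 0.05) -> bool:
    """Return True if recent approvals fell noticeably below the baseline."""
    if len(daily_approval_rates) < recent_days + baseline_days:
        return False  # not enough history yet
    baseline = daily_approval_rates[-(recent_days + baseline_days):-recent_days]
    recent = daily_approval_rates[-recent_days:]
    baseline_mean = sum(baseline) / len(baseline)
    recent_mean = sum(recent) / len(recent)
    # No single day trips an alert; only the slow slide does.
    return recent_mean < baseline_mean - tolerance


# Example: a long stretch of ~0.95 approvals sliding to ~0.88 in recent weeks.
history = [0.95] * 90 + [0.88] * 14
print(drift_alert(history))  # True
```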
If you could change one thing about how companies deploy agents?
Dee: Hire the Brenda translator before you deploy. Not after the first failure. Before. Because by the time you realize you need someone who understands how the work actually works, you've already built the system around wrong assumptions, and now you're retrofitting institutional knowledge into an architecture that wasn't designed to hold it.
In hotels, we never let a new employee work the front desk alone on their first day. Not because we were sentimental. Because the cost of a bad first interaction with a guest is higher than the cost of a training buddy. Same principle. Cheaper to onboard correctly than to apologize later.
But try putting "training buddy for the AI" in a budget request. See how far that gets you.
Footnotes
1. G2, "Enterprise AI Agents Report: Industry Outlook for 2026" (Dec. 17, 2025): https://learn.g2.com/enterprise-ai-agents-report
2. Salesforce 2026 Connectivity Report, finding that 50% of deployed AI agents operate in complete isolation with no shared context.
3. MAST Taxonomy, arXiv:2503.13657 (UC Berkeley, NeurIPS 2025): analysis of 1,600+ agent traces across 7 frameworks identifying coordination breakdowns as 36.9% of all failures.
4. MIT Sloan Management Review, "The Emerging Agentic Enterprise" (Nov. 18, 2025): https://sloanreview.mit.edu/projects/the-emerging-agentic-enterprise-how-leaders-must-navigate-a-new-age-of-ai/
5. Digital Applied, March 2026 survey (n=650 VPs+), identifying unclear organizational ownership as one of five structural gaps accounting for 89% of agent scaling failures.
6. AWS Machine Learning Blog, "Evaluating AI agents: Real-world lessons from building agentic systems at Amazon" (Feb. 2026): https://aws.amazon.com/blogs/machine-learning/evaluating-ai-agents-real-world-lessons-from-building-agentic-systems-at-amazon/
