In February 2024, a British Columbia tribunal ruled that Air Canada owed a grieving customer CAN$812.02 because its chatbot had promised a bereavement fare refund that didn't exist. The airline's defense was striking: it argued the chatbot was essentially a separate entity, responsible for its own statements. The tribunal didn't spend long on this.
"Liability is not avoided by automating the actions in question."
The damages were trivial. The logic was not.
Two years later, the enterprise world has settled on a metaphor for AI agents: team members. HBR published a piece urging companies to onboard agents the way they onboard employees, giving each one a defined identity, limited authority, and clear audit trails. Cisco's March 23 announcement used nearly identical language, mapping agents to "an accountable human manager." The framing is appealing because it borrows from something organizations already understand. A team member has a manager. The manager answers for the team member's work.
The metaphor flatters the technology. What makes human teams function is something subtle at every link in the delegation chain: the employee exercises judgment about whether the instruction makes sense. An analyst told to update a pricing record notices the number looks off and pauses. That micro-negotiation, that moment of friction, is where accountability actually lives: the pause before execution, the half-second of professional doubt. The "team member" framing implies the agent has something like professional judgment. It doesn't. And that absence is precisely the gap some new human role will eventually have to fill.
Agents don't pause. An agent that routes an approval incorrectly or updates a record with bad data acts at the speed of execution, not the speed of judgment. When something goes wrong, the trail leads to a configuration decision made weeks earlier by someone who may not remember making it.
The tooling is arriving faster than the organizational thinking. Microsoft's Agent 365, scheduled for general availability in May, offers visibility into agent behavior. Cisco is building identity frameworks for non-human workers. These are accountability instruments, and they presuppose an accountable person who doesn't yet have a title, a reporting line, or a place on the org chart. A Gravitee survey of 750 IT decision-makers found that only 22% of teams treat agents as independent identities. Over half of deployed agents operate without security oversight or logging. You can't hold someone responsible for a system they can't distinguish from the background noise of their own infrastructure.
The legal world has its default answer: the deployer is liable. The NTIA's AI Accountability Policy Report distributes responsibility across developers, deployers, and auditors. The EU's revised Product Liability Directive now treats software, including AI systems, as a product. Courts will find someone to hold responsible. Whether that someone knows, today, that they're the someone is a different matter.
What's emerging in outline is a role without a name, more operational than "prompt engineer" or "AI ethics officer": the person who owns the consequences of a non-human actor's behavior in a specific business context, someone who understands both the agent's logic and the organizational stakes well enough to answer for outcomes they didn't directly produce. You can see the shape of this role in the tooling being built around it. But the role hasn't crystallized, and the reason may be simple: it's a pure liability position. The tooling vendors are building dashboards for a person nobody wants to be. Owning agent outcomes means owning agent failures, and until organizations decide deliberately where that accountability sits, it will sit wherever the last incident happened to land.
Air Canada learned at a cost of CAN$812 that you can't disclaim responsibility by pointing at the machine. The organizational question that case left unanswered is who, exactly, should have been pointing at the machine's configuration before it spoke.

