In March, a federal judge in San Francisco granted Amazon a preliminary injunction against an AI shopping agent called Comet. The agent browsed Amazon on behalf of users, compared prices, and completed purchases. The ruling turned on a distinction that sounds technical but carries real weight: a user giving an agent permission to act on their account is not the same as the platform giving that agent permission to access its systems. Your authorization and Amazon's authorization are legally different things.
The injunction is currently stayed on appeal. Comet is operational again. But the reasoning underneath it sits there, unresolved. If agents need platform authorization to transact, and platforms can revoke it under federal computer fraud law, then agents occupy a genuinely strange commercial position. They can spend money, compare prices, and complete purchases. They have no independent legal standing to do any of it.
Legal systems have absorbed new kinds of economic actors before. The absorptions tend to be ugly.
Corporate personhood is probably the closest analogy. The doctrine that corporations have constitutional rights traces back to a headnote written by a court reporter in 1886, summarizing Santa Clara County v. Southern Pacific Railroad. The Court's actual opinion dodged the personhood question entirely. But the headnote got cited as precedent, and within a decade over 230 state laws regulating corporations had been struck down. A margin note became the foundation of corporate law. The accountability framework lagged commercial reality by a generation.
Agents are in a version of that gap period now, and the gap is wide. The University of Chicago Law Review put it directly:
"Large swaths of liability law turn on whether the actor who causes harm had a certain intention. Agents don't have intentions."
If liability requires intent, agents may be functionally immune from it. Contract law isn't much help either. Forming a binding agreement generally requires legal personhood, and an agent's authority to accept terms on behalf of a business gets murky fast when the agent has autonomously identified which systems to interact with.
The EU AI Act becomes fully applicable August 2, 2026. It will classify most autonomous agents in high-impact commercial sectors as high-risk systems, triggering requirements for audit trails, human oversight, and identity management throughout the agent lifecycle. But the Act regulates deployers and providers of AI systems; it assigns responsibility to humans and organizations. So when the causal chain between a human decision and an agent's action stretches across multiple systems, multiple jurisdictions, and multiple intermediaries who may not even know the agent exists, responsibility diffuses. The Bank of England flagged earlier this year that literal human-in-the-loop oversight becomes unworkable as agentic systems proliferate. NIST is still working through whether existing identity frameworks apply to non-human actors in commercial contexts. That absence of a settled framework is worth noting on its own.
The honest read is that nobody has solved this. The infrastructure for agent commerce is being deployed now. The accountability framework is still a draft. And if the corporate personhood story suggests anything, it's that the resolution probably won't arrive through careful deliberation. It will arrive through a crisis, a court reporter's margin note, or a regulation written years after the damage. The gap is where the exposure lives.