Gravitee's CEO recently compared the 3 million AI agents operating inside corporations to Walmart's entire global headcount. It landed as something more than a scale comparison. It was a diagnosis.
Walmart manages 2.1 million associates across nearly 20 countries. That works because of organizational infrastructure so mature it's invisible. Every associate has a record. Someone owns their onboarding. Credentials expire on schedule. When someone leaves, a coordinated process deactivates accounts, recovers assets, transfers responsibilities. HR practitioners call this the "joiner-mover-leaver" model. It's unglamorous. It's also the reason a workforce of millions is governable rather than chaotic.
Now consider what exists for the agent population. Gravitee's survey found 22.5% of organizations have no formal catalog of their agents. Another quarter track them on spreadsheets. On average, 53% of an organization's agents run unmonitored. The Cloud Security Alliance's February report puts the number maintaining real-time inventory at 21%, with ownership fragmented across security, IT, and AI functions. No single group leads.
Security teams keep filing these as credential and access-control problems. Read the list again: no inventory, no ownership, no lifecycle tracking, no offboarding process. That's a workforce-management audit with every line item failing. Microsoft's security team made this connection explicitly in January, recommending organizations treat every agent as a first-class identity with a human sponsor governing its lifecycle. They compared the current moment to shadow IT, the last time autonomous actors proliferated without governance. The fix then was organizational. Inventory, ownership, policy.
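Microsoft's "first-class identity with a human sponsor" recommendation maps almost directly onto the joiner-mover-leaver model. A minimal sketch of what such a record could look like, purely illustrative: the field names, states, and transitions below are assumptions, not any vendor's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class LifecycleState(Enum):
    JOINER = "joiner"   # provisioned, onboarding in progress
    ACTIVE = "active"   # in service, credentials current
    MOVER = "mover"     # scope or ownership changing; access under review
    LEAVER = "leaver"   # offboarded: credentials revoked

# Legal transitions mirror the human JML model: no state is skipped,
# so every agent that leaves service passes through a deliberate offboarding step.
ALLOWED = {
    LifecycleState.JOINER: {LifecycleState.ACTIVE},
    LifecycleState.ACTIVE: {LifecycleState.MOVER, LifecycleState.LEAVER},
    LifecycleState.MOVER:  {LifecycleState.ACTIVE, LifecycleState.LEAVER},
    LifecycleState.LEAVER: set(),
}

@dataclass
class AgentIdentity:
    agent_id: str
    human_sponsor: str           # the accountable person behind the agent
    state: LifecycleState = LifecycleState.JOINER
    credentials: list = field(default_factory=list)

    def transition(self, new_state: LifecycleState) -> None:
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        if new_state is LifecycleState.LEAVER:
            # Offboarding revokes access, the way HR deactivates accounts.
            self.credentials.clear()
        self.state = new_state
```

The point of the state machine is the one the survey data makes: without a LEAVER state that someone is obligated to reach, agents simply never get offboarded.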
And that changes who's responsible. Security teams can lock down credentials. But inventory, ownership assignment, lifecycle governance, accountability chains: these require cross-functional coordination of the kind workforce management handles for human employees. Gartner projects that over 50% of AI initiatives will be halted by unresolved identity challenges. Note the verb: halted. Organizations stopping because they've built something they can't manage. An organizational-capacity signal if there ever was one.
The workforce frame is clarifying right up to the point where it starts to break. And the breaks are worth paying attention to.
Some are intuitive and manageable. Agents don't quit or retire; they have no natural lifecycle events triggering administrative review. They proliferate without hiring decisions, budget approval, or headcount conversations. Discipline solves these. The deeper break is stranger. An agent can be cloned. The same logic runs in parallel across instances, forked and instantiated without organizational awareness. Identity ceases to be unique. You can't inventory what you can't individuate. Every workforce-management system ever built assumes that a worker is a singular, countable thing. Agents violate that assumption at the architectural level, and no amount of better spreadsheets fixes it.
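The individuation problem can be made concrete: once agents are cloned, "how many agents do we have?" has two different answers. A toy sketch (all names invented) that separates the logical agent, the thing you govern, from its running instances, the things that actually act:

```python
import uuid
from collections import defaultdict

class AgentInventory:
    """Toy inventory distinguishing logical identities from running instances."""

    def __init__(self):
        # logical_id -> set of instance ids
        self.instances = defaultdict(set)

    def spawn(self, logical_id: str) -> str:
        # Cloning: any number of instances can share one logical identity.
        instance_id = f"{logical_id}/{uuid.uuid4().hex[:8]}"
        self.instances[logical_id].add(instance_id)
        return instance_id

    def logical_count(self) -> int:
        return len(self.instances)

    def instance_count(self) -> int:
        return sum(len(s) for s in self.instances.values())

inv = AgentInventory()
for _ in range(3):
    inv.spawn("invoice-triage")   # three clones of the same agent
inv.spawn("contract-review")

# Two logical identities are now backed by four running instances;
# a spreadsheet tracking either number alone misses half the picture.
```

A human-workforce system only ever needs the first count; agent governance needs both, which is exactly the assumption the article says breaks at the architectural level.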
Accountability fractures too. Human workforces have structural chains: a manager owns outcomes for a team. In multi-agent architectures, delegation passes through layers. An agent triggers an action three delegation steps deep, and there's no persistent trail back to the human who set it in motion. The CSA report found organizations often cannot determine what agents did, what they accessed, or on whose behalf. Eighty-four percent doubted they could pass a compliance audit focused on agent behavior.
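Mechanically, the missing audit trail is a provenance-propagation problem: each delegation hop has to carry the originating human forward, or the chain is lost by step three. A minimal sketch of that propagation, with illustrative names (this is a pattern, not any specific framework's API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DelegationContext:
    """Carried along every delegation hop so that any action, however deep,
    still names the human who set the chain in motion."""
    on_behalf_of: str    # the originating human principal
    chain: tuple = ()    # agent ids traversed so far

    def delegate(self, agent_id: str) -> "DelegationContext":
        # Immutable: each hop yields a new context with one more link.
        return DelegationContext(self.on_behalf_of, self.chain + (agent_id,))

def audit_line(ctx: DelegationContext, action: str) -> str:
    # One log entry answers all three CSA questions: what was done,
    # by which chain of agents, and on whose behalf.
    return f"{action} by {' -> '.join(ctx.chain)} on behalf of {ctx.on_behalf_of}"

ctx = DelegationContext(on_behalf_of="dana@example.com")
ctx = ctx.delegate("planner").delegate("researcher").delegate("fetcher")
entry = audit_line(ctx, "read:crm/accounts")
```

An action three delegation steps deep still resolves to a human sponsor precisely because the context travels with the work instead of living in any one agent's logs.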
Credentialing, lifecycle management, ownership assignment: these workforce patterns transfer cleanly to agent populations. But human workforce governance was built for actors who are singular, finite, and mortal. The organizations now constructing agent governance are borrowing from that model. The parts they can't borrow are precisely where exposure accumulates: agents are none of those three things, and the governance models haven't caught up.
Things to follow up on...
- NIST wants your input: The AI Agent Standards Initiative has opened its Request for Information on AI Agent Security with comments due March 9, the first formal U.S. government push specifically targeting agent identity and interoperability.
- Agent identity tooling emerges: Veza launched its enterprise Agent Identity Control Plane today, purpose-built to visualize and govern access across humans, non-human identities, and AI agents as the category crystallizes around the inventory gap.
- MIT Sloan's accountability debate: A panel of 36 experts split on whether agentic AI demands entirely new management frameworks or whether existing human-accountability models can adapt, with a clear majority arguing for reimagined approaches to match autonomous actors at scale.
- Shadow agents outpace shadow IT: Security practitioners are warning that ungoverned agent proliferation may overtake shadow IT as the top breach risk in 2026, a pattern visible in the 21,000+ exposed OpenClaw instances discovered running without oversight.

