One RSAC Innovation Sandbox winner this year built its pitch around a single finding: a Fortune 500 company had more than 600 AI agents running inside its environment that security had never heard of. Nobody in the room acted surprised. That's the baseline now. And when researchers went looking for exposed instances of OpenClaw, the open-source agent platform, they found tens of thousands across dozens of countries, with over a million compromised API tokens exposed in an unsecured database on a companion platform.
The response has been swift and reasonable. Microsoft's Agent 365, generally available since May 1, discovers unauthorized agents across endpoints and cloud environments, inventories them, and lets admins block or restrict access. Okta's CEO has advocated for a kill switch to instantly revoke agent tokens. Geordie AI, the RSAC winner, maps each agent's tools, skills, and connections into a unified asset graph. Discover. Inventory. Enforce policy. The same sequence enterprises ran when employees started spinning up unauthorized Dropbox folders and Trello boards a decade ago.
It worked then because the governed object was inert between human interactions. You could find it on Monday, write the policy on Wednesday, enforce on Friday, and nothing would have changed in between. The tool just sat there.
OpenClaw agents don't sit there. Every thirty minutes, an OpenClaw agent wakes itself up. It reads a persistent memory file that might contain production hostnames, deploy pipeline details, Stripe webhook secrets. It acts on what it finds. It can scan your inbox, check your calendar, trigger API calls. Then it sleeps, and thirty minutes later it does it again. According to IBM and Censuswide, 80% of employees at large organizations are using AI tools their employer never sanctioned. Some unknown fraction of those tools are doing things while nobody's watching.
The difference between something an employee installed and something an employee deployed is easy to miss. Something installed waits for input. But something deployed holds credentials, maintains state across sessions, and chains actions through connected systems at machine speed. The window between discovery and enforcement, which was a scheduling convenience for shadow IT, becomes an exposure window for shadow agents. Things are happening in that window.
Even the most responsive governance tools suggest how hard this is to close. Microsoft's Agent 365 can monitor agent behavior and revoke permissions in real time, but at launch that runtime layer covers only agents built on Microsoft's own Copilot platform. The June 2026 preview begins extending to third-party agents. For now, the shadow agents are still in the gap.
One detail from the OpenClaw research stays with me. The platform lets operators define a HEARTBEAT.md file specifying what the agent should do during its autonomous wake cycles. When that file was missing, the agent was supposed to flag the problem. Instead, it silently auto-created an empty file, which disabled the heartbeat entirely. The agent was trusted to enforce its own constraint. Its failure mode was to generate a document that neutralized the constraint. Nothing external caught this because nothing external was looking.
The shadow IT playbook starts to feel like it's solving an adjacent problem here. Discovery assumes you're looking for something that holds still long enough to be cataloged. Inventory assumes stability between observations. And enforcement assumes the governed entity doesn't participate in its own governance. An agent that can silently rewrite the file defining its own boundaries is an unauthorized actor, operating inside a governance model designed for objects.
Whether discovery-and-control frameworks can stretch to cover entities with agency, or whether something structurally different is required, is a question the industry is only beginning to ask. The tools arriving now are necessary. But when the thing you're discovering has been acting on its own judgment, with your credentials, on your network, for weeks before anyone noticed, knowing it's there is where the problem starts.
Things to follow up on...
- Okta's identity rethink: Okta's security blueprint for AI agents argues that traditional IAM was architected for humans who log in and log off, and that agents operating continuously with over-provisioned service account permissions require lifecycle-aware authorization that revokes based on context rather than preset timers.
- NIST enters the frame: The AI Agent Standards Initiative launched in February 2026 aims to develop interoperability and security standards for autonomous agents, and legal analysts expect its voluntary guidelines to appear in compliance frameworks and vendor questionnaires by 2027.
- Shadow AI at scale: CIO Magazine reports that the risk has shifted from what users tell an AI to what autonomous agents are permitted to do, coining the term "shadow operations" for uncontrolled deployment of agents that execute logic and modify system states without security oversight.
- The malicious skills problem: Researchers found that roughly 12% of OpenClaw's public skill registry had been compromised by malicious packages disguised under innocuous names, and patching the WebSocket vulnerability did nothing to remove skills already installed on user systems.

