Ninety percent of organizations pressure their security teams to loosen identity controls for AI. That number comes from Delinea's March 2026 survey of over 2,000 IT decision-makers actively using or piloting AI, and the instinct is to read it as a security failure. Look closer and you see organizational incentives doing exactly what they were designed to do.
Two numbers from a companion Gravitee survey of 750 CIOs and engineering leaders: 80.9% of technical teams have moved past planning into active testing or production with AI agents. Only 14.4% report that every agent went live with full security and IT approval. That gap is the predictable output of two functions optimizing for different things, each performing as designed.
Business units shipping agents are responding to competitive timelines and quarterly targets that already assume AI-driven productivity gains. Security teams flagging those deployments are responding to the knowledge that most deployed agents operate without the oversight infrastructure to catch problems before they compound. Both sides are behaving rationally. Their reward structures just point in opposite directions.
The structural predecessor is shadow IT. Through the early 2010s, employees adopted unauthorized tools faster than IT-approved channels could accommodate them. Studies suggest up to 40% of IT spending eventually flowed through unsanctioned channels. Security teams tried enforcement. Enforcement mostly failed. What eventually worked, where anything worked, was governance redesign: formal evaluation paths that gave business units a legitimate way to deploy tools while security got structured visibility.
The AI agent version of this tension is harder to resolve, and for a specific reason. Shadow IT tools were passive. A personal Dropbox account stored files in the wrong place; the risk was data leakage through unauthorized storage. Agents update records, route approvals, initiate purchasing decisions. The risk compounds differently when the unauthorized tool can take consequential action, at speed, in production.
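
To make that difference concrete, here is a minimal sketch, in Python with entirely hypothetical names, of the kind of gate that "oversight infrastructure" implies: consequential agent actions are held for human approval instead of executing at machine speed. A passive shadow-IT tool never needs this branch; an agent does.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical gate: actions an agent may take unattended vs. actions
# that must wait for a human approver. All names are illustrative.
CONSEQUENTIAL = {"update_record", "route_approval", "initiate_purchase"}

@dataclass
class AgentAction:
    agent_id: str        # the non-human identity acting
    action: str          # what the agent wants to do
    target: str          # the record or resource affected
    requested_at: datetime

def execute(act: AgentAction, approver: str | None = None) -> str:
    """Run low-risk actions immediately; queue consequential ones for review."""
    if act.action in CONSEQUENTIAL and approver is None:
        return f"QUEUED: {act.action} on {act.target} awaits human approval"
    return f"EXECUTED: {act.action} on {act.target}"

if __name__ == "__main__":
    act = AgentAction("agent-417", "initiate_purchase", "PO-2291",
                      datetime.now(timezone.utc))
    print(execute(act))                   # held: consequential, no approver
    print(execute(act, approver="jdoe"))  # runs once a human signs off
```
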
The Delinea report surfaces something it calls the "AI security confidence paradox," and the numbers are worth reading carefully.
| What organizations say | Share of respondents |
|---|---|
| Identity security posture is ready for AI-driven automation | 87% |
| Identity governance around AI systems is deficient | 46% |
| Cannot always explain why a non-human identity performed a privileged action | 80% |
These numbers coexist comfortably if you understand what's being measured. The confidence reflects identity security as organizations have practiced it for years, with human users and familiar access patterns. The deficiency reflects the specific, novel challenge of non-human identities operating at machine speed. Organizations are measuring their readiness with a ruler built for a different problem.
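
It also suggests what closing the 80% gap would require. To explain why a non-human identity performed a privileged action, the audit trail has to capture delegation and justification at decision time, not reconstruct them afterward. A hypothetical sketch of such a record, with field names that are assumptions rather than any vendor's schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical audit record: the minimum context needed to answer
# "why did this non-human identity perform this privileged action?"
# Field names are assumptions, not drawn from any vendor's schema.
@dataclass
class PrivilegedActionRecord:
    agent_id: str        # the non-human identity
    delegated_by: str    # the human or service that granted the authority
    scope: str           # the permission actually exercised
    action: str          # what was done
    justification: str   # why, captured at decision time
    occurred_at: str     # ISO 8601 timestamp

def log_action(record: PrivilegedActionRecord) -> str:
    """Serialize the record for an append-only audit store."""
    return json.dumps(asdict(record))

print(log_action(PrivilegedActionRecord(
    agent_id="svc-agent-finance-02",
    delegated_by="controller@example.com",
    scope="ledger:write",
    action="adjusted accrual on account 4410",
    justification="month-end close task #8812",
    occurred_at=datetime.now(timezone.utc).isoformat(),
)))
```
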
Better tooling helps at the margins. A security team with a nicer dashboard still needs organizational backing to slow down a deployment that a business unit is racing to ship. A business unit leader whose bonus depends on shipping an agent workflow by Q2 and a security director whose performance review hinges on zero critical incidents are not going to resolve their disagreement because someone improved the monitoring interface. They need a shared definition of what "deployed responsibly" means, with organizational weight behind it.
The shadow IT era eventually produced something like that. It took years, and it arrived after enough incidents accumulated that prevention became cheaper than cleanup. Whether organizations can afford the same learning curve with AI agents is genuinely unclear. Shadow IT stored files in the wrong place. AI agents make decisions in the wrong way, at machine speed, with organizational authority. The exposure is categorically different.