The following is a hypothetical interview. Priya Castellano is a composite character — not a real person, but a construction drawn from verified industry data, regulatory filings, and the reported experiences of enterprise technology leaders in financial services. Every statistic cited is real. The person citing them is not. We think the conversation is more honest for it.
Priya Castellano sounds like someone running on caffeine and institutional memory. She insisted on a phone call rather than video, citing what she called "screen fatigue that has become a personality trait." She'd just come from a meeting with her firm's chief compliance officer. "Brisk," she said, when I asked how it went. She did not elaborate.
Castellano is VP of Enterprise Technology at a mid-size broker-dealer in Chicago. She grew up in a family of accountants, studied computer science to escape the profession, and now spends most of her time on compliance. "My mother finds this extremely satisfying."
In late February, after reading early coverage from RSA Conference 2026, she initiated an internal audit of AI agent usage across her firm. What she found, in her words, "changed my quarter."
What prompted the audit?
Priya: A single sentence in a Microsoft security blog. It said most organizations are deploying AI agents before establishing appropriate controls for access management, data protection, or accountability.[1] I read that and thought, that's not us, we have controls. Then I sat with it for about thirty seconds and realized I had no idea whether that was true. I knew we had controls for the things I knew about. Circular reasoning. Very comforting until you notice it.
So I asked my team a simple question: how many AI agents are currently operating inside our environment?
The silence that followed was very specific. Not "we don't know the number." More like "we don't know what you mean by the question."
What did you find?
Priya: Forty-three. Six departments. Some were sophisticated: chained API calls, persistent access to client data systems, automated reporting loops. Some were basically glorified macros with ambition. But none of them had been through our standard technology onboarding process. None had an identity in our IAM system. None were logged.
In a regulated firm. How?
Priya: Because our IAM system governs people. It governs applications that go through procurement. What it doesn't govern is a portfolio analyst who connects an AI agent to our CRM through an API key they generated themselves. The agent isn't a person, and it isn't a purchased application. It just... exists now. Entro Security's CEO described it well: enterprise AI adoption rarely starts with a strategy deck; it starts with a connection.[2] Forty-three connections that nobody made a decision about.
The industry data says non-human identities outnumber human ones 40-to-1 in some enterprises.[3] I used to think that was an exaggeration.
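The discovery problem she's describing is mechanical enough to sketch. A minimal illustration, with invented log and field names (no real product's format is implied): cross-reference the credentials seen calling internal APIs against the identities the IAM system actually knows about. Whatever is left over is the shadow population.

```python
# Hypothetical sketch: flag credentials that show up in API traffic
# but have no corresponding identity in the IAM system.
# The log format and field names here are illustrative assumptions.

import json
from collections import Counter

# Identities the IAM system knows about (people and provisioned apps).
iam_known_identities = {"svc-reporting", "svc-backup", "jdoe", "asmith"}

# Sample API gateway log lines, one JSON object per line.
gateway_log_lines = [
    '{"caller_id": "svc-reporting", "endpoint": "/crm/accounts"}',
    '{"caller_id": "key-7f3a91", "endpoint": "/crm/accounts"}',
    '{"caller_id": "key-7f3a91", "endpoint": "/crm/contacts"}',
    '{"caller_id": "key-02bd44", "endpoint": "/reports/generate"}',
]

# Count calls per credential, then keep the ones IAM has never heard of.
calls = Counter(json.loads(line)["caller_id"] for line in gateway_log_lines)
unmanaged = {cid: n for cid, n in calls.items() if cid not in iam_known_identities}

for cid, n in sorted(unmanaged.items(), key=lambda kv: -kv[1]):
    print(f"unmanaged credential {cid}: {n} call(s), no IAM identity")
```

The hard part in practice isn't the comparison; it's that self-generated keys often never pass through a gateway the security team can see at all.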
What was your first reaction?
Priya: Terror. Physical terror. Here's what people outside financial services don't fully appreciate: FINRA's 2026 oversight report expects firms to retain records related to AI-assisted decisions and assign human accountability for AI outputs.[4] Same standards as human-created content. So forty-three agents running for weeks or months without logging means a retroactive records gap. I cannot go back in time and create the audit trail that should have existed.
The bell can't be unrung.
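It's worth making concrete what "logged" would have meant. A minimal sketch, with invented field names and a JSONL format chosen for illustration, of the kind of record that FINRA-style accountability implies: written at the moment the agent acts, tied to a named human, with enough detail to reconstruct the decision later. The structure is the point: this record can only be created contemporaneously, which is exactly why the gap can't be repaired after the fact.

```python
# Hypothetical sketch of an append-only audit record for agent actions.
# Field names and the JSONL storage format are illustrative assumptions.

import hashlib
import json
from datetime import datetime, timezone

def record_agent_action(log_path, agent_id, human_owner, action, payload):
    """Append one audit record: which agent acted, which human is
    accountable, what it did, when, and a hash of what it acted on."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "accountable_human": human_owner,  # a named person, not a team alias
        "action": action,
        "payload_sha256": hashlib.sha256(payload.encode()).hexdigest(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: a reporting agent leaves a record as it runs, not after.
record_agent_action(
    "agent_audit.jsonl",
    agent_id="agent-client-reporting-01",
    human_owner="jdoe",
    action="generate_client_report",
    payload="source data or report body would go here",
)
```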
Could you shut them down?
Priya: My compliance officer's first instinct. Mine too, briefly. But one team's entire client reporting process ran through an agent nobody in IT had provisioned. You shut that down Tuesday morning and clients don't get their reports Tuesday afternoon. In financial services, operational continuity is also a regulatory requirement.[5]
So you're sitting there with this beautiful paradox: you can't leave them running because of compliance, and you can't shut them down because of compliance.
I've started calling it Schrödinger's remediation. Nobody laughs when I say that. Which tells me the joke is landing exactly right.
How did leadership respond?
Priya: [long pause] I want to be careful here. A Gravitee survey found that 90% of organizations pressure security teams to loosen identity controls to accelerate AI initiatives.[6] I will say that number did not surprise me.
There's a version of this conversation where a senior leader says, "This is great, our people are innovating, let's just get governance around it." Sounds reasonable. Except "getting governance around it" for forty-three undocumented agents is a remediation program, not a memo. And somehow the person who surfaced the problem becomes the person responsible for fixing it, on a timeline set by someone who thinks that sentence means something.
Cisco talked at RSA about "onboarding" agents like employees. Does that framing resonate?
Priya: The metaphor is useful because it forces business leaders to think about identity, accountability, and access controls.[7] Where it falls apart is that employees don't onboard themselves. These agents did. They showed up, got access, started working, and nobody in HR processed the paperwork.
You're not onboarding. You're doing retroactive immigration reform.
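If the employee metaphor were taken literally, "onboarding" an agent would mean something like the registry entry below: an identity issued by the firm rather than self-generated, a named accountable owner, explicitly granted scopes, and a review date that forces periodic re-approval. All names and fields are illustrative assumptions, not any vendor's schema.

```python
# Hypothetical sketch: a firm-issued "agent onboarding" registry entry,
# modeled loosely on how employee access is provisioned and recertified.

from dataclasses import dataclass
from datetime import date

@dataclass
class AgentRegistration:
    agent_id: str              # identity issued by the firm, not self-generated
    accountable_owner: str     # the human answerable for the agent's output
    allowed_scopes: list[str]  # least-privilege access, granted explicitly
    review_date: date          # lapses unless re-approved, like an access recert

    def is_authorized(self, scope: str, today: date) -> bool:
        """Allow an action only if the scope was granted and the
        registration hasn't lapsed past its review date."""
        return scope in self.allowed_scopes and today <= self.review_date

agent = AgentRegistration(
    agent_id="agent-client-reporting-01",
    accountable_owner="jdoe",
    allowed_scopes=["crm:read", "reports:write"],
    review_date=date(2026, 9, 30),
)

print(agent.is_authorized("crm:read", date(2026, 6, 1)))    # True
print(agent.is_authorized("crm:delete", date(2026, 6, 1)))  # False
```

The difference from the status quo Castellano found is the direction of issuance: the firm grants the identity and the scopes, instead of discovering them later.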
Has this changed how you think about your role?
Priya: I used to think my job was to govern the technology decisions the organization makes. Now I think it's to govern the technology decisions the organization doesn't realize it's making. The first job has a process. The second one... I'm still figuring out what to call it. Something closer to ecology than engineering. You're managing a system that grows whether you're watching or not.
What keeps me up at night, and I mean this literally, is that we found forty-three because we looked. One in four compliance audits in 2026 is expected to include AI governance inquiries.[8] How many firms haven't looked yet?
What would you tell a peer at another firm?
Priya: Look now. Before someone else looks for you. And when you find what you find, resist the urge to treat it as a disciplinary problem. The people who deployed these agents weren't being malicious. They were being productive. That's what makes it so hard.
The threat is coming from inside the competence.
Any regrets about initiating the audit?
Priya: Every single day.
[pause]
No. Ask me again in six months.
Footnotes
1. Microsoft Security Blog, "80% of Fortune 500 use active AI Agents," February 10, 2026. https://www.microsoft.com/en-us/security/blog/2026/02/10/80-of-fortune-500-use-active-ai-agents-observability-governance-and-security-shape-the-new-frontier/
2. Security Boulevard, "Entro Security Launches AGA to Govern AI Agents," March 2026. https://securityboulevard.com/2026/03/entro-security-launches-aga-to-govern-ai-agents-and-non-human-identities-across-the-enterprise/
3. Raktim Singh (Medium), "AI Agent Identity & Zero-Trust: The 2026 Playbook for Banks," January 5, 2026. https://medium.com/@raktims2210/ai-agent-identity-zero-trust-the-2026-playbook-for-securing-autonomous-systems-in-banks-e545d077fdff
4. Smarsh, "FINRA 2026 AI Governance: Managing Agentic and Shadow AI Risks." https://www.smarsh.com/blog/thought-leadership/finra-2026-oversight-priorities-ai-communications-fraud/
5. Noma Security, "Shadow AI Agents: The New Enterprise Security Threat." https://noma.security/resources/shadow-ai-agents-enterprise-risk/
6. Gravitee, "2026 State of AI Agent Security" report.
7. UC Today, "RSA 2026: Cisco Strengthens AI Agent Security Offering." https://www.uctoday.com/security-compliance-risk/rsa-2026-cisco-strengthens-ai-agent-security-offering-to-meet-surging-adoption-demands/
8. ITECS, "Agentic AI Governance Framework 2026." https://itecsonline.com/post/agentic-ai-governance-2026-guide
