Somewhere inside NIST, two teams are deciding what "agent identity," "authorization scope," and "agent security" mean at the federal level. Those definitions will harden into compliance language, procurement requirements, and the tooling categories vendors build to. The vocabulary is still soft. It won't stay that way. And the first comment window closes in twelve days.
The RFI on AI Agent Security (docket NIST-2025-0035 on regulations.gov, closing March 9) scopes "agent" narrowly: systems that take autonomous actions affecting external state. Specifically, things that do things. The questions read like a design review checklist for infrastructure nobody's finished building. How do you test whether an agent is operating within authorized scope? What does a rollback mechanism actually look like? What human oversight controls work in production? They're asking for measurement approaches and empirical data.
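To make the scope-testing question concrete: one runtime answer is a check that sits between the agent and anything that touches external state, logging every attempt either way so the log itself becomes the test surface. A minimal sketch in Python; `AgentScope` and `execute` are hypothetical names, not anything the RFI specifies.

```python
from dataclasses import dataclass, field

@dataclass
class AgentScope:
    """Hypothetical authorized-scope record for one agent."""
    agent_id: str
    allowed_actions: frozenset[str]          # e.g. {"inventory.read", "price.read"}
    audit_log: list[dict] = field(default_factory=list)

    def check(self, action: str) -> bool:
        # Record every attempt, allowed or not: an after-the-fact
        # scope test replays and inspects this log.
        permitted = action in self.allowed_actions
        self.audit_log.append({"action": action, "permitted": permitted})
        return permitted

def execute(scope: AgentScope, action: str, handler) -> object:
    """Refuse out-of-scope actions before they mutate external state."""
    if not scope.check(action):
        raise PermissionError(f"{scope.agent_id}: '{action}' outside authorized scope")
    return handler()

scope = AgentScope("price-watcher-7", frozenset({"price.read"}))
execute(scope, "price.read", lambda: "ok")       # permitted, logged
try:
    execute(scope, "listing.write", lambda: "mutation")
except PermissionError as e:
    print(e)                                      # denied, also logged
```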
The concept paper on Agent Identity and Authorization (closing April 2, comments via the project page or AI-Identity@nist.gov) goes deeper. NIST decomposes agent identity into four parts: identification, authorization, access delegation, and logging. Worth pausing on that. They separated "access delegation" from "authorization," which means they're already thinking about the chain-of-accountability problem: when an agent acts on a user's behalf, who is responsible for what it does? That's further along than most organizations, where only about 22% treat agents as independent identity-bearing entities at all. Most still use shared service accounts or hardcoded credentials.
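One way to read that split: authorization answers what an identity may do; delegation answers on whose behalf, and granted by whom. A sketch of a delegation chain as data, with hypothetical field names (the concept paper doesn't prescribe a shape):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class Delegation:
    """Hypothetical delegation link: who granted which scopes to which agent."""
    delegator: str            # accountable principal, e.g. "user:alice"
    agent_id: str             # acting identity, e.g. "agent:restock-bot"
    scopes: tuple[str, ...]   # tuple keeps the frozen record hashable
    expires_at: datetime

def accountable_party(chain: list[Delegation]) -> str:
    """Walk a sub-delegation chain back to the original principal.

    Assumes chain[0] is the root grant and each later link was issued
    by the previous link's agent. Keeping delegation separate from
    authorization is what makes this walk possible at all.
    """
    return chain[0].delegator

expiry = datetime.now(timezone.utc) + timedelta(hours=1)
chain = [
    Delegation("user:alice", "agent:restock-bot", ("inventory.read",), expiry),
    Delegation("agent:restock-bot", "agent:price-check", ("price.read",), expiry),
]
print(accountable_party(chain))  # -> user:alice
```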
The concept paper also asks a question that anyone who's operated agents will have feelings about: can existing standards like OAuth 2.0, SPIFFE/SPIRE, and OpenID Connect extend to cover agents, or does something new need to exist?
OAuth was designed for a world where a human clicks "Allow" and a service acts within a defined scope. An agent that checks a competitor's price for thirty seconds and disappears is one kind of identity problem. An agent that monitors inventory across thousands of listings for weeks, accumulating context and permissions along the way, is a completely different one. The concept paper asks directly whether agent identities should be ephemeral or persistent, and what metadata is essential. Design choices here determine whether future identity infrastructure matches how agents actually behave or encodes assumptions that break on contact with production. If you've built agent authorization and discovered something OAuth doesn't capture, that's the kind of comment this process needs.
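To make the ephemeral-versus-persistent question concrete, here's a hypothetical identity record for each case. The field names and the SPIFFE-style IDs are illustrative assumptions, not drawn from the concept paper or any standard:

```python
from dataclasses import dataclass

@dataclass
class AgentIdentity:
    """Hypothetical fields an agent identity record might carry."""
    spiffe_id: str        # e.g. "spiffe://example.org/agent/price-check"
    delegator: str        # accountable principal
    persistent: bool      # survives across tasks, or dies with one
    max_lifetime_s: int
    scopes: list[str]     # persistent agents accumulate these over time

# The 30-second price check: identity can die with the task.
ephemeral = AgentIdentity(
    "spiffe://example.org/agent/price-check", "user:alice",
    persistent=False, max_lifetime_s=60, scopes=["price.read"],
)

# The weeks-long inventory monitor: its scope list drifts as it runs,
# which is exactly the behavior a one-shot OAuth consent never models.
persistent = AgentIdentity(
    "spiffe://example.org/agent/inventory-watch", "user:alice",
    persistent=True, max_lifetime_s=60 * 60 * 24 * 30, scopes=["inventory.read"],
)
persistent.scopes.append("listing.read")  # permission accumulated mid-run
```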
NIST says explicitly: you don't need to address every question. Find the ones where you have specific experience, describe what you've seen, explain what worked or didn't. Production experience and empirical data carry the most weight. Everything submitted becomes public record, so skip anything confidential.
NIST is also hosting listening sessions in April for healthcare, finance, and education. Interest deadline is March 20 via caisi-events@nist.gov.
- March 9 — RFI on AI Agent Security (submit via regulations.gov, docket NIST-2025-0035).
- March 20 — Interest deadline for April listening sessions (caisi-events@nist.gov).
- April 2 — Concept paper on Agent Identity and Authorization (submit via project page or AI-Identity@nist.gov).
When NIST ran the Cybersecurity Framework process, RFI responses directly set the agenda for subsequent workshops and drafts. That process incorporated over 15,000 comments across its development. The agent identity process is near its beginning. The categories haven't solidified. Fewer practitioners engaging means the framework ends up reflecting how things should work on paper. Production breakage stays unrepresented. This is a design review on infrastructure that doesn't exist yet. Twelve days on the first window.
Things to follow up on...
- The governance gap, quantified: Gravitee's State of AI Agent Security 2026 report found that 53% of the three million agents operating in US and UK corporations are ungoverned, with only 14.4% of teams having full security approval for their deployments.
- Agent identity tooling emerges: Veza launched its Access Agents platform today, positioning itself as an agent identity control plane that maps identity risks to the NIST AI Risk Management Framework across humans, non-human identities, and AI agents.
- MCP moves to neutral ground: Anthropic donated the Model Context Protocol to the Linux Foundation's new Agentic AI Foundation, shifting MCP from a single-company project to industry-governed infrastructure with 97 million monthly SDK downloads and 10,000 active servers.
- Supply chain attacks already here: Security researchers found malicious skills on the OpenClaw public registry, with Censys identifying over 21,000 exposed instances and Trend Micro disclosing active botnet discussions targeting agent skill registries.

