When a person evaluates a vendor, they draw on signals that resist easy description. Brand recognition. A colleague's offhand recommendation. The feel of a well-designed website. A gut sense built from years of buying decisions. Imprecise, all of it. And all of it works.
Agents can't use any of it.
An AI agent comparing vendors needs trust signals that are machine-readable, verifiable, and computable. Does this merchant hold the certifications it claims? Is its pricing current? Will it honor published terms? For a human buyer, these questions get answered through experience and the slow accumulation of market knowledge. For an agent, they need to resolve into structured data, cryptographic proofs, or attestations from a recognized third party.
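To make "machine-readable and computable" concrete, here is a minimal sketch of what such a trust signal could look like to an agent: a third-party attestation reduced to structured fields and a pair of checkable predicates. The schema and names (`Attestation`, `is_usable`, "ExampleTrustCo") are illustrative, not drawn from any real standard.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical schema: a third-party attestation an agent can verify
# mechanically. All field names are illustrative.
@dataclass
class Attestation:
    merchant_id: str
    claim: str      # e.g. "PCI-DSS-certified"
    attester: str   # the third party vouching for the claim
    expires: date

def is_usable(att: Attestation, trusted_attesters: set[str], today: date) -> bool:
    """The agent's trust decision collapses into two computable predicates:
    is the attester recognized, and is the attestation still current?"""
    return att.attester in trusted_attesters and today <= att.expires

att = Attestation("merchant-42", "PCI-DSS-certified", "ExampleTrustCo", date(2026, 1, 1))
print(is_usable(att, {"ExampleTrustCo"}, date(2025, 6, 1)))  # True
print(is_usable(att, {"SomeOtherCo"}, date(2025, 6, 1)))     # False
```

Note what the second call shows: the answer depends entirely on which attesters the agent has been configured to recognize, which is exactly where the leverage described below comes from.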
That third party gains leverage that outlasts any single transaction. Whoever defines what "trusted" means for agent commerce shapes who gets to participate at all.
The most concrete implementation so far is Visa's Trusted Agent Protocol, unveiled in October 2025 and developed with a major CDN provider. TAP creates a framework where AI agents are "onboarded" and "certified," then use cryptographic signatures to prove their legitimacy to merchants. Hundreds of agent-initiated transactions have been completed. Over 100 partners are involved.
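The certify-then-prove flow can be sketched in a few lines. This is a toy model, not TAP itself: real deployments use asymmetric signatures, while HMAC stands in here so the example runs on the standard library alone, and every name (`onboard`, `registry`, `agent-7`) is invented for illustration.

```python
import hashlib
import hmac

# Toy model of a certify-then-prove flow. HMAC is a stand-in for the
# asymmetric signatures a real protocol would use; names are illustrative.
registry: dict[str, bytes] = {}  # certifier's record of onboarded agents

def onboard(agent_id: str) -> bytes:
    """Certifier issues a key to an agent and records it."""
    key = hashlib.sha256(b"demo-secret:" + agent_id.encode()).digest()  # toy derivation
    registry[agent_id] = key
    return key

def sign_request(key: bytes, payload: bytes) -> str:
    """Agent signs a transaction payload to prove its certified identity."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def merchant_verifies(agent_id: str, payload: bytes, signature: str) -> bool:
    key = registry.get(agent_id)
    if key is None:          # uncertified agents have no key: rejected outright
        return False
    return hmac.compare_digest(sign_request(key, payload), signature)

key = onboard("agent-7")
sig = sign_request(key, b"order:sku-123")
print(merchant_verifies("agent-7", b"order:sku-123", sig))      # True
print(merchant_verifies("rogue-agent", b"order:sku-123", sig))  # False
```

The gatekeeping is visible in the code: the `registry.get` lookup is the gate, and whoever controls the registry controls participation.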
It's genuinely useful infrastructure. It also positions a payment network as the root of trust for an entire category of commerce. Agents that aren't certified can't participate. The architecture is open. The gate has a keeper.
This pattern has precedent. Certificate authorities once numbered in the hundreds. Consolidation left a handful dominating the market, with one entity alone responsible for a third of all SSL certificates at its peak. When that CA was found to have mis-issued certificates, browser vendors revoked trust in its root keys. The CA sold its business. The market concentrated further.
Once trust arbitration consolidates, the arbiters become the infrastructure. Displacing them historically requires a crisis.
Credit rating agencies tell a complementary story about what happens when the entities being evaluated pay the evaluators. Three agencies control 95% of the rating business. The issuer-pay model, adopted in the early 1970s, created a conflict so deep that agencies were advising issuers on how to structure bonds to achieve maximum ratings. Conflicts of interest will emerge in agent trust infrastructure. They always do when evaluators and the evaluated share a commercial relationship. The real variable is whether the architecture being built now makes those conflicts structural or containable.
Decentralized alternatives exist on paper. The W3C published Verifiable Credentials 2.0 as a formal standard in May 2025. Researchers have prototyped systems where agents carry decentralized identifiers and cryptographic attestations, though the same work shows these schemes break down when an agent's language model alone is responsible for enforcing security procedures. These efforts are real, and NIST has begun its own AI Agent Standards Initiative. But they remain at the "technically feasible" stage while the commercial implementations are completing transactions.
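For a sense of what the decentralized path looks like, here is a credential shaped like the W3C Verifiable Credentials 2.0 data model. The envelope structure (`@context`, `type`, `issuer`, `credentialSubject`) follows the published spec; the specific issuer and subject identifiers are illustrative, and the check below is structural only, since a real verifier would also validate a cryptographic proof, elided here.

```python
# A credential shaped like the W3C Verifiable Credentials 2.0 data model.
# Issuer and subject identifiers are illustrative ("did:example:" is the
# spec's own placeholder DID method).
credential = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "MerchantCertification"],
    "issuer": "did:example:certifier",
    "validFrom": "2025-06-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:merchant-42",
        "certification": "PCI-DSS",
    },
}

def looks_like_vc(doc: dict) -> bool:
    """Structural checks only; proof verification is deliberately omitted."""
    return (
        "https://www.w3.org/ns/credentials/v2" in doc.get("@context", [])
        and "VerifiableCredential" in doc.get("type", [])
        and "issuer" in doc
        and "credentialSubject" in doc
    )

print(looks_like_vc(credential))  # True
```

The design difference from the registry model is who can issue: any party with a DID can mint credentials, and each verifier decides independently which issuers to trust, rather than inheriting one certifier's list.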
The window where this gets shaped is open now. Trust infrastructure for agent commerce is a governance question with commercial consequences measured in trillions, and it's being built by the organizations with the engineering resources and market position to ship first. That isn't inherently bad. Somebody has to build it. But the history of concentrated trust arbitration suggests a pattern: the entities that define trustworthiness become load-bearing walls in the market's architecture. Easy to build around. Very difficult to remove.
Which means the choices being made in the next twelve to eighteen months have an unusually long half-life. The architecture of agent trust, once established, will shape what kinds of commerce are possible and who gets access. That's worth understanding now, while the concrete is still wet.

