MuleSoft launched Agent Scanners in January 2026 to help enterprises answer what should be a straightforward question: what agents do we actually have running in production, consuming resources, accessing data sources, handling live workflows?
Agent Scanners is infrastructure to solve a visibility problem acute enough that major vendors are building products around it. When you need dedicated scanning tools just to see what you've deployed, the deployment wave has outpaced operational infrastructure.
Adoption metrics tell the story. While 79% of enterprises report adopting AI agents, only 23% are actively scaling them. Salesforce's Gavin Baker predicts hundreds of agents per employee in 2026, most sitting idle: "impressive but invisible." Deployment capability has outpaced operational use.
For web agents, this visibility gap compounds. When an agent monitoring hotel inventory has learned to handle authentication across fifty regional variations of the same booking platform, that knowledge lives in configuration files, learned behaviors, and monitoring patterns. The agent knows which session management approaches work, which rate limiting strategies to apply, which bot detection patterns to expect. If another team can't discover that the agent exists, they rebuild not just the functionality but the infrastructure solutions to problems the web actively resists solving.
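To make that concrete, here is a minimal sketch of what such codified knowledge could look like as structured data rather than scattered config files. Every name and field here is hypothetical, invented for illustration; nothing below reflects MuleSoft's schema:

```python
from dataclasses import dataclass, field

@dataclass
class SiteCapability:
    """Learned handling for one regional variant of a monitored site."""
    site_id: str                # e.g. "booking-platform/de"
    auth_strategy: str          # which session management approach works here
    rate_limit_rps: float       # sustainable request rate before throttling
    bot_detection: list[str] = field(default_factory=list)  # patterns to expect

# One agent's accumulated knowledge across regional variants.
hotel_inventory_agent = {
    "agent": "hotel-inventory-monitor",
    "capabilities": [
        SiteCapability("booking-platform/de", "cookie-session", 0.5,
                       ["fingerprint-js", "captcha-on-login"]),
        SiteCapability("booking-platform/fr", "token-refresh", 1.0,
                       ["rate-limit-403"]),
        # ...dozens more variants, each learned in production
    ],
}
```

Once the knowledge has this shape, it can be published and queried; locked in an agent's private config, it can only be re-learned.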
Agents get deployed across different cloud environments, different models and data sources, and isolated team silos without cross-visibility. Manual tracking fails quickly: registries become stale spreadsheets that can't track what's actually running, and teams build agents in isolation, duplicating effort because they don't know what already exists internally. The infrastructure investment in making web agents reliable (handling authentication failures, managing session state across thousands of concurrent browser instances, coordinating responses when sites change their bot detection) compounds only if that investment is discoverable.
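The difference between a stale spreadsheet and a live registry is whether a team can ask "does this already exist?" programmatically before building. A hedged sketch, using an in-memory registry and invented names (a real system would back this with a service, not a list):

```python
# Minimal in-memory registry; entries would be written at deploy time, not by hand.
registry = [
    {"agent": "hotel-inventory-monitor",
     "capabilities": ["booking-platform/de", "booking-platform/fr"]},
    {"agent": "flight-price-tracker",
     "capabilities": ["airline-portal/us"]},
]

def find_agents(capability_prefix: str) -> list[str]:
    """Return agents already handling a given site family."""
    return [entry["agent"] for entry in registry
            if any(cap.startswith(capability_prefix)
                   for cap in entry["capabilities"])]

# Before building a new booking-platform agent, check what exists.
print(find_agents("booking-platform/"))  # ['hotel-inventory-monitor']
```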
MuleSoft's timing matters. Agent Scanners arrives for organizations that have deployed enough agents that visibility, not deployment, has become the bottleneck; the management problem only surfaces once deployments accumulate.
Agent infrastructure needs discovery built in from the start, rather than bolted on through governance frameworks later. When web agents monitor thousands of sites and codify what they learn from site structure changes, that accumulated infrastructure knowledge needs to be discoverable by default. Discovery means making visible what agents do and how they achieve reliability: which authentication strategies work, which site structures they've learned to navigate, which bot detection patterns they handle. When that infrastructure knowledge is discoverable, the next deployment builds on proven patterns instead of starting from scratch.
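One way to read "built in from the start": registration becomes a side effect of deployment rather than a later governance step. A sketch of that pattern, with a hypothetical register() hook and a JSONL file standing in for a registry service (no real MuleSoft API is implied):

```python
import json
import time

REGISTRY_PATH = "agent_registry.jsonl"  # stand-in for a registry service

def register(agent_name: str, capabilities: dict) -> None:
    """Append the agent's capability manifest to the registry."""
    record = {
        "agent": agent_name,
        "registered_at": time.time(),
        "capabilities": capabilities,
    }
    with open(REGISTRY_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

def deploy_agent(agent_name: str, capabilities: dict) -> None:
    # ...provision the agent itself...
    register(agent_name, capabilities)  # discovery is not an optional extra step

deploy_agent("hotel-inventory-monitor", {
    "sites": ["booking-platform/de", "booking-platform/fr"],
    "auth_strategies": ["cookie-session", "token-refresh"],
    "bot_detection_handled": ["fingerprint-js", "rate-limit-403"],
})
```

The design point is the coupling: because register() runs inside deploy_agent(), an agent cannot exist in production without being discoverable, which is the opposite of reconstructing an inventory with scanners after the fact.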
Salesforce's predicted utilization gap traces back to whether deployed agents can be found by the people who need them, and whether the infrastructure knowledge they've accumulated is accessible to teams solving similar problems. MuleSoft building dedicated tooling for this shows where the bottleneck sits: making deployed infrastructure visible and reusable in production workflows.
Things to follow up on...
- The pilot-to-production gap: MIT's research found that 95% of AI pilots fail to reach production or deliver measurable P&L impact, with deployment funnels showing only 5% moving from evaluation to production.
- Shadow AI usage patterns: While only 40% of companies purchased official LLM subscriptions, workers from over 90% of companies reported regular use of personal AI tools for work, creating an unmeasured governance challenge.
- Infrastructure cost multipliers: Hidden costs often equal or exceed platform subscription fees, with organizations needing to budget 50-100% additional costs beyond basic platform pricing for realistic implementation planning.
- Data quality bottlenecks: A mere 12% of organizations report data of sufficient quality and accessibility for AI, with 70-85% of AI project failures stemming directly from data architecture issues rather than model capabilities.

