OpenAI's Frontier platform launched this month with Forward Deployed Engineers (FDEs) who embed with enterprise customers to get agents into production. These technical experts work alongside teams through deployment, staying until systems operate reliably. The requirement reveals something about production knowledge: it hasn't yet consolidated into platforms.
Salesforce committed to building a 1,000-strong FDE team, with FDE job postings up 800% between January and September 2025. Deploying at scale requires embedding humans with every customer because the infrastructure lacks the self-service capabilities teams need; the FDE model fills that gap with human expertise. What makes this knowledge hard to productize—and what does the FDE model cost platforms betting on eventual consolidation?
For web automation specifically, production edge cases only emerge at scale across heterogeneous sites. An authentication flow works for 99 hotel chains, then fails on the 100th because their session management differs in ways you couldn't anticipate. Bot detection triggers differently when 5,000 sessions hit a site from US IPs versus 5,000 from European IPs. The detection logic stays the same, yet CDN routing patterns create timing signatures that look suspicious. Data extraction patterns handle most e-commerce sites, then break on regional variations where product catalogs are structured differently.
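One way those timing signatures surface in practice: a rejection that arrives faster than any origin server could respond often came from a CDN edge, not the site itself. A minimal sketch of that heuristic, with all names (`SessionResult`, `looks_like_cdn_block`) and the 50 ms threshold being illustrative assumptions rather than any platform's actual detection logic:

```python
from dataclasses import dataclass

@dataclass
class SessionResult:
    status: int         # HTTP status returned by the attempt
    latency_ms: float   # observed round-trip time
    egress_region: str  # where the request left from, e.g. "us" or "eu"

def looks_like_cdn_block(r: SessionResult) -> bool:
    """Hypothetical heuristic: a 403 that returns in tens of
    milliseconds likely came from a CDN edge rejecting the request
    before it ever reached the origin server."""
    return r.status == 403 and r.latency_ms < 50
```

Distinguishing edge-level blocks from origin errors matters because the two failures call for different responses: the first suggests changing egress, the second suggests the request itself is wrong.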
FDEs discover these patterns through deployment work. Which site behaviors recur versus which are truly idiosyncratic. Which authentication sequences fragment by geography and which remain stable. Which failure signatures indicate temporary issues versus architectural limits. This operational knowledge—which CDN behaviors break authentication flows, which regional variations require special handling, which site structure changes cascade into extraction failures—feeds back into platform development.
"FDEs share field feedback with Research and Product to influence model and product development"
Platforms need field deployment to replicate this data collection loop. Internal testing alone misses the heterogeneity of production environments.
Companies deploying more FDEs accumulate more production knowledge faster. If OpenAI's FDEs discover that certain Japanese hotel chains require residential proxy rotation while European chains tolerate datacenter IPs, that pattern becomes platform intelligence competitors don't have. The operational knowledge compounds: each deployment surfaces edge cases that, once encoded, make subsequent deployments more reliable. Platforms with larger FDE teams could pull ahead through superior understanding of how the web actually behaves at production scale.
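What "encoding a pattern into platform intelligence" might look like concretely: a policy table mapping site class and region to an egress type. The table contents mirror the hypothetical Japanese-versus-European hotel example above; `PROXY_POLICY` and `proxy_for` are assumed names, not any vendor's API:

```python
# Hypothetical policy table: field-discovered rules like "these hotel
# chains need residential egress in Japan" become reusable defaults.
PROXY_POLICY: dict[tuple[str, str], str] = {
    ("hotel", "jp"): "residential",
    ("hotel", "eu"): "datacenter",
}

def proxy_for(site_class: str, region: str) -> str:
    # Unknown combinations default to residential egress, the safer
    # (and costlier) choice until field data says otherwise.
    return PROXY_POLICY.get((site_class, region), "residential")
```

Each FDE discovery adds a row; each row makes the next deployment cheaper. That is the compounding mechanism in miniature.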
The economics suggest platforms are subsidizing deployment to accelerate this knowledge accumulation. FDE salaries run $120,000-$180,000 base plus equity, and embedding experts with every enterprise customer doesn't scale linearly. If the field data FDEs collect eventually consolidates into self-service infrastructure, the investment pays off through reduced deployment friction. The break-even calculation depends on how quickly operational patterns can be encoded—and whether web automation's adversarial environment allows patterns to stabilize at all.
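The break-even calculation can be made explicit. Using the article's salary range for the cost side, and treating the per-deployment savings figure as a pure assumption:

```python
def fde_breakeven_deployments(annual_cost: float,
                              saving_per_deployment: float) -> float:
    """Deployments per year at which encoded field knowledge
    offsets one FDE's annual cost. Both inputs are estimates;
    the salary range below is from the article, the savings
    figure is an illustrative assumption."""
    return annual_cost / saving_per_deployment

# A $150k-base FDE (mid-range of $120k-$180k, ignoring equity and
# overhead) paying back via $10k of avoided friction per deployment:
# 150_000 / 10_000 -> 15 deployments per year to break even.
```

The sensitivity is the point: if patterns stabilize and each one keeps paying off across future deployments, the denominator grows and break-even drops; if the adversarial environment keeps invalidating encoded patterns, the savings are one-shot and the math gets much worse.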
The gap between the 11% of organizations with agents in production and the 38% running pilots reflects this: moving from controlled testing to production scale surfaces edge cases that require operational knowledge platforms don't yet provide.
"FDEs codify working patterns into tools, playbooks, or building blocks others can use"
Deployment work becomes discovery work—identifying what needs to become platform features.
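A codified playbook entry can be as simple as a lookup from failure signature to response, capturing the temporary-versus-architectural distinction FDEs learn in the field. The signature names and categories here are invented for illustration:

```python
# Hypothetical playbook entry: failure signatures seen in the field,
# encoded so later deployments can triage without an FDE on call.
TRANSIENT = {"rate_limited", "cdn_timeout", "session_expired"}
ARCHITECTURAL = {"js_challenge_loop", "extraction_schema_mismatch"}

def triage(signature: str) -> str:
    if signature in TRANSIENT:
        return "retry"        # temporary issue: back off and retry
    if signature in ARCHITECTURAL:
        return "escalate"     # architectural limit: needs humans
    return "investigate"      # unknown pattern: new field knowledge
```

The third branch is the discovery loop itself: every "investigate" outcome is a candidate for a new row in one of the two sets.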
Whether that knowledge can fully consolidate remains open. Web surfaces change constantly. Bot detection evolves in response to automation patterns. Authentication mechanisms fragment by geography and regulatory environment. The operational knowledge required for reliable deployment might always demand humans who can rapidly comprehend new edge cases. The FDE model works. Its necessity shows how much production knowledge still has to be discovered through field deployment, and leaves open whether that discovery process ever completes.
Things to follow up on...
- Palantir's pioneering model: The company had more FDEs than software engineers until 2016, establishing the deployment pattern that OpenAI and others now follow.
- Web workflow fragmentation: Knowledge workers routinely navigate eight to twelve different web applications during standard workflows, with 25-30% of time consumed by data entry and validation tasks.
- Authentication complexity at scale: Web automation requires handling various API protocols and authentication mechanisms, including OAuth, API keys, and SAML, across diverse data formats and documentation levels.
- The production deployment gap: Despite widespread interest, only 11% of organizations have AI agents in production, while 38% run pilots and 35% have no agentic strategy.

