Gil Fontenot is not a real person. He is, however, a composite of every enterprise product designer who has ever been told to "make it pop" in a demo and "make it disappear" in production, in the same product review meeting. We spoke over video. He had opinions.
Twelve years ago, the hardest UX problem in enterprise software was getting people to use the thing at all. Adoption meant overcoming inertia, bad onboarding, and the gravitational pull of spreadsheets. That problem hasn't gone away, but it's been joined by something weirder: what happens when the software does things on its own, and you have to decide how much of that to show?
The agent era has surfaced a design contradiction that most enterprise software companies are currently pretending doesn't exist. Agents need to look impressive enough to survive a 45-minute sales demo. Autonomous, capable, a little magical. But in daily use, the same agent needs to be quiet enough that people forget it's running. These aren't opposite ends of a dial you can tune. They're two different products wearing the same logo.
Gil Fontenot has been designing enterprise software long enough to have survived the "big data dashboard" era, the "everything needs a chatbot" era, and now what he calls "the era of the confident ghost." Six months ago, he was promoted to lead the AI Experience team at a large workflow automation company. He describes the role as "being handed a grenade and a promotion simultaneously."
We talked about what it's like to design something that's supposed to be both spectacular and invisible.
You've described your current mandate as contradictory. What does that actually look like on a Tuesday?
Gil: I have two Figma files open at all times. One is the demo flow. One is the production flow. They are, and I want to be precise here, almost entirely different products.
The demo flow shows the agent completing a seven-step procurement workflow in about twelve seconds. It's beautiful. It's fast. The VP of Sales loves it. The production flow has confirmation dialogs, reasoning panels, confidence indicators, escalation paths, audit logging. It looks like a tax return. Nobody has ever called a tax return beautiful, but it's the thing that actually works.
I toggle between them all day. It's like being a playwright who also writes the safety manual for the theater.
The reasoning chain question seems central. Do you show users what the agent is thinking, or just the result?
Gil: This is where I lose sleep. There's a pattern called Explainable Rationale, where the agent proactively tells you why it did what it did, before you even ask.[1] In production, this is essential. Without it, users interpret perfectly valid autonomous actions as bugs. They can't build a mental model. But in a demo? You show the reasoning chain and suddenly the magic trick has a visible wire. The audience stops seeing capability and starts seeing mechanism.
There's a latency problem too. The ReAct pattern, where the agent articulates its reasoning before each step, adds an LLM call per cycle.[1] Demos optimize for speed. Production optimizes for auditability. You literally cannot have both at once. So I maintain two versions of the same interaction and try not to think too hard about what that says about my integrity.
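The tradeoff Gil describes can be reduced to a few lines. This is a minimal sketch, not any vendor's actual implementation: a demo-style loop makes one model call per step (act only), while a ReAct-style loop adds a reasoning call before each action. Every name here is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRun:
    llm_calls: int = 0
    audit_log: list[str] = field(default_factory=list)

def llm(run: AgentRun, prompt: str) -> str:
    """Stand-in for a model call; each one adds real latency."""
    run.llm_calls += 1
    return f"output for: {prompt}"

def run_demo(steps: list[str]) -> AgentRun:
    run = AgentRun()
    for step in steps:
        llm(run, f"act: {step}")  # one call per step: just do it
    return run

def run_react(steps: list[str]) -> AgentRun:
    run = AgentRun()
    for step in steps:
        rationale = llm(run, f"reason: {step}")  # the extra call per cycle
        run.audit_log.append(rationale)          # this is the audit trail
        llm(run, f"act: {step}")
    return run

steps = ["select vendor", "check budget", "draft purchase order"]
demo, react = run_demo(steps), run_react(steps)
# react makes one extra call per step: 6 calls vs 3 here. That per-step
# overhead is the latency a twelve-second demo cannot afford, and the
# audit_log it buys is the auditability production cannot do without.
```

The point is structural: the reasoning call and the audit entry are the same line item, so you cannot cut one without losing the other.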
What about confidence indicators? Showing the agent's uncertainty?
Gil: Yeah. Try putting a "73% confident" badge on a screen in front of a prospect who's evaluating three vendors. The other two vendors' agents are 100% confident because they don't show the number. You've just made honesty a competitive disadvantage.
Meanwhile, Bessemer found that 78% of AI failures are invisible. The "confidence trap," they call it. AI is confidently wrong and the user just... accepts it.[2] So the design choice to hide uncertainty is exactly the design choice that enables that failure mode. I know this. My sales team also knows we need to close Q2. We have very productive conversations where we both leave unhappy.
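The failure mode Gil is pointing at can be made concrete in a toy render function (hypothetical, not his product's code): when the confidence value is hidden, a 73%-confident action and a 98%-confident one produce identical UI, so the user has nothing to push back on.

```python
def render_action(action: str, confidence: float, show_confidence: bool) -> str:
    """Toy renderer for an agent's completed action."""
    if show_confidence:
        return f"{action} ({confidence:.0%} confident)"
    return action  # demo mode: uncertainty hidden

# Production mode surfaces the number the sales team hates:
honest = render_action("Approved vendor Acme", 0.73, show_confidence=True)

# Demo mode: a shaky action and a solid one render the same way, which is
# exactly what lets a confidently wrong result sail past the user.
shaky = render_action("Approved vendor Acme", 0.73, show_confidence=False)
solid = render_action("Approved vendor Beta", 0.98, show_confidence=False)
```

Hiding the number doesn't remove the uncertainty; it only removes the user's ability to act on it.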
Let's talk about identity. Does your agent have a name?
Gil: We went back and forth on this for three months. Three months! On a name. The demo team wanted a persona. Something like "meet your AI teammate, Nova." Avatar, personality, the whole thing. And I get it. It demos incredibly well. You say "Nova, handle the vendor onboarding for Acme Corp" and it feels like the future arrived early.
But there's this question from the Gensler workplace research that I keep returning to: is your AI teammate portable, moving between meetings and conversations like a colleague, or does it live in the project room, keeping track of context like a shared whiteboard?[3] Those are genuinely different interface models. The named portable teammate has a high trust ceiling but also high uncanny valley risk. The anonymous ambient capability, just a feature of the platform, adopts faster because there's no personality to misread.
We went anonymous. Sales was not thrilled. Sales is frequently not thrilled with me.
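Stripped of avatars and names, the two interface models Gil contrasts come down to what keys the agent's memory. A rough sketch, with hypothetical structure: a portable teammate keys context to a person and carries it across projects; an ambient capability keys context to the project room and is shared by whoever works there.

```python
from collections import defaultdict

portable: dict[str, list[str]] = defaultdict(list)  # memory keyed by user
ambient: dict[str, list[str]] = defaultdict(list)   # memory keyed by project

# The named teammate remembers *you*, across every room you enter...
portable["user:gil"].append("prefers net-30 payment terms")

# ...while the ambient agent remembers the *room*, for everyone in it.
ambient["project:acme-onboarding"].append("vendor shortlist finalized")

# Switching projects: the portable teammate's context follows the user,
# but the ambient agent in a new room starts from nothing.
follows_user = portable["user:gil"]
new_room = ambient["project:beta-rollout"]  # empty
```

The trust and uncanny-valley tradeoffs follow from the key: a per-user memory invites a relationship; a per-room memory stays a feature.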
The Gensler survey found that AI power users actually spend less time working alone and more time learning and socializing.[3] Does that change how you think about invisible design?
Gil: That finding rewired something for me. Because invisibility isn't minimalism as an aesthetic flex. The real question is: when the agent disappears, what becomes more visible? Ideally, human attention. Human judgment. Time for the harder thinking. The agent handles the procurement workflow so you can spend that hour mentoring a junior analyst. That's the real product. The agent is just negative space.
But there's research showing early AI adopters feel less productive even when output rises.[4] You replace a bunch of small completable tasks with fewer, harder, less measurable ones. Easy tasks meant easy wins. So I've accidentally designed away people's sense of accomplishment. I'm sitting in a design review thinking, "We shipped a feature that makes people's work better and their day worse."
Deloitte coined "workslop" for when agents make work worse. Have you seen that?
Gil: I've built that. We shipped an agent last year that autonomously organized project files. Sounded great. In practice, it reorganized things according to its own logic, and people couldn't find anything. Agentic sludge, friction disguised as automation.[5] The demo was flawless, by the way. Standing ovation. Actual applause from people who would never have to use it.
So what wins? The impressive agent or the invisible one?
Gil: I think the companies that win long-term are the ones whose demos look a little boring. Where the sales engineer has to say "I know this doesn't look like much, but let me show you the audit trail." Where the confidence meter is right there on screen, and the prospect thinks huh, that's honest.
The problem is that requires a buyer who's been burned before. Most buyers haven't been burned yet. They're still in the magic trick phase. So the market is selecting for impressive over trustworthy, and that selection pressure shapes what designers like me actually build. Every week I'm making small concessions to the demo, and every concession makes the production product a little less honest.
Someone said "transparency is the new polish."[6] I believe that. I just can't prove it in a quarterly business review. So I keep my two Figma files open and wait for the correction that'll make the boring version the right sales pitch too.
Footnotes
1. Smashing Magazine, "Designing For Agentic AI: Practical UX Patterns For Control, Consent, And Accountability," February 2026. https://www.smashingmagazine.com/2026/02/designing-agentic-ai-practical-ux-patterns/
2. Bessemer Venture Partners, "The Confidence Trap," March 2026. Via editorial domain research.
3. Gensler Research Institute, Global Workplace Survey 2026. https://www.gensler.com/gri/global-workplace-survey-2026
4. World Economic Forum research on early AI adopter psychological effects. Via editorial domain research.
5. Deloitte, "Workslop" concept and agentic sludge framework. Via editorial domain research.
6. UX Pilot, "9 Mobile App Design Trends for 2026." https://uxpilot.ai/blogs/mobile-app-design-trends
