Tomás Ferreira is not a real person. He's a composite, assembled from practitioner interviews, employment data, and the lived details that surface when you read enough first-person accounts of engineers navigating the agent transition. We gave him a name, a backstory, and a coffee order (cortado, oat milk, no sugar) because structural shifts are easier to think about when they have a face. Everything he describes is grounded in documented trends. The opinions are his own, insofar as a fictional person can have opinions.
Eighteen months ago, a mid-size healthtech company in Denver had engineering teams that looked like engineering teams everywhere: a lead, a couple of senior devs, two or three juniors, a shared Slack channel full of PR reviews and arguments about naming conventions. Today that same company runs what McKinsey calls a "two-shift digital factory," humans on days, agents on nights.1 The team Tomás Ferreira led has gone from five people to two people and a swarm of autonomous coding agents that complete roughly twenty actions before requiring human input.2
Ferreira, 38, spent his first four years in software doing QA. He moved into development, then team leadership. Now he does something that doesn't have a job title yet.
We spoke over video on a Tuesday morning. He'd already been through his daily sprint review, a ritual that used to happen every two weeks and now happens before 9 AM.1
You started in QA. Does that feel relevant now?
Tomás: Oh, it's the funniest thing. I spent years trying to escape QA. Years. I wanted to be the person building things, not the person poking holes in what other people built. I got there. I was a real developer. I led a team. And now my entire job is basically QA again. Except the "other people" are agents, and there's nobody to argue with when they make a weird architectural choice. You just sit there, alone with the pull request, wondering why it chose that abstraction.
Walk me through a morning.
Tomás: I get in, and there's this pile. That's the only word for it. Overnight the agents have been running: refining features, writing tests, flagging risks. By the time I sit down there are pull requests waiting, sometimes dozens. Each one looks clean. Tests pass. Formatting is correct.
And my job is to figure out which ones are actually fine and which ones are fine the way a student's essay is fine when they clearly understood the assignment but missed the point entirely.
The hard part, and I don't think people outside this work get this, is that you're reconstructing intent you didn't set.3 When I reviewed my team's code, I knew what they were trying to do because we'd talked about it. I knew Priya's tendency to over-engineer auth flows. I knew James would forget edge cases on Fridays. The code had a personality. Agent output has a surface. And the surface is always confident.
The Anthropic report says engineers use AI in about 60% of their work but can only fully delegate maybe 0 to 20% of tasks. Does that match?2
Tomás: Yeah. And the gap between those numbers is where my actual job lives. The 60% is the easy part: you point the agent at something, it does a version of it, you clean it up. Delegation is where it gets interesting, because delegation requires you to have already done the thinking. You need to know what "done" looks like before you hand anything off. You need to specify constraints the agent won't infer. Don't break this API. Keep latency under 200ms. Don't introduce a new dependency just because you found a clever one.4
That specification work? That's the engineering now. The code is a byproduct.
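(An aside for the engineers reading: the constraints Ferreira rattles off can be made concrete as machine-checkable gates that run before a human ever opens the agent's pull request. The sketch below is hypothetical, not a tool he names; the `PullRequest` shape, the frozen `billing/v1` API, and the 200ms budget are all illustrative assumptions.)

```python
# Hypothetical sketch: delegation constraints as pre-review gates.
# Each check encodes one of the constraints an agent won't infer on
# its own: don't break this API, no new dependencies, latency budget.
from dataclasses import dataclass, field


@dataclass
class PullRequest:
    """Minimal stand-in for an agent-authored PR's metadata."""
    touched_apis: set[str] = field(default_factory=set)
    new_dependencies: set[str] = field(default_factory=set)
    p95_latency_ms: float = 0.0


def check_constraints(pr: PullRequest) -> list[str]:
    """Return constraint violations; an empty list means 'ready for human review'."""
    violations = []
    if "billing/v1" in pr.touched_apis:  # "don't break this API"
        violations.append("touches frozen API billing/v1")
    if pr.new_dependencies:  # "no new dependency just because it's clever"
        violations.append(f"adds dependencies: {sorted(pr.new_dependencies)}")
    if pr.p95_latency_ms > 200:  # "keep latency under 200ms"
        violations.append(f"p95 latency {pr.p95_latency_ms}ms over 200ms budget")
    return violations
```

The point of writing the gates down is the one Ferreira makes: "done" has to be defined before delegation, not reconstructed afterward from a confident-looking diff.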
Do you think of what you do as managing?
Tomás: I manage two humans still, and that part feels normal. But the agent side... managing implies you're shaping behavior over time. Developing someone. Agents don't develop. They execute, then they execute again from zero. No institutional memory. Every session starts fresh, like a very productive amnesiac.
Someone at MIT nailed it: agents are "owned like assets but act in ways that require oversight, akin to employees."5 I own them like tools but I'm accountable for them like they're people. Except I can't fire them, can't promote them, and they never learn from last Tuesday.
So what is the job?
Tomás: I decompose problems into pieces an agent can handle reliably. I set up constraints so the output is verifiable. I review what comes back. And I make judgment calls about stuff that's technically correct but wrong in ways only someone who's been in this codebase for three years would catch.
That last part is the whole job. Everything else is scaffolding.
One engineer described this as getting paid for "the last 10%."6
Tomás: That's generous. Some days it feels like 3%.
What do you miss?
Tomás: (long pause)
I miss being confused. That sounds stupid. But when I was writing code, I'd hit walls. I'd try something, it wouldn't work, I'd try something else. That process of being stuck and unsticking yourself is how you learn the system. It's how you build the intuition that lets you look at agent output now and say "something's off here."
And I worry about that, because I'm spending down a bank account I'm not depositing into anymore.
I also miss my team. I had five people. Two of the juniors were let go about a year ago. One senior left on her own, said she didn't recognize the job anymore.7 The two people I still work with are great, but we're all reviewers now. We don't build together the same way. The arguments about naming conventions? I actually miss those. They were annoying but they meant we cared about the same thing.
Pull requests per author are up 20%, but incidents per PR are up almost 24%.8 Are you seeing that?
Tomás: Yes. And it's the thing that keeps me up. More output, more bugs per unit of output. The agents are fast and prolific and they produce code that looks right. The failure mode isn't obvious errors. It's subtle ones. Missing edge cases. Business logic that's statistically inferred instead of actually understood.9
I catch most of it. But "most" is doing a lot of work in that sentence, and the volume is only going up.
If the AI tools disappeared tomorrow, how would you feel?
Tomás: (laughs) I read that interview where the engineer said "relief."6 I get it. I wouldn't go that far. But I'd feel something like... recognition? Like, oh right, this is what the job was. This is the thing I trained for.
But they're not disappearing. So the real question is whether I'm doing something that matters or just performing oversight while the system slowly outgrows my ability to check it. I honestly don't know. Ask me in eighteen months. If I'm still here.
Will you be?
Tomás: I'm in QA again. I always survive. Nobody wants the job.
Footnotes
1. McKinsey, "The AI Revolution in Software Development," April 2026. https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-ai-revolution-in-software-development
2. Anthropic, 2026 Agentic Coding Trends Report. https://resources.anthropic.com/hubfs/2026%20Agentic%20Coding%20Trends%20Report.pdf
3. Atomic Robot, "AI Writes Better Code. We're Getting Worse at Reviewing It," February 25, 2026. https://atomicrobot.com/blog/ai-review-fatigue/
4. Pathmode.io, "Orchestration Without Intent Is Just Expensive Guessing," March 11, 2026. https://pathmode.io/blog/orchestration-era-needs-intent
5. MIT Sloan Management Review, "The Emerging Agentic Enterprise," 2026.
6. Swarmia, "Is software engineering still a craft?" February 12, 2026. https://www.swarmia.com/blog/is-software-engineering-still-craft/
7. Pattern documented across multiple practitioner accounts; see CNN Business, April 8, 2026. https://www.cnn.com/2026/04/08/tech/ai-software-developer-jobs
8. CodeRabbit, AI vs Human Code Generation Report, 2026. https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report
9. Addy Osmani, "Code Review in the Age of AI," January 5, 2026. https://addyo.substack.com/p/code-review-in-the-age-of-ai
