Two years ago, an entry-level developer at IBM spent roughly 34 hours a week coding. The same role now involves less coding and more time with marketing teams, clients, and product roadmaps. The title hasn't changed much. The job has.
IBM announced in February that it would triple its entry-level U.S. hiring in 2026, cutting against the industry's quiet thinning of junior ranks. The logic, as IBM CHRO Nickle LaMoreaux put it:
"The entry-level jobs that you had two to three years ago, AI can do most of them. You have to rewrite every job."
She did. The new descriptions shift toward customer engagement, cross-functional collaboration, and what she calls "people-forward" work. Assessing machine-generated output. Spotting weak reasoning. Stepping in when automated systems fall short.
The pipeline argument is clean. Cut too many juniors now, LaMoreaux told attendees at Charter's Leading with AI Summit, and you starve the pipeline of future senior talent. Invest in entry-level hiring today, build the leadership bench for 2030. Straightforward enough.
Whether the new pipeline produces the same thing the old one did is a different matter.
The old version worked through friction. You learned to code by coding badly for years. You developed judgment about systems by building systems that broke. The tedious maintenance, the long hours inside legacy code: those were exactly the conditions under which expertise formed. And the crucial thing about that process was failure. You made mistakes. You recovered from them. Over time, the accumulation of your own recoverable errors built a disposition, a feel for when something was off, something review never quite teaches.
IBM's redesigned roles swap that friction for oversight. Junior employees review agent output, validate decisions, flag anomalies. The talent acquisition team emphasizes analysis, problem-solving, and effective AI use over narrow technical execution. In some ways, these sound like better jobs. Broader. More human. Less grinding. That appeal is real, and worth sitting with, because it makes the harder question easy to skip past.
When METR ran a randomized controlled trial with experienced open-source developers, they found AI tools made participants 19% slower on familiar codebases. The developers estimated they'd been 20% faster. A near-perfect inversion of perception and reality. And the data showed AI was least effective precisely where developers had deep prior exposure to the code. Seasoned engineers with years of depth in their own repositories couldn't accurately gauge what AI was doing to their work. So what happens to someone who never built that depth in the first place? Someone whose entire formative experience is reviewing output, who has never recovered from their own mistakes?
IBM is placing a real bet, backed by real hiring dollars, that oversight is a viable path to what LaMoreaux calls "more durable skills." Maybe it is. The honest answer is that nobody knows yet. The old pipeline took a decade to produce a senior engineer. The new one has been running for about eighteen months. IBM is building the entry point. Whether watching someone else's work, or something else's work, builds the same knowing as doing it yourself won't resolve on anyone's quarterly timeline.