When Tools Learn, So Must We

2025-11-10 by Horacio Gonzalez

Deskilling or Reskilling in the Age of AI

Rewriting the Role: Developers in the Age of LLMs – Part II

Programmers Are Always Doomed... Until They’re Not

In the first part of this series, I looked back at the many times developers were declared obsolete. From Fortran to Java, from punched cards to low-code tools, every new abstraction seemed ready to kill programming, and every time we just moved one level up.

But this new wave feels different, doesn’t it?

Large Language Models can write real code. Not snippets, not templates. Entire functions, tests, even PRs that pass CI. When you see that, it’s hard not to wonder: if the machine can do this, what’s left for us?

That question usually hides another one:

Are we being deskilled?


The deskilling fear

The word deskilling comes from labor economics. It describes what happens when machines or processes remove the need for workers to use judgment and expertise, when a craft becomes button-pushing.

It’s a powerful image, and an easy one to apply to AI: “The model writes the code, so the programmer becomes a reviewer, a human safety net.”

But as Kwame Anthony Appiah wrote in The Atlantic (“The Age of De-Skilling – Will AI stretch our minds—or stunt them?,” October 2025), that framing misses something important. Automation doesn’t erase skill. It moves it.

“AI doesn’t deskill people. It shifts expertise to places the tools can’t reach.” — The Atlantic, 2025


From repetition to reasoning

Every wave of automation in software has done the same thing:

  • Compilers automated machine code, so we learned algorithms.
  • IDEs automated typing and syntax, so we learned architecture.
  • The cloud automated deployment, so we learned systems design.
  • LLMs automate boilerplate, so we must learn critical thinking.

When the tools get smarter, our value doesn’t disappear; it migrates from execution to interpretation.

The hard part of programming has never been typing. It’s understanding the problem, structuring the system, validating assumptions, and debugging what you didn’t expect. LLMs don’t change that; they just change which steps we spend our time on.


What we’re really learning now

Using AI to code is teaching us new literacies:

  • Framing: describing intent clearly enough that a model can follow it.
  • Reading critically: recognizing when generated code is elegant but wrong.
  • Debugging abstractions: tracing errors through layers you didn’t personally write.
  • Ethics and trust: deciding when not to automate.

That last one might be the most human skill of all.


From “lost skills” to new ones

The real danger isn’t forgetting how to write a for loop, it’s forgetting how to think about one. If you treat the model as an oracle, you’ll stop building the mental model that lets you reason about your own systems. But if you treat it as a partner, you’ll discover new forms of mastery.

The historian of technology David Noble once said:

“Every machine embodies a social decision: about what gets automated, and what remains a skill.”

Our decision, as developers, is whether we hand over thinking, or use automation to think deeper.


The craft endures

I don’t believe AI is deskilling us. It’s forcing us to rediscover what skill really means.

The craft of software has always been to translate messy human intent into precise computational action. Now we’re just doing it through a new medium: language.

When the tools learn, so must we. Not to compete with them, but to guide them. Not to defend what used to make us valuable, but to explore what will.

Because the next generation of developers won’t be less skilled. They’ll just be skilled at different things.


Coming next

In the next post, I’ll look at how this transformation is reshaping our roles, especially the gap between junior and senior developers, and how we can help new programmers grow in an age where code almost writes itself.