Differently Human

2025-11-27 by Horacio Gonzalez

The Future of Software Development

Rewriting the Role: Developers in the Age of LLMs: Part VI


We began this series with a question about fear.

Every time a new abstraction arrives, someone declares the developer obsolete. From Fortran to Java, from punch cards to low-code tools, the pattern repeats. Each wave promises the end of our craft. Each wave ends up expanding it.

I started writing this series to address that fear. To tell experienced developers: we've survived this before, we'll survive it again.

But as I wrote, something shifted.

The historical pattern held. The reassurance was real. Developers will adapt, just as we always have.

What surprised me, what kept me up at night, wasn't the fate of those already in the field.

It was the next generation.

How do we help young people learn to code when the code itself can be generated? How do we teach judgment when students can skip straight to working solutions? How do we build the mental models that come from struggle when the struggle has been automated away?

That's not a question about tools or workflows. It's a question about education itself.

And it requires changes at every level: in how we teach, how we mentor, how we hire, and how we think about what it means to become a developer.


What we've learned

Over five posts, we've traced a path:

We looked back at the history of developer panic and found the same pattern: abstraction rises, panic follows, adaptation happens. From assembly programmers fearing Fortran to C developers dismissing Java, we've always feared that making things easier would make us obsolete.

We reframed the fear of deskilling. Automation doesn't erase skill, it shifts it. When compilers automated machine code, we learned algorithms. When LLMs automate boilerplate, we learn critical thinking. The craft moves up a level.

We examined new workflows: from syntax recall to intent articulation, from implementation to orchestration, from writing code to curating systems. The bottleneck has moved from "how do I write this?" to "what do I actually need?"

We faced hard truths about career growth: entry-level work is vanishing, not because we don't need developers, but because we don't know how to evaluate someone who's never built the basics from scratch. The work that used to prove you could code has been automated away.

And we reimagined education: teaching collaboration with AI instead of prohibition, grading process over output, designing friction on purpose to restore productive struggle.

The thread connecting all of this?

Change creates fear. Fear must be acknowledged. And people must be supported through change, at every level.

Not just for those already coding, but for those learning to.


What's actually changing

I could paint you a picture of 2030. Tell you exactly what a junior developer's first week will look like, what tools they'll use, how AI will fit into their workflow.

But I'd be lying.

The truth is, I don't know. And neither does anyone else.

There's a real possibility we'll hit some form of singularity before 2030. Not necessarily AGI taking over, but a pace of change so rapid that society transforms faster than we can predict. The kind of scenario Leopold Aschenbrenner describes in his Situational Awareness piece, where the gap between technological capability and human adaptation becomes a chasm.

What I am fairly sure of: if humans are still working in 2030, there will be people "whispering in the ears of computers" to make them work as intended.

Maybe their job will look more like orchestrating a herd of AI developers than writing functions. Maybe it will be about architecture and coordination rather than implementation. Maybe the title won't even be "developer" anymore.

But it will be software development. The craft of turning human intent into computational reality.

The question isn't whether the role will exist. It's whether we'll prepare people for it fast enough.


The pace of change

Here's what worries me most: the gap between how fast things are changing and how fast we're adapting.

Educational institutions move on decade timescales. Curricula get revised every few years. Teaching methods evolve slowly, tested and refined over generations.

But AI capabilities seem to leap forward every few months.

This isn't sustainable.

By the time a university updates its CS curriculum to account for LLMs, the models will have evolved three generations. By the time we standardize "best practices for AI-assisted development," those practices will be obsolete.

We can't plan for 2030. We need to prepare for continuous transformation.

That means:

  • For juniors: Learn to learn. The specific tools you master today will be outdated quickly. The ability to adapt, to interrogate new tools, to understand systems deeply enough to orchestrate them, that endures.

  • For seniors: Your role is no longer to be the source of stability and accumulated knowledge. It's to model adaptation. To show juniors that nobody has all the answers, and that's okay. To teach resilience in the face of constant change.

  • For education: Stop teaching "the right way to code in 2025" and start teaching "how to think about systems when the tools keep changing." Design curricula that can flex. Create assessments that measure adaptability, not just current knowledge.

The pace of change will only accelerate. We need to be agile, not just in our development methodology, but in how we think about growth, learning, and the profession itself.

We need to support people through these changes as they arrive, not two or five years later.


What endures

In my teaching, there's a moment I watch for.

It's not when a student gets their code to compile. It's not when their tests pass. It's not even when they successfully prompt an LLM to generate exactly what they need.

It's the moment when they stop asking "how do I make this work?" and start asking "why does this work this way?"

That's understanding.

And understanding is what actually matters.

It's always been true in engineering, but with LLMs it becomes undeniable: machines will always beat us at memorizing, at recalling syntax, at brute-force computational power. They can hold entire codebases in context, retrieve any API documentation instantly, generate solutions faster than we can type.

But understanding? That's ours.

The ability to look at a system and see not just what it does, but why it was built that way. To sense when something feels wrong even if the tests pass. To know which trade-offs matter and which don't. To hold the full scope of a problem in your head, not as facts but as relationships.

This is what separates the developers I most respect from everyone else.

They understand. They have informed intuition. They see patterns that aren't in the documentation. They know when to trust the AI's output and when to dig deeper. They can explain not just what the code does, but what problem it solves and what problems it creates.

That kind of understanding doesn't come from memorization. It comes from struggle, curiosity, and time spent genuinely thinking about how systems work.

LLMs can generate code. They can't generate understanding. And understanding is what makes you valuable, whether you're working with punch cards or prompting the latest models.


What machines still can't do

Let's be honest: the list of "things only humans can do" gets shorter every year.

Five years ago, we said LLMs couldn't write coherent code. Now they can. Three years ago, we said they couldn't understand context. Now they do. Last year, we said they couldn't reason through complex problems. Now they're getting disturbingly good at it.

So what's actually left?

Accountability.

When an LLM generates a healthcare algorithm, who's responsible if it's biased? When it writes financial code, who answers when the audit fails? When it creates a system that affects real people's lives, who carries the weight of "did I do this right?"

The machine can produce. But it can't be held accountable.

It can't feel the professional responsibility of shipping code that matters. It can't experience the shame of breaking production, the pride of solving a hard problem elegantly, or the moral weight of building something that could help or harm.

It can't care.

And caring (about correctness, about users, about consequences) is what transforms code into craft.

This isn't a capability gap. It's a fundamental difference. LLMs don't have skin in the game. We do.

Understanding, judgment, ethics, responsibility: these aren't things we do better than machines. They're things machines fundamentally can't experience.

And that's not a limitation of current AI. It's a feature of being human.


What we do now

So where does that leave us?

If you're an experienced developer:

Your role is changing from knowledge holder to sense-maker. The junior developers around you can generate code as fast as you can. What they can't do yet is know when it's right, when it's wrong, and why.

  • Make your reasoning visible. Don't just fix the bug; explain how you knew where to look.
  • Share your mental models. The patterns you see aren't obvious to people who haven't built them through years of mistakes.
  • Model adaptation. Show that you're still learning, still figuring out these tools, still sometimes wrong.
  • Teach judgment, not just technique.

If you're a junior developer or learning to code:

You have access to superpowers previous generations didn't. Use them. But don't let them replace the understanding that makes you valuable.

  • Generate code, then rebuild it manually to understand the choices made.
  • When AI gives you a solution, ask: what problem is this actually solving? What trade-offs did it make? What could go wrong?
  • Practice debugging things you didn't write. That's most of your career.
  • Build the mental models. When you don't understand something, research it. The temptation will be to ask AI for another explanation. Sometimes you need to struggle with the concept yourself.
  • Document your process. The lab notes approach from Part V isn't just for students; it's how you prove understanding.

If you're an educator:

The curriculum you perfected over the last decade might be obsolete in two years. That's not your failure, it's the reality.

  • Design for adaptability, not mastery of current tools.
  • Teach understanding over execution. Grade process over output.
  • Make AI use visible and structured, not hidden and shameful.
  • Share what works. We're all figuring this out together.
  • Accept that your role is changing too. You're not the source of knowledge anymore; you're the guide through uncertainty.

If you're a company hiring or managing developers:

Entry-level isn't what it used to be. The signals you relied on (can they build a CRUD app? implement a binary tree?) are now automatable.

  • Rethink onboarding. Juniors need to learn to evaluate and orchestrate, not just implement.
  • Value judgment and system sense, not just output.
  • Create environments where it's safe to say "I don't know."
  • Invest in mentorship. The knowledge transfer matters more than ever, even as the knowledge itself changes faster.

The specifics will keep changing. The tools will keep evolving. The job titles might shift.

But the fundamental work (turning human intent into systems that work) will remain.

The question is whether we'll adapt fast enough to prepare people for it.

Not in five years. Now.


Why this matters

I wrote this series because I kept hearing the same fear from different people.

Senior developers worried their expertise was becoming obsolete. Junior developers wondered if they'd picked the wrong career. Students questioned whether learning to code still made sense. Teachers felt lost, watching their carefully crafted curricula crumble. CTOs asked whether they should just hire fewer developers.

Different roles, different stakes, but the same underlying anxiety:

Software development is dead.

I don't believe it.

I'm a software developer. I teach part-time. I'm senior, but I work with juniors every day. I see the fear up close, from every angle.

And I see something else: we're making the same mistake we've always made.

We're conflating the tools with the work.

Software development has always been about one thing: negotiating with computers to make them do our will.

From flipping switches on ENIAC to writing assembly, from C to Java, from frameworks to cloud services, from Stack Overflow to LLMs: the tools change, the work doesn't.

We've always been translators between human intent and computational action.

Now, more literally than ever, we're having that conversation in natural language.

The machine understands more. It can do more. It makes more suggestions. But it still can't decide what should be built, why it matters, or whether it's right.

That's still us.

As long as humans need computers to do things, someone will need to bridge that gap. The tools will keep evolving. The role endures.

The question isn't whether we'll still be needed. It's whether we'll prepare the next generation fast enough.

Whether we'll adapt our teaching, our hiring, our mentorship, our craft itself to meet the pace of change.

Whether we'll focus on understanding over memorization, judgment over execution, responsibility over output.

We need to get this right.

Not because the machines are coming for our jobs. But because the world needs more people who can shape what those machines do.


The craft continues

On my first day as a programming intern in 1998, my manager handed me a stack of AWT component specifications and said: "Migrate these to Swing."

I spent weeks moving buttons, panels, and layouts. Tedious work. Boring work. But I learned how GUI frameworks actually worked, not from reading about them, but from touching every piece.
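To give a sense of the work, here's a minimal, hypothetical sketch of the kind of change each component needed, written in today's Java rather than the code I actually touched back then: an AWT window next to its Swing equivalent, same behavior, different component model.

```java
// Illustrative sketch only, not the original 1998 code.
import java.awt.Button;
import java.awt.Frame;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

class MigrationSketch {

    // Before: heavyweight AWT component, rendered by native peers
    static Frame awtWindow() {
        Frame frame = new Frame("Legacy");
        Button ok = new Button("OK");
        ok.addActionListener(e -> frame.dispose()); // close on click
        frame.add(ok);
        frame.pack();
        return frame;
    }

    // After: lightweight Swing component; UI work belongs on the Event Dispatch Thread
    static JFrame swingWindow() {
        JFrame frame = new JFrame("Migrated");
        JButton ok = new JButton("OK");
        ok.addActionListener(e -> frame.dispose()); // same behavior, different lifecycle
        frame.getContentPane().add(ok);
        frame.pack();
        return frame;
    }

    public static void main(String[] args) {
        // Swing components should be created and shown on the EDT
        SwingUtilities.invokeLater(() -> swingWindow().setVisible(true));
    }
}
```

Multiply that by every button, panel, and layout in the stack of specs, and you get weeks of tedium, and, along the way, a feel for both lifecycles.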

Today, an LLM could do that migration in minutes.

And that's fine.

Because the work was never about moving buttons. It was about understanding component lifecycles, event models, and how user interfaces hold together. The tedium was just the tuition.

Now the tuition looks different.

Students don't learn by migrating AWT to Swing anymore. They learn by interrogating AI-generated code, asking why it made certain choices, finding the bugs that elegant syntax can hide.

The questions change. The understanding doesn't.

The end is nigh... as always

In the 1950s, assembly programmers feared Fortran would make them obsolete. In the 1990s, C developers dismissed Java as a toy. In the 2010s, we worried low-code would replace us. In 2025, we fear LLMs.

Every generation of developers climbs a little higher, forgets a little of what the last one feared, and rediscovers what never changed:

Curiosity. Judgment. Care.

The future of software development isn't less human. It's differently human.

And that's something worth building.

Keep asking better questions. Keep teaching the next generation. Keep caring about what we build and why.

The machines will keep getting smarter. Our job is to stay thoughtful.

The craft continues.