
Will engineers still be needed?
Let’s be real – when OpenAI released ChatGPT on November 30, 2022, just before the Advent of Code 2022 kicked off, I was immediately impressed. It could already churn out little code snippets, especially for those mind-numbing, repetitive tasks like string parsing. Honestly, I was pleasantly surprised by the initial capabilities – it felt like a genuinely helpful assistant. But I quickly realized it wasn’t a silver bullet. There were lots of mistakes in the generated code, and I saw it as a tool to augment my work, not replace it.
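To make that concrete, here is a hypothetical example of the kind of repetitive, Advent-of-Code-style parsing task those early models already handled well – turning lines like "2-4,6-8" into pairs of integer ranges (the input format and function name are my own illustration, not actual model output):

```python
def parse_range_pairs(lines):
    """Parse lines of the form 'a-b,c-d' into ((a, b), (c, d)) tuples."""
    pairs = []
    for line in lines:
        # Each line holds two ranges separated by a comma, e.g. "2-4,6-8".
        left, right = line.strip().split(",")
        a, b = (int(n) for n in left.split("-"))
        c, d = (int(n) for n in right.split("-"))
        pairs.append(((a, b), (c, d)))
    return pairs

print(parse_range_pairs(["2-4,6-8", "5-7,7-9"]))
# → [((2, 4), (6, 8)), ((5, 7), (7, 9))]
```

Tedious but mechanical code like this is exactly where the assistant shines – and exactly the kind of snippet you still want to read and test before trusting it.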
Fast forward a few years, and the landscape has shifted dramatically. We’re seeing a relentless stream of updates to Large Language Models (LLMs). Models like DeepSeek now rival ChatGPT at a fraction of the size, distilled Qwen models compete with Llama and Mistral, and Claude is dominating – all while big players like Google (with Gemma and Gemini, formerly Bard) and the ever-growing number of open-source models on Hugging Face keep pushing the boundaries. It’s clear that the quality of programming code generated by these models is constantly improving. And we haven’t seen the end of the improvements yet!
But the question remains: how long will it take developers to adapt and embrace these changes?
The truth is, our work has always been evolving. As far as I can remember, the tech industry has been in a constant state of flux. So the idea that developers will suddenly need to become ‘broader engineers’ – that’s not entirely new. It’s just a continuation of a trend that’s been happening for decades. However…
Will Engineers be “replaced” by AI?
I doubt it (ref: E.W. Dijkstra).
Especially when you see people who generated some nice, successful software start asking for… actual developers, like this request I found on a Discord channel of the Windsurf IDE:

OK, so you built a nice app using AI, but now the code is too complex to update and maintain? Because you built it using only… an engine with prompts, and now you’re stuck. Well, ehm…

The truth is: the LLMs are (at the time of writing) still just parroting as best they can, and that means the code these engines spit out is not always fit to be “maintainable”, “testable”, “changeable”, “refactorable”, etc.
But the AI models are speedy (as computers should be) and getting better at coding at an incredible pace as well. As a software engineer I am playing with it, using it, and running some small LLMs locally to try things out, like talking to documents using RAG, and I do see that the models are getting better at interpreting our prompts.
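“Talking to documents” with RAG boils down to two steps: retrieve the chunks of your documents most relevant to the question, then paste them into the prompt sent to the model. Here is a minimal sketch of that idea, with made-up document chunks and naive word-overlap scoring standing in for real embedding similarity:

```python
def top_chunks(question, chunks, k=2):
    """Return the k chunks sharing the most words with the question.

    A real RAG setup would rank by embedding similarity; plain word
    overlap is used here only to keep the sketch self-contained.
    """
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question, chunks):
    """Assemble the augmented prompt that would go to the local LLM."""
    context = "\n".join(top_chunks(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "The deploy script lives in scripts/deploy.sh.",
    "Unit tests run before every merge.",
    "The API server listens on port 8080 by default.",
]
print(build_prompt("Which port does the API server use?", docs))
```

The resulting prompt string would then be sent to a locally running model (for instance via Ollama); the retrieval step is what lets a small local LLM answer questions about documents it was never trained on.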
Being able to generate “gists” of code, and even the unit tests to verify the generated code, does shift the way engineers should look at their job. It enables us to ask more questions and, if you ask me, it brings us closer to our customers’ wants instead of closer to the code. However, developers being totally replaced? Again, doubtful. How to explain this? Well, maybe an image can help.
So if we can use ChatGPT to generate code and images, why not explain the difficulties of future programming with a nice cartoonish image? Let’s try it out. Here goes. What follows is an excerpt from a chat with ChatGPT. To be fair: this is really a summary, as it took some back and forth with the engine to get it to draw the image we wanted, so some repetitive parts have been cut out. Of course, we start with a very detailed prompt to ensure we get an image as close as possible to what we have in mind.
Me:
Oh, another subject: if we regard programming languages as layers of abstraction, like C is an abstraction over assembly, can we regard “prompting an LLM to get programming code” as the next layer of abstraction? …
Show a programmer over the years as an advertorial, every twenty years programming against the next level of the language. Start with “programming cards”, then “assembly”, then “C”, and last the “LLM prompt”. So these four steps should be represented. There should be a header for the poster: “Development will be automated in the future”.
Make a catchy poster explaining that fact..
ChatGPT:

Me:
It would be good to try a more cartoon style look. While maintaining the different aesthetics belonging to each era.
ChatGPT:

Me:
Adjustment: the “assembly” part could be more relaxed and titled “1980 Assembly C”. The 2000 panel can be titled “2000 RAD Studio”. The last one should show the developer sweating to get the LLM prompt exactly right, to generate the code the developer actually wants to have.
ChatGPT:
Good adjustments! Here’s how I’ll refine the image:
- 1980 – “Assembly C”
  - Adjust the title to “1980 Assembly C”.
  - Make the programmer look more relaxed, enjoying low-level coding instead of appearing overly stressed.
- 2000 – “RAD Studio”
  - Title update to “2000 RAD Studio” to represent the rapid application development era.
  - The environment can be more streamlined and GUI-heavy, showing a developer working efficiently with tools like drag-and-drop UI builders.
- Future – LLM Prompting (Struggles!)
  - The programmer is sweating and frustrated, trying to fine-tune their prompt to generate exactly the right code.
  - The LLM interface could show “error messages” or “unexpected results”, reinforcing the idea that automation isn’t perfect yet.
Me:
The idea is indeed to show that in the end the human developer is not gone, hence the title of the cartoon being a bit ironic. If you agree, please generate the image; otherwise, ask a clarifying question.
ChatGPT:
I completely agree! The irony in “Development Will Be Automated in the Future” is key—while automation progresses, the human developer is still struggling to get things right, proving that manual effort and expertise remain essential.

Me:
Can you add a (c)M.Koning to the bottom right of the image and change the characters to be female? Another adjustment for the image to be generated: for the last image of four, can we have “Vibe coding” instead of the term “LLM prompt”.

Me:
The image unfortunately contains the subimage for “2000” twice, and is missing the “1980” era.
Please update the image including the following update: the years should show 1965, 1985, 2005, 2025.
First, describe the image again in full, after that ask for confirmation before generating the image.
ChatGPT: (description cut)

Me:
Thanks! That one really captures the Vibe! Please provide a small summary of the fact that engineers will still be necessary, even when we can use LLMs to help us while coding. Explain why future maintenance of code could be a harder task than it is when only humans would originally have written the code.
ChatGPT:
Even with powerful LLMs assisting in writing code, engineers will remain essential—not just for technical execution, but for critical thinking, system design, debugging, and ethical judgment. LLMs generate code based on patterns, but they don’t fully understand context, business logic, or long-term maintainability the way humans do.
Why future code maintenance could be harder:
- Loss of human reasoning: When code is generated by an LLM, the rationale behind certain decisions may not be clear, especially if prompts were vague or inconsistent. Future developers might struggle to understand why something was done a certain way.
- Non-standard code patterns: LLMs might produce creative but unconventional solutions. This could lead to code that’s harder to read, test, or integrate, especially in large teams or legacy systems.
- Prompt-dependence drift: Small changes in prompt wording can result in vastly different code. That inconsistency can create long-term unpredictability in how a system evolves.
- Error amplification: LLMs can confidently produce subtly broken or insecure code. If not reviewed closely, these flaws compound, making future debugging and refactoring more difficult.
- Documentation gaps: Unless explicitly prompted, LLMs won’t always generate thorough documentation. This makes onboarding or modifying systems harder over time.
In short:
LLMs are powerful tools, but like any automation, they still need human oversight to ensure systems are coherent, maintainable, and robust over the long haul. Developers won’t become obsolete—they’ll just evolve into curators, strategists, and explainers of increasingly machine-generated systems.
End of chat.
Developing, coding, refactoring, playing with the tools. Fun times ahead! And what do you think? Feel free to leave a comment below!
References
A summary of links.
E.W. Dijkstra: On the foolishness of “natural language programming”.
Using RAG with AI (text in Dutch, but you can use your preferred browser translate of course)
Advent of code 2022: https://adventofcode.com/2022 and https://github.com/MelleKoning/adventofcode
Martin Fowler (April 2023): An example of LLM Prompting for programming
Vibe coding term coined: Vibe coding on wikipedia
Running AI models locally using openwebui and ollama via docker: https://github.com/MelleKoning/aifun/
Windsurf IDE (an IDE geared to the use of AI for coding tasks): https://windsurf.com/editor
Comparing models for coding tasks