Edited by ChatGPT
DUNE
In the Dune universe, roughly ten thousand years before the events of the books, humanity fought a Terminator/Matrix-like war against robots and machines. After their victory, "thinking machines" were outlawed for two reasons:
- They had enslaved parts of humanity.
- They had taken away the most essential human trait—the ability to think and reason for ourselves.
Back to Reality
Fast forward to today: something eerily similar feels like it’s happening. Humanity has created large language models (LLMs), and they’re getting better fast. When ChatGPT first appeared, it was decent at conversation, okay at generating images, but not very good at fantasy writing or code. Now, after years of training and development, it’s clear those weaknesses are shrinking.
For example, I recently asked GitHub Copilot Agent to build me a simple Pokémon API application. I’d tried something similar with Claude before, but the results were disappointing. This time, Copilot Agent nailed it—so well, in fact, that it left me a little uneasy about the future of software development. Of course, Pokémon APIs are extremely well-documented, with countless student projects on GitHub, so it had a clear advantage in that case.
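To give a sense of the kind of task I mean, here's a minimal sketch (not the code Copilot generated, just an illustration of the general shape) of a tiny app that queries the public PokéAPI at https://pokeapi.co:

```python
# Minimal sketch of a "Pokémon API" style app: fetch a Pokémon from the
# public PokéAPI (https://pokeapi.co) and print a one-line summary.
# Requires the third-party "requests" package.
import requests


def get_pokemon(name: str) -> dict:
    """Fetch basic data for a Pokémon by name from PokéAPI."""
    resp = requests.get(
        f"https://pokeapi.co/api/v2/pokemon/{name.lower()}", timeout=10
    )
    resp.raise_for_status()
    return resp.json()


def summarize(data: dict) -> str:
    """Build a short summary from the fields PokéAPI returns."""
    types = ", ".join(t["type"]["name"] for t in data["types"])
    return (
        f"{data['name']} (#{data['id']}): types={types}, "
        f"height={data['height']}, weight={data['weight']}"
    )


if __name__ == "__main__":
    print(summarize(get_pokemon("pikachu")))
```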
Are We Screwed?
That seems to be the key: if a task is well-defined and well-documented, AI performs impressively. But when the task is open-ended, creative, or ambiguous, AI still tends to produce… slop. Examples include AI-generated video, AI-assisted video game development, and creative writing (which is why AI novels read as generic and fan-fic-esque).
In conclusion: LLMs excel at structured, well-defined work, but they still fall flat when it comes to true creativity. And that's why there will always be creative work left for humans.