If you don’t know k8s, or any tech really, you can RTFM, you can generate or apply some premade manifests, you can feed the errors into the LLM and ask about it, you can google the error message, you can do a lot of things. Oftentimes, in the “real world” of software engineering, you learn by having zero idea of how to do something to start with, then gradually come up with ideas by screwing around with a particular tool or prototyping a solution and seeing how well it works.
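To make that concrete: a first pass at k8s usually starts with a trivial manifest like the one below (the names and image are illustrative placeholders, not anything canonical) that you apply, watch break, and iterate on:

```yaml
# Minimal Deployment manifest — app name and image are placeholder choices.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello
          image: nginx:1.27
          ports:
            - containerPort: 80
```

Apply it with `kubectl apply -f deployment.yaml`; when something goes wrong, the output of `kubectl describe deployment hello-app` is exactly the kind of error text you can paste into an LLM or a search engine.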
I agree that some of the above basically amounts to: it’s easier to learn new things. Which itself might sound ho-hum, but it really is a fundamental responsibility of software engineers to learn new things, understand new and complex problems, and learn how to do it correctly and repeatably. LLMs unquestionably help with this, even with their tendency to hallucinate: often a proof by contradiction (i.e., watching an over-confident chaos machine fail) teaches you more than a tool that spits out perfect solutions without requiring the operator to understand them.
However, I will say that there is a very large gulf between learning how to reason about complex systems or code and learning how to use the entropy machine to produce nominally acceptable work. Pure reliance and delegation of responsibility to the AI will torpedo a lot of projects that a good engineer could solve, and no amount of lines of code makes up for a poorly conceived product or a brittle implementation that the LLM later stumbles over. Good engineering principles are more important than ever, and the developer has to force the LLM to conform to those.
There are many things to question about agentic coding: whether it’s truly cost/effort effective, whether it saves time, whether it makes you worse at problem solving by handing you facile half-solutions that wither in the face of the chaos of the real world, etc. But it clearly isn’t a technology which “doesn’t do ANYTHING useful”, as some HN posters claim.