I don’t get this at all. I’m using LLMs all day, and I’m constantly having to make smart architectural choices that less experienced devs won’t be making. Are you just prompting and going with whatever the initial output is, letting the LLM make the decisions? Every moderately sized task should start with a plan. I can spend hours planning, going off and thinking, coming back to the plan and adding or changing things, and so on. Sometimes it will be days before I tell the LLM to “go”. I’m also constantly optimising the context available to the LLM and writing more specific skills to improve results. It’s very clear to me that knowledge and effort are still crucial to good long-term output.

Not everyone will get the same results. In fact, everyone is NOT getting the same results; you can see this by reading the wildly different feedback on HN. To some, LLMs are a force multiplier, while others claim they can’t get a single piece of decent output.
I think feeling this way is a result of how you’re choosing to use these tools. You’re choosing not to be in control and to do as little as possible.
Once you start using it intelligently, the results can be really satisfying and helpful.
People complaining about 1000 lines of code being generated? Ask it to generate functions one at a time and keep implementations small.
People complaining about having to run a linter? Ask it to run the linter automatically after every change.
People complaining about losing track? Have it log every modification to a file.
I think you get my point. You need to treat it as a super powerful tool that can do so many things that you have to guide it if you want to have a result that conforms to what you have in mind.
It's not that challenging; the answer is: it depends.
It's like a junior dev writing features for a product every day vs a principal engineer. The junior might add a feature with O(n^2) performance, while the principal has seen this before and writes it in O(log n).
If the feature never reaches significant scale, the "better" solution doesn't matter, but it might!
The principal may write it once and it's solid and never touched, but the junior's version might be good enough that no one ever needs to come back to it. The same goes for an LLM with the right operator.
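To make the junior-vs-principal example concrete, here's a hypothetical sketch (both function names and the "existing IDs" scenario are invented for illustration): the same lookup written with a linear scan per query versus a binary search on a pre-sorted list.

```python
import bisect

def junior_find_existing(incoming, known):
    """O(n*m): a linear scan of `known` for every incoming id."""
    return [i for i in incoming if i in known]  # `in` on a list is O(n)

def principal_find_existing(incoming, known_sorted):
    """O(m log n): binary search into a pre-sorted list."""
    hits = []
    for i in incoming:
        pos = bisect.bisect_left(known_sorted, i)
        if pos < len(known_sorted) and known_sorted[pos] == i:
            hits.append(i)
    return hits

known = list(range(0, 1000, 2))   # even numbers 0..998
incoming = [1, 2, 500, 999]
# Both give the same answer; only the asymptotics differ.
assert junior_find_existing(incoming, known) == \
       principal_find_existing(incoming, sorted(known)) == [2, 500]
```

Both versions are "correct", which is exactly the point above: whether the quadratic one ever matters depends on whether the inputs ever get large.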
There's that, but I actually think LLMs are becoming very good at not making the bad simple choice.
What they're worse at is the bits I can't easily see.
An example: I was recently working on a project building a library with Claude. In isolation, each piece of the code looked excellent. But when I wrote some code making use of it, several conceptually similar functions turned out to have subtly mismatched signatures.
Different programmers might each have picked different patterns, but they would probably have applied them consistently across the various projects they worked on. To an LLM these mismatches are just happenstance; it feels no friction from them.
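A hypothetical illustration of the kind of mismatch described above (all function names invented): three loaders that each look fine in isolation but disagree on argument order and naming for the same concepts.

```python
import inspect

def load_json(path, *, strict=True):       # keyword-only flag, named `strict`
    return ("json", path, strict)

def load_yaml(strict, path):               # flag first, positional
    return ("yaml", path, strict)

def load_toml(path, validate=True):        # same concept, renamed `validate`
    return ("toml", path, validate)

# A human calling all three together feels the friction immediately;
# an LLM that generated each one separately never does.
sigs = {f.__name__: list(inspect.signature(f).parameters)
        for f in (load_json, load_yaml, load_toml)}
# sigs now shows three different shapes for the same idea.
```

Each function passes review on its own; the inconsistency only shows up at the call sites.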
A real project with real humans writing the code would notice the mismatch, even if they weren't working on those parts at the same time, simply from working on it across, say, a weekend.
But how many more decisions do we make that are convenient only for us meat bags, and that an LLM doesn't notice?
Yes, but now you know about that class of problem. So you learned something! As an engineer, you now have a choice about what to do with it.
Better yet, go up one level and think about how to avoid the classes of problems you don't know about yet: how can the LLM catch these before it writes the code, etc.
What? Of course it makes a difference when I direct it away from a bad solution towards a good one. I know as soon as I review the output: either it has done what I asked, or it hasn't and I make a correction. Why would I need to wait 5 years? That makes no sense, I can see the output.
If you're using LLMs and you don't know what good/bad output looks like, then of course you're going to have problems, but such a person would have the same problems without the LLM...
The problem is the LLMs are exceptionally good at producing output that appears good.
That's what it's ultimately been tuned to do.
The way I see this play out is output that satisfies me but that I would not have produced myself.
Over a large project that adds up, and typically it's glaringly obvious to everyone but the person using the LLM.
My only guess as to why is that most of what we do, and why we do it, we're not conscious of. The threshold at which we'd bother to intervene is higher than the effort it originally takes to do the right thing.
If these things don't apply to you, then I think you're coming up on a golden era.