
I noticed the same thing, but wasn't able to put it into words before reading that. Been experimenting with LLM-based coding just so I can understand it and talk intelligently about it (instead of just being that grouchy curmudgeon), and the thought in the back of my mind while using Claude Code is always:

"I got into programming because I like programming, not whatever this is..."

Yes, I'm building stupid things faster, but I didn't get into programming because I wanted to build tons of things. I got into it for the thrill of defining a problem in terms of data structures and instructions a computer could understand, entering those instructions into the computer, and then watching victoriously while those instructions were executed.

If I was intellectually excited about telling something to do this for me, I'd have gotten into management.





Same. This kind of coding feels like it got rid of the building aspect of programming that always felt nice, and it replaced it entirely with business logic concerns, product requirements, code reviews, etc. All the stuff I can generally take or leave. It's like I'm always in a meeting.

>If I was intellectually excited about telling something to do this for me, I'd have gotten into management.

Exactly this. This is the simplest and tersest way of explaining it yet.


Because you are not coding, you are building. I've been coding since I was 7 years old, now I'm building.

I'd go one step higher, we're not builders, we're problem solvers.

Sometimes the problem needs building, sometimes not.

I'm an engineer: I see a problem and want to solve it. I don't care whether I have to write code, have an LLM build something new, or even destroy something. I want to solve the problem for the business and move on to the next one; most of the time, though, that means having an LLM write code.


Maybe I don't entirely get it, but what is stopping you from just continuing to code?

Speaking for myself, speed. I’d be noticeably slower than my peers if I was crafting code by hand all day.

That's what I'm doing on my codebases, while I still can. I only use Claude if I need to work on a different team's code that uses it heavily. Nothing quite gets a groan from me like opening up a repo and seeing CLAUDE.md

Same same. Writing the actual code is always a huge motivator behind my side projects. Yes, producing the outcome is important, but the journey taken to get there is a lot of fun for me.

I used Claude Code to implement an OpenAI 4o-vision-powered receipt scanning feature in an expense tracking tool I wrote by hand four years ago. It did it in two or three shots while taking my codebase into account.

It was very neat, and it works great [^0], but I can't latch onto the idea of writing code this way. Powering through bugs while implementing a new library or learning how to optimize my test suite in a new language is thrilling.

Unfortunately (for me), it's not hard at all to see how the "builders" that see code as a means to an end would LOVE this, and businesses want builders, not crafters.

In effect, knowing the fundamentals is getting devalued at a rate I've never seen before.

[^0] Before I used Claude to implement this feature, my workflow for processing receipts looked like this: tap an iOS Shortcut, enter the amount, snap a pic of the receipt, type up the merchant, amount and description for the expense, then have the shortcut POST that to my expense-tracking toolkit, which then POSTs it into a Google Sheet. This feature eliminated the need for me to enter the merchant and amount. Unfortunately, it often took more time to confirm that the merchant, amount and date details OpenAI provided were correct (and to correct them when details were wrong, which was most of the time) than it did to type out those details manually, so I just went back to my manual workflow. However, the temptation to just glance at the details and tap "This looks correct" was extremely high, even if the info it generated was completely wrong! It's the perfect analogue to what I've been witnessing throughout the rise of the LLMs.
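For what it's worth, the "confirm the extracted details before accepting them" step described above is the kind of thing a few lines of code can at least partially enforce. Here's a minimal, hypothetical Python sketch (the function and field names are mine, not from the actual tool) that sanity-checks the merchant, amount and date an LLM extracts before anything gets POSTed onward:

```python
from datetime import datetime
from decimal import Decimal, InvalidOperation

def validate_expense(fields):
    """Return a list of problems with an LLM-extracted expense record.

    An empty list means the record passed basic sanity checks; a human
    should still eyeball the values, since a well-formed field can
    still be wrong (e.g. the right date format but the wrong date).
    """
    problems = []

    # Merchant must be a non-empty string.
    merchant = (fields.get("merchant") or "").strip()
    if not merchant:
        problems.append("merchant is empty")

    # Amount must parse as a positive decimal number.
    try:
        amount = Decimal(str(fields.get("amount", "")))
        if amount <= 0:
            problems.append("amount is not positive")
    except InvalidOperation:
        problems.append("amount is not a number")

    # Date must be ISO-formatted (YYYY-MM-DD).
    try:
        datetime.strptime(fields.get("date", ""), "%Y-%m-%d")
    except ValueError:
        problems.append("date is not YYYY-MM-DD")

    return problems
```

A check like this catches the obviously malformed extractions automatically, but it can't catch a plausible-looking hallucination, which is exactly why tapping "This looks correct" without reading is so tempting and so dangerous.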


What I have enjoyed about programming is being able to get the computer to do exactly what I want. The possibilities are bounded by only what I can conceive in my mind. I feel like with AI that can happen faster.

> get the computer to do exactly what I want.

> with AI that can happen faster.

well, not exactly that.


For simple things it can. But for more complex things, that's where I step in.

Do you have an example of getting a coding chatbot to do exactly what you want?


The examples that you and others provide are always fundamentally uninteresting to me. Many, if not most, are some variant of a CRUD application. I have yet to see a single AI-generated thing that I personally wanted to use and/or spend time with. I also can't help but wonder what we might have accomplished if we had devoted the same amount of resources to developing better tools, languages and frameworks for developers, instead of automating the generation of boilerplate and selling developers' own skills back to them. Imagine if open source maintainers had instead been flooded with billions of dollars in capital. What might be possible?

And besides, the capacities of LLMs are almost beside the point. I don't use LLMs, but I have no doubt that for any arbitrary problem that can be expressed textually and is computable in finite time, in the limit as time goes to infinity, an LLM will be able to solve it. The more important and interesting questions are what _should_ we build with LLMs and what should we _not_ build with them. These arguments about capacity are distracting us from those more important questions.


Considering how much time developers spend building uninteresting CRUD applications I would argue that if all LLMs can do is speed that process up they're already worth their weight in bytes.

The impression I get from this comment is that no example would convince you that LLMs are worthwhile.


The problem with replying to the proof-demanders is that they'll always pick it apart and find some reason it doesn't fit their definition. You must be familiar with that at this point.

Worse, they might even attempt to verify your claims e.g. "When AI 'builds a browser,' check the repo before believing the hype" https://www.theregister.com/2026/01/26/cursor_opinion/

> exactly the way I wanted it to be built

You verified each line?


I looked closely enough to confirm there were no architectural mistakes or nasty gotchas. It's code I would have been happy to write myself, only here I got it written on my phone while riding the BART.

What? Why would you want to?

See, this is a perfect example of the OP's statement! I don't care about the lines, I care about the output! It was never about the lines of code.

Your comment makes it very clear there are different viewpoints here. We care about problem->solution. You care about the actual code more than the solution.


> I don't care about the lines, I care about the output! It was never about the lines of code.

> Your comment makes it very clear there are different viewpoints here.

Agreed.

I care that code output not include leaked secrets, malware installation, stealth cryptomining etc.

Some others don't.


>not include leaked secrets, malware installation, stealth cryptomining etc.

Not sure what your point is exactly, but those things don't bother me because I have no control over what happens on other people's computers. Maybe you're insinuating that LLMs will create these things; if so, I think you misunderstand the tooling, or mistake the tooling for the operator.


Is this a joke? Are you genuinely implying that no one has ever got an LLM to write code that does exactly what they want?

No. Mashing up other peoples' code scraped from the web is not what I'd call writing code.

Can you not see how you truly, deep down, are afraid you might be wrong?

It's clouding your vision.


This gets at the heart of the quality-of-results issues a lot of people are talking about elsewhere here. Right now, if you treat them as a system where you can tell it what you want and it will do it for you, you're building a sandcastle. If instead you also describe the correct data structures and the appropriate algorithms to use against them, as well as the particulars of how you want the problem solved, it's a different situation altogether. Like most systems, the quality of the output is in some way determined by the quality of the input.

There is a strange insistence, in the subtext of this question a lot of the time, on not helping the LLM arrive at the best outcome. I feel like we are living through the John Henry legend in real time.


> I got into it for the thrill of defining a problem in terms of data structures and instructions a computer could understand, entering those instructions into the computer, and then watching victoriously while those instructions were executed.

You can still do that with Claude Code. In fact, Claude Code works best the more granular your instructions get.


> Claude Code works best the more granular your instructions get.

So best feed it machine code?


Funny you say that. Because I have never enjoyed management as much as being hands on and directly solving problems.

So maybe our common ground is that we are direct problem solvers. :-)


For some reason this makes me think of a jigsaw puzzle. People usually complete these puzzles because they enjoy the process, and at the end you get a picture that you can frame if you want to. Some people seem to only want the resulting picture. No interest in the process at all.

I guess those are the same people who went to all those coding camps during their heyday because they heard about software engineering salaries. They just want the money.


When I last bought a Lego Technic set because I wanted to play with making mechanisms with gears and stuff, I assembled it according to the instructions, which was fun, and then the final result was also cool and I couldn't bear to dismantle it.


