
Have you an example of getting a coding chatbot to do exactly what you want?





The examples that you and others provide are always fundamentally uninteresting to me. Many, if not most, are some variant of a CRUD application. I have yet to see a single AI-generated thing that I personally wanted to use and/or spend time with. I also can't help but wonder what we might have accomplished if we had devoted the same amount of resources to developing better tools, languages, and frameworks for developers instead of automating the generation of boilerplate and selling developers' own skills back to them. Imagine if open source maintainers had instead been flooded with billions of dollars in capital. What might be possible?

And also, the capabilities of LLMs are almost beside the point. I don't use LLMs, but I have no doubt that for any arbitrary problem that can be expressed textually and is computable in finite time, an LLM will, in the limit as time goes to infinity, be able to solve it. The more important and interesting questions are what _should_ we build with LLMs and what should we _not_ build with them. Arguments about capability distract from these more important questions.


Considering how much time developers spend building uninteresting CRUD applications, I would argue that if all LLMs can do is speed that process up, they're already worth their weight in bytes.

The impression I get from this comment is that no example would convince you that LLMs are worthwhile.


The problem with replying to the proof-demanders is that they'll always pick it apart and find some reason it doesn't fit their definition. You must be familiar with that at this point.

Worse, they might even attempt to verify your claims, e.g. "When AI 'builds a browser,' check the repo before believing the hype" https://www.theregister.com/2026/01/26/cursor_opinion/

> exactly the way I wanted it to be built

You verified each line?


I looked closely enough to confirm there were no architectural mistakes or nasty gotchas. It's code I would have been happy to write myself, only here I got it written on my phone while riding the BART.

What? Why would you want to?

See, this is a perfect example of OP's statement! I don't care about the lines, I care about the output! It was never about the lines of code.

Your comment makes it very clear there are different viewpoints here. We care about problem->solution. You care about the actual code more than the solution.


> I don't care about the lines, I care about the output! It was never about the lines of code.

> Your comment makes it very clear there are different viewpoints here.

Agreed.

I care that the code output not include leaked secrets, malware installation, stealth cryptomining, etc.

Some others don't.
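
To make that concrete, here's a rough sketch (my own illustration, not a complete tool) of the kind of pre-merge check I have in mind: a small Python script that greps generated files for a few common secret patterns. Real scanners such as gitleaks or trufflehog use far larger rule sets; the patterns and exit-code convention here are just assumptions for the example.

    import re
    import sys
    import pathlib

    # Illustrative patterns only; a real scanner would use a much larger rule set.
    SECRET_PATTERNS = {
        "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
        "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
        "hardcoded token": re.compile(
            r"(?i)(api[_-]?key|secret|token)\s*=\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
        ),
    }

    def scan(path: pathlib.Path) -> list[str]:
        """Return human-readable findings for one file."""
        findings = []
        text = path.read_text(errors="ignore")
        for name, pattern in SECRET_PATTERNS.items():
            for match in pattern.finditer(text):
                findings.append(f"{path}: possible {name}: {match.group(0)[:20]}...")
        return findings

    if __name__ == "__main__":
        # Usage: python scan_secrets.py file1.py file2.js ...
        hits = [f for arg in sys.argv[1:] for f in scan(pathlib.Path(arg))]
        print("\n".join(hits) or "no obvious secrets found")
        sys.exit(1 if hits else 0)

Nothing fancy, and it obviously won't catch malware or cryptominers, but it's the difference between "I looked at the output" and "I checked the output".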


>not include leaked secrets, malware installation, stealth cryptomining etc.

Not sure what your point is exactly, but those things don't bother me because I have no control over what happens on other people's computers. Maybe you're insinuating that LLMs will create this; if so, I think you misunderstand the tooling, or mistake the tooling for the operator.


Is this a joke? Are you genuinely implying that no one has ever got an LLM to write code that does exactly what they want?

No. Mashing up other people's code scraped from the web is not what I'd call writing code.

Can you not see how you truly, deep down, are afraid you might be wrong?

It's clouding your vision.



