The examples that you and others provide are always fundamentally uninteresting to me. Many, if not most, are some variant of a CRUD application. I have yet to see a single AI-generated thing that I personally wanted to use and/or spend time with. I also can't help but wonder what we might have accomplished if we had devoted the same amount of resources to developing better tools, languages, and frameworks for developers instead of automating the generation of boilerplate and selling developers' own skills back to them. Imagine if open source maintainers had instead been flooded with billions of dollars in capital. What might be possible?
And also, the capabilities of LLMs are almost beside the point. I don't use LLMs, but I have no doubt that for any arbitrary problem that can be expressed textually and is computable in finite time, in the limit as time goes to infinity, an LLM will be able to solve it. The more important and interesting questions are what _should_ we build with LLMs and what should we _not_ build with them. These arguments about capability distract from those more important questions.
Considering how much time developers spend building uninteresting CRUD applications, I would argue that if all LLMs can do is speed that process up, they're already worth their weight in bytes.
The impression I get from this comment is that no example would convince you that LLMs are worthwhile.
The problem with replying to the proof-demanders is that they'll always pick it apart and find some reason it doesn't fit their definition. You must be familiar with that at this point.
I looked closely enough to confirm there were no architectural mistakes or nasty gotchas. It's code I would have been happy to write myself, only here I got it written on my phone while riding the BART.
See, this is a perfect example of OP's statement! I don't care about the lines, I care about the output! It was never about the lines of code.
Your comment makes it very clear there are different viewpoints here. We care about problem->solution. You care about the actual code more than the solution.
> not include leaked secrets, malware installation, stealth cryptomining, etc.
Not sure what your point is exactly, but those things don't bother me because I have no control over what happens on other people's computers. Maybe you're insinuating that LLMs will create these things; if so, I think you misunderstand the tooling, or mistake the tooling for the operator.