
Well, currently the top LLM repeatedly fails to generate a stable set of pretty simple regexes for me, so we're pretty far from a full-blown LLM dystopia at this point.
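For context, here's a minimal sketch (hypothetical, not the parent's actual regexes) of the kind of "pretty simple" pattern at issue; when asked repeatedly, an LLM may return a subtly different variant each time:

```python
import re

# Hypothetical example task: match ISO-8601 dates like "2023-07-14".
# Month must be 01-12, day must be 01-31.
ISO_DATE = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

assert ISO_DATE.match("2023-07-14")       # valid date
assert not ISO_DATE.match("2023-13-01")   # month 13 rejected
assert not ISO_DATE.match("23-07-14")     # two-digit year rejected
```

The point isn't that the pattern is hard; it's that successive generations of the "same" regex aren't stable or interchangeable.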


LLMs are too dumb to be useful, but smart enough to be destructive.

You don't need to be able to generate correct regexes to create a blogspam article about regexes that will go to the top of the search results.


> LLMs are too dumb to be useful, but smart enough to be destructive.

I think this puts the blame in the wrong place: LLMs are too dumb to be useful, but it's the users who are too dumb to realise that, or too unethical to care.


It's also the creators, hosters, and startup founders who are overselling its capabilities.


I was lumping them in with 'users'; I don't think the evangelists are any less credulous than their marks.


Search is dying anyway, and this will just accelerate that trend.


The question is whether the rapid progress in LLMs continues over the next few years, or whether we reach another local maximum. Is "currently" going to improve rapidly?


As I understand it, we're approaching a local maximum with LLMs, which is what I base the previous comment on; that said, I'm fully a layman with no insight into the deeper layers of LLM R&D.



