Well, currently the top LLMs repeatedly fail to generate a stable set of fairly simple regexes for me, so we're pretty far from a full-blown LLM dystopia at this point.
> LLMs are too dumb to be useful, but smart enough to be destructive.
I think this puts the blame in the wrong place: LLMs are too dumb to be useful, but it's the users who are too dumb to realise that, or too unethical to care.
The question is whether the rapid progress in LLMs continues over the next few years, or whether we reach another local maximum. Is "currently" going to improve rapidly?
As I understand it, we're approaching a local maximum with LLMs, and that's what I base the previous comment on. But I'm fully a layman, with no insight into the deeper layers of LLM R&D.