Hacker News

Exactly. LLMs at their core are just fancy autocomplete. Extremely fancy, to be sure, and the output that they predict can be very useful - but people who anthropomorphize them or ascribe higher significance to the generated output seem to be missing this.
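To make the "fancy autocomplete" point concrete, here is a minimal sketch of the same generation loop at toy scale: a bigram model that counts which word follows which, then repeatedly predicts the most likely next word. Real LLMs replace the counting table with a neural network over long contexts, but the sampling loop has the same shape. (The corpus and function names here are illustrative, not from any real model.)

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM trains on vastly more text.
corpus = "the cat sat on the mat and the cat ran".split()

# Count next-word frequencies for each word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(token, steps):
    """Greedy autocomplete: append the most likely next word, repeat."""
    out = [token]
    for _ in range(steps):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # no observed continuation
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 4))  # prints "the cat sat on the"
```

Everything an LLM emits comes out of this kind of loop: one predicted token at a time, conditioned on what came before. The prediction function is enormously more sophisticated, but nothing in the loop itself "understands" or "intends" anything.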

