That's literally what an LLM is.

They predict the most likely next word.



That's wrong, and even if it were not, it would still not fix the problem. What if the most likely response is "kill yourself"?


'Predicts the most likely tokens' is not the same as 'pulls people towards normality'.
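
For what it's worth, here is a minimal sketch of what "predicts the most likely next token" literally means, assuming the Hugging Face transformers library, with gpt2 purely as an illustrative model:

    # Illustrative sketch only: greedy next-token prediction with a small causal LM.
    # "gpt2" is just an example; any causal language model works the same way.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of France is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits          # shape: (batch, seq_len, vocab_size)

    next_token_logits = logits[0, -1]            # scores over the vocabulary for the next token
    next_token_id = torch.argmax(next_token_logits).item()  # greedy: the single most likely token
    print(tokenizer.decode(next_token_id))

Note that real deployments usually sample from this distribution (with temperature, top-p, etc.) rather than always taking the argmax, so "most likely next word" is itself a simplification of how responses are actually generated.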



