
(Joking) Maybe they fed the LLM some postmodern text, so it picked up some notion of relativism and post-structuralism...

But no, LLMs make things up. It's a known problem, and it's called 'hallucination'. Even Wikipedia says so: https://en.wikipedia.org/wiki/Hallucination_(artificial_inte...

The machine currently does not have its own model of reality to check against. It is just a statistical process predicting the most likely next word, so errors creep in and it goes astray (which happens a lot).
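To make that concrete, here is a toy sketch (my own illustration, not how any real LLM is implemented): a tiny next-word table sampled autoregressively. There is no world model anywhere, only conditional word frequencies, so a fluent but false continuation is sampled just as readily as a true one.

  import random

  # Toy "language model": each word maps to a distribution over next words.
  # Hypothetical probabilities for illustration only.
  NEXT_WORD_PROBS = {
      "the":     [("cat", 0.5), ("moon", 0.3), ("treaty", 0.2)],
      "cat":     [("sat", 0.7), ("signed", 0.3)],
      "moon":    [("landing", 0.6), ("orbits", 0.4)],
      "treaty":  [("was", 0.5), ("ended", 0.5)],
      "sat":     [("down", 1.0)],
      "signed":  [("the", 1.0)],
      "landing": [("happened", 1.0)],
      "orbits":  [("earth", 1.0)],
      "was":     [("signed", 1.0)],
      "ended":   [("quietly", 1.0)],
  }

  def sample_next(word):
      """Sample the next word from the conditional distribution."""
      candidates = NEXT_WORD_PROBS.get(word, [("<end>", 1.0)])
      words, weights = zip(*candidates)
      return random.choices(words, weights=weights, k=1)[0]

  def generate(start="the", max_len=10):
      """Autoregressive generation: feed each sampled word back in."""
      out = [start]
      while len(out) < max_len and out[-1] != "<end>":
          out.append(sample_next(out[-1]))
      return " ".join(w for w in out if w != "<end>")

  print(generate())  # fluent-looking output, no guarantee of being true

Run it a few times and you can get things like "the cat signed the treaty ended quietly": locally plausible word-by-word, globally nonsense. Real LLMs condition on far more context, but the basic failure mode is the same.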

It's interesting that researchers are working to correct the problem: see these interviews with Yoshua Bengio https://www.youtube.com/watch?v=I5xsDMJMdwo and Yann LeCun https://www.youtube.com/watch?v=mBjPyte2ZZo

Interestingly, both scientists are talking about machine-learning-based models for this verification process. But those are also statistical processes, so errors may creep in with this approach as well...

Amusing analogy: the androids in "Do Androids Dream of Electric Sheep?" by Philip K. Dick also make things up, just like an LLM. The book calls these "false memories".




