But no: LLMs make things up, and it's a known problem called 'hallucination'. Even Wikipedia says so: https://en.wikipedia.org/wiki/Hallucination_(artificial_inte...
The machine currently does not have its own model of reality to check against; it is just a statistical process predicting the most likely next word. Errors creep in and it goes astray (which happens a lot).
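A toy sketch of the idea (nothing like a real LLM, just a bigram word counter, with a made-up corpus): prediction here is pure statistics over observed word pairs, with no model of what the words mean, so the output is fluent-looking but can be nonsense.

```python
from collections import defaultdict

# Hypothetical toy corpus; the model only sees word-pair frequencies.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog . "
).split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    # Pure statistics: return the most frequent follower of `word`.
    followers = counts[word]
    return max(followers, key=followers.get) if followers else "."

# Generate by repeatedly taking the most likely next word.
out = ["the"]
for _ in range(6):
    out.append(most_likely_next(out[-1]))
print(" ".join(out))  # fluent-looking, but it never checks against reality
```

The point of the sketch: nothing in `most_likely_next` knows whether a cat can sit on a cat; it only knows which words tended to follow which. Scale that up by many orders of magnitude and you get far better fluency, but the same lack of a reality check.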
Interesting that researchers are working to correct the problem: see interviews with Yoshua Bengio https://www.youtube.com/watch?v=I5xsDMJMdwo and Yann LeCun https://www.youtube.com/watch?v=mBjPyte2ZZo
Interesting that both scientists propose machine-learning-based models for this verification step. These are also statistical processes, so errors may creep in with that approach as well...
Amusing analogy: the androids in "Do Androids Dream of Electric Sheep?" by Philip K. Dick also make things up, just like an LLM. The book calls this "false memories".