Hacker News

You’d feel better if it were two people you don’t know? Because obviously any random person is 100% accurate, never mistaken, never making shit up?

I don’t understand the mindset, I really don’t. Why are humans held to such a lower standard?




Despite all the anthropomorphizing of LLMs, you must have come across already how each has VERY DISTINCT failure modes?

Actually… no. Now that you mention it, and thanks for the interesting thought, the failure modes seem pretty similar to me.

Shoddy research / hallucination, tendency to lose the thread, lack of historical / background context… the failure modes are at least qualitatively similar.

Show me an LLM failure and I’ll show you a high profile journalist busted for the same thing. And those are humans who focus on these things!


Humans as a class are error prone, but some humans in their respective fields are very, very good. It's often not terribly hard to figure out who those folks are based on resume and credentials, and as a shortcut we can look for markers like terminology, specifics, and confidence, at least when the stakes are low: deciding what to read, say, versus cancer care for your mom.

AI can trip all the right signals and fool these shortcuts while sometimes being entirely full of shit, and it has no resume or credentials to verify should we desire to check.

If you have such credentials and vouch for it, I can consider your trustworthiness rather than its. If you admit you yourself are reliant on it, then this no longer holds.



