
It's fine if it isn't perfect, as long as whoever is spitting out answers assumes liability when the robot is wrong. But what people want is for the robot to answer questions with no liability attached, even though it is well known that the robot can be wildly inaccurate sometimes. They want the illusion of value without the liability of the known deficiencies.

If LLM output is like a magic 8-ball you shake, it isn't very valuable unless it serves as workload management for a human who will validate the fitness of the output.


