
> You couldn't find a clearer case of straw man. No one said ChatGPT is more reliable than human doctors. No one said anything about incorrect responses.

Both the shallow research paper and that anecdote are focused entirely on reliability. Hence, neither discredits my point or my question.

So how does a single anecdote show that ChatGPT is more reliable than human doctors? If that is not the point of these celebrations and mentions that both of you are making, then I can only assume it is far from the case, unless you have a direct answer to that question.

> The fact of the matter is that ChatGPT has diagnosed a disease which 17(!!!) doctors have missed. Even if incorrect responses occur in 999/1000 cases, it is still worth including ChatGPT in this process since it's so cheap.

So, as expected, you still need human doctors regardless, since LLMs are opaque black boxes that lack transparent reasoning and produce unpredictable outputs. How, then, does that show whether they can be used for medical advice?

Overall, this is clearly a question of whether to trust the output of an LLM. The whole counterpoint is that for every 'correct' diagnosis the LLM makes, there are incorrect and vacuous responses; on top of non-deterministic outputs, it offers no transparent explanation beyond repeating what it was trained on, which is enough to convince less expert users.



> So how does a single anecdote show that ChatGPT is more reliable than human doctors? If that is not the point of these celebrations and mentions that both of you are making, then I can only assume it is far from the case, unless you have a direct answer to that question.

No one is saying ChatGPT is more reliable than doctors. Please keep this discussion grounded in reality. The anecdote doesn't show that; it shows ChatGPT beating 17 doctors in this one case. That has NOTHING to do with reliability, and that assumption is a leap of logic you, and only you, are making.

> So, as expected, you still need human doctors regardless, since LLMs are opaque black boxes that lack transparent reasoning and produce unpredictable outputs. How, then, does that show whether they can be used for medical advice?

Yes, of course. That doesn't mean ChatGPT couldn't be incorporated into doctors' workflows and provide tangible value. No one is saying ChatGPT should replace doctors.

Stop arguing against made-up castles in the air. You are only fooling yourself.



