Hacker News

Anyone who has bothered to cut through the hype of ChatGPT would already have realised that it isn't smart at math. It is only regurgitating results it has already been trained on, presented back to you in a way that makes it 'appear' intelligent.

It doesn't even know why it gives you the wrong answer, even while presenting the generated output as correct. As another commenter said, it has limited, or in this case little to no, reasoning: it is unable to transparently explain itself and cannot produce novel solutions to unseen and unsolved problems in mathematics.

ChatGPT is as good as an abacus for mathematics, with the explainability of a brainless parrot.



GPT can be the generative (imagination) part of a system. The second part would be a system to validate. This means you can filter out the junk and keep the good samples, extending the training set. This is how the model can learn by performing massive search for solutions all on its own. A bit like AlphaGo.

So you might want to reevaluate it. It can sample really well, and sampling coherently in this combinatorial space is hard. Sampling is half the task; validation needs more work.
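The generate-then-validate loop described above can be sketched in a few lines. This is a toy illustration, not anything from the thread: the "generator" here is a random stub standing in for a model, and the validator is exact arithmetic checking. The point it demonstrates is the asymmetry the comment relies on, that verifying a candidate is far cheaper than generating a correct one, so filtered samples can safely extend a training set.

```python
import random

# Toy stand-in for a generative model: proposes candidate
# arithmetic claims of the form a + b = c, sometimes wrong.
def propose_candidate(rng):
    a, b = rng.randint(0, 99), rng.randint(0, 99)
    c = rng.choice([a + b, a + b + rng.randint(1, 5)])
    return (a, b, c)

# Cheap, exact validator: checking a claim is much easier
# than generating a correct one.
def is_valid(candidate):
    a, b, c = candidate
    return a + b == c

def grow_training_set(n_samples, seed=0):
    rng = random.Random(seed)
    return [c for c in (propose_candidate(rng) for _ in range(n_samples))
            if is_valid(c)]

verified = grow_training_set(1000)
# Every surviving sample is correct by construction,
# so the filtered set can be folded back into training data.
assert all(a + b == c for a, b, c in verified)
```

In an AlphaGo-style setup the validator would be a game engine or proof checker rather than arithmetic, but the structure is the same: generate broadly, verify cheaply, keep only what passes.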


Nobody has hyped ChatGPT as being good at math.



