As the number of self-corrections increases, so does the likelihood that the model will say "oh, that's not right, let me try a different approach" after it has already found the correct solution. Then you can get into a second-guessing loop that never arrives at the correct answer.
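The second-guessing dynamic can be sketched as a two-state chain. In this toy simulation (the abandon/find rates are made-up illustration values, not measurements of any model), the fraction of runs that end on the correct answer plateaus at p_find / (p_abandon + p_find) rather than approaching 1, no matter how many correction passes you allow:

```python
import random

def self_correct(steps, p_abandon=0.1, p_find=0.3, seed=0):
    """Simulate one run of repeated self-correction passes.

    p_abandon: chance the model second-guesses an already-correct answer
    p_find:    chance a pass turns an incorrect answer correct
    (both rates are invented for illustration)
    """
    rng = random.Random(seed)
    correct = False
    for _ in range(steps):
        if correct:
            if rng.random() < p_abandon:
                correct = False  # "that's not right, let me try again"
        else:
            if rng.random() < p_find:
                correct = True
    return correct

# The fraction of runs ending correct settles near
# p_find / (p_abandon + p_find) = 0.75, not near 1,
# however many passes you allow.
runs = 10_000
frac = sum(self_correct(50, seed=i) for i in range(runs)) / runs
```

More passes only push you toward the chain's stationary distribution, which is fixed by the two rates, not toward certainty.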

If the self-check is more reliable than the solution-generating process, that's still an improvement, but as long as the model makes small errors when correcting itself, those errors will still accumulate. On the other hand, if you can have a reliable external system do the checking, you can actually guarantee correctness.
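The external-checker point can be sketched as a generate-and-verify loop. Everything here is hypothetical (the solver, the target answer 42, the success rate); the point is only that if the verifier is sound, the loop may fail to return, but it can never return a wrong answer:

```python
import random

def generate(rng):
    # Noisy "solver": produces the right answer only some of the time.
    # (The 0.6 success rate and target 42 are invented for illustration.)
    return 42 if rng.random() < 0.6 else rng.randrange(100)

def verify(answer):
    # Reliable external check (e.g. run the tests, re-derive the result).
    return answer == 42

def solve(seed=0, max_tries=1000):
    rng = random.Random(seed)
    for _ in range(max_tries):
        candidate = generate(rng)
        if verify(candidate):   # a sound verifier never accepts a wrong answer
            return candidate
    return None  # may give up, but never answers incorrectly
```

The asymmetry is the whole point: a noisy generator plus a sound checker trades efficiency (retries) for correctness of whatever is accepted.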



Error correction is possible even if the error correction is itself noisy. The error does not need to accumulate; it can be made as small as you like at the cost of some efficiency. This is not a new problem, and the relevant theorems are incredibly robust and have been known for decades.
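One classical construction in this spirit (von Neumann's reliable computation from unreliable components) is a repetition code refreshed by a majority vote that is itself noisy. A toy simulation, with all rates invented for illustration: a logical bit stored in 25 noisy copies survives 200 rounds of noise plus noisy correction almost always, while a single unprotected bit decays to a coin flip:

```python
import random

def run_memory(rounds, n, flip=0.05, vote_err=0.05, rng=None):
    """Store one logical bit (0) in n physical copies.

    Each round every copy flips with prob `flip` (noise); then a majority
    vote rewrites every copy, and the vote result is itself applied
    incorrectly to each copy with prob `vote_err`.  All parameters are
    illustrative, not from the thread.
    """
    rng = rng or random.Random(0)
    bits = [0] * n
    for _ in range(rounds):
        bits = [b ^ (rng.random() < flip) for b in bits]        # noise
        maj = int(sum(bits) > n / 2)                            # vote
        bits = [maj ^ (rng.random() < vote_err) for _ in bits]  # noisy rewrite
    return int(sum(bits) > n / 2)  # decoded logical bit

trials = 300
# 25 copies, noisy correction every round: logical bit almost always survives.
protected = sum(run_memory(200, 25, rng=random.Random(i)) == 0
                for i in range(trials)) / trials
# 1 copy, no redundancy: after 200 noisy rounds it is near a coin flip.
bare = sum(run_memory(200, 1, vote_err=0.0, rng=random.Random(i)) == 0
           for i in range(trials)) / trials
```

Per-copy error stays bounded each round instead of accumulating, and the residual logical error shrinks as you add copies — the efficiency-for-reliability trade mentioned above.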


Can you link me to a proof demonstrating that the error can be made arbitrarily small? (Or at least a precise statement of the theorem you have in mind.) I would think that if the last step of error correction turns a correct intermediate result into an incorrect final result with probability p, that puts a lower bound of p on the overall error rate.
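The claimed lower bound is easy to check algebraically, under the assumption that the noisy final step cannot rescue an incorrect intermediate result more reliably than it preserves a correct one (r ≤ 1 − p). A tiny sweep over hypothetical rates:

```python
def error_rate(q, p, r):
    """P(final answer wrong), for a two-stage pipeline.

    q: P(intermediate result correct)
    p: P(noisy final step corrupts a correct result)
    r: P(noisy final step rescues an incorrect result)
    (all rates hypothetical)
    """
    return 1.0 - (q * (1.0 - p) + (1.0 - q) * r)

# Sweep q and r on a grid; whenever the rescue rate r is no better than
# the keep rate 1 - p, the overall error never drops below p.
p = 0.05
worst = min(error_rate(q / 100, p, r / 100)
            for q in range(101)
            for r in range(96))  # r/100 ranges up to 1 - p = 0.95
```

Since 1 − (q(1−p) + (1−q)r) ≥ 1 − (q(1−p) + (1−q)(1−p)) = p, the minimum of the sweep lands exactly at p.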



