Hacker News

It’s very clever how you can tell it that it was wrong and it admits the mistake. Has it been written so that the operator’s instructions are given a higher weight than what it knows?

I asked it to generate Python code and then format it with black. It reproduced the original code and then what was supposed to be the formatted code, but the two were identical.

When told of this, it admitted it had made a mistake and this time correctly output the formatted code.
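The "identical output" problem is easy to check mechanically rather than by eye. black itself is a third-party tool, but as a rough stdlib stand-in, `ast.unparse` re-emits parsed code in a normalized layout, which makes it simple to verify that a "formatted" copy actually differs from the messy original (a sketch, not what the model does internally):

```python
import ast

src = "x=1;y=  2\nprint( x+y )\n"   # deliberately messy input

# Re-emit the parsed AST in a normalized layout -- a crude stand-in for a
# real formatter like black, which is a third-party package.
normalized = ast.unparse(ast.parse(src))
print(normalized)

# The check the commenter effectively did by eye: did anything change?
assert normalized != src
```

A real formatter preserves comments and makes finer-grained style choices; `ast.unparse` discards comments, so it only serves here to show that reformatting should produce a visibly different string.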



The operator prompt becomes part of the input vector (the model can accommodate a certain context depth), whereas the data it was trained on actually affects the model’s weights and biases. You can think of it as “evaluating this statement, in light of the prior context, against the training set”.
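The distinction can be sketched with a toy model (purely illustrative, nothing like a real transformer): at inference time the operator prompt is just more input tokens the model conditions on, and the weights stay frozen; only a training step actually changes them.

```python
class ToyModel:
    """Hypothetical one-parameter 'model' to contrast inference with training."""

    def __init__(self):
        self.weight = 0.5  # set during training, fixed at inference time

    def predict(self, tokens):
        # Inference: the prompt is just part of the input; weights are untouched.
        return self.weight * sum(tokens)

    def train_step(self, tokens, target, lr=0.1):
        # Training: the loss gradient actually updates the weight.
        pred = self.predict(tokens)
        grad = 2 * (pred - target) * sum(tokens)  # d/dw of squared error
        self.weight -= lr * grad

model = ToyModel()
operator_prompt = [1, 2]   # operator instructions, prepended to the input
user_query = [3]           # user request appended to the same context

before = model.weight
model.predict(operator_prompt + user_query)  # conditioning only
assert model.weight == before                # the prompt changed nothing

model.train_step(operator_prompt + user_query, target=4.0)
assert model.weight != before                # only training moves the weights
```

This is why telling the model it was wrong works within a conversation but does not "teach" it anything durable: the correction lives in the context window, not in the weights.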



