
We don't have the infrastructure for it yet, but models could digitally sign every message they generate, using a key assigned to that model.

That would prove the message came directly from the LLM output.

That, at least, would be more difficult to game than a captcha, which can be MITM'd.
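A minimal sketch of the sign-and-verify flow, using stdlib HMAC for brevity. The key and helper names here are hypothetical; a real deployment would use an asymmetric scheme such as Ed25519 so that anyone can verify a signature without holding the signing secret:

```python
import hmac
import hashlib

# Hypothetical per-model secret. A real scheme would use an asymmetric
# key pair, publishing only the verification key.
MODEL_KEY = b"example-model-signing-key"

def sign(message: str, key: bytes = MODEL_KEY) -> str:
    """Attach an HMAC-SHA256 tag asserting the message came from the model."""
    return hmac.new(key, message.encode(), hashlib.sha256).hexdigest()

def verify(message: str, tag: str, key: bytes = MODEL_KEY) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(message, key), tag)
```

Note that editing even one character of the message invalidates the tag, so the signature only attests to verbatim model output.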





Hosted models could do that (provided we trust the providers). Open-source models could embed watermarks.

It doesn’t really matter, though: you can ask a model to rewrite your text in its own words.



