
It sounds like this is a poisoning attack, which has been shown to be pretty trivially defeated [1]. That said, while the poisoning countermeasures in the facial-recognition case were shown to generalize easily, we don't yet know how general a defense could be built for a VLM. That means someone holding a 0-day poisoning attack on a VLM could cause a lot of trouble, or even deaths, before an updated model with counter-training could be deployed.

[1] https://arxiv.org/abs/2106.14851
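
For what "counter-training" could look like in practice, here's a minimal sketch: fine-tune the model on worst-case perturbed inputs so perturbation-based cloaks lose their effect. To be clear, this is not the method from [1], just plain PGD-style adversarial training, and the model, data, and hyperparameters are all placeholders:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def pgd_perturb(model, x, y, eps=8/255, alpha=2/255, steps=5):
        # Craft an L-inf bounded perturbation; stands in for a poisoning "cloak".
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            loss.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()
                delta.clamp_(-eps, eps)
            delta.grad.zero_()
        return delta.detach()

    # Placeholder model and data; swap in a real VLM image encoder and dataset.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(100):
        x = torch.rand(16, 3, 32, 32)    # stand-in images
        y = torch.randint(0, 10, (16,))  # stand-in labels
        delta = pgd_perturb(model, x, y) # simulate worst-case cloaks
        opt.zero_grad()
        F.cross_entropy(model(x + delta), y).backward()  # train on perturbed inputs
        opt.step()

The point being: the defender gets to run something like this *after* seeing the attack, which is why static poisoning tends to lose the arms race. The open question is how long that retraining cycle takes for a large VLM.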
