Hacker Newsnew | past | comments | ask | show | jobs | submitlogin
[dead]
35 days ago | hide | past | favorite


Good hygiene. But the gap they admitted—NL attacks, prompt injection—won't get fixed by pointing another LLM at it.

LLM auditing LLM is like RAG solving reasoning. Same blindspot, twice.


It’s encouraging to see the conversation shifting from model performance toward execution responsibility and security structure.


Trand gold




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: