Hacker News | hardchaos's comments

This is super cool; the named regions are a nice touch!


"In this paper, we show both theoretically and empirically, that these state-of-the-art detectors cannot reliably detect LLM outputs in practical scenarios."

This was my finding as well. I abandoned the effort when I found that any model I trained would be inaccurate on short texts and could be circumvented with a bit of prompt engineering.


Most people do not build or train large models locally; by the time you get to that point, you'll be renting cloud compute. Any computer with a GPU that fits your budget is fine for getting started with intro models and Kaggle.


This is my experience too. I've regularly used XM4s over the past year, and my tinnitus is no better or worse, as far as I can tell.

