"In this paper, we show both theoretically and empirically, that these state-of-the-art detectors cannot reliably detect LLM outputs in practical scenarios."
This was my finding as well. I abandoned the effort when I found that any model I trained was inaccurate on short texts and could be circumvented with a bit of prompt engineering.
Most people do not build or train large models locally; if you get to that point, you'll be renting cloud compute. Any computer with a GPU that fits your budget is fine for getting started with intro models and Kaggle.