
I mean, yeah, if you specifically like lighting off fireworks at the gas station, you should buy your own gas station, make sure it's far away from any other structures, ensure that the gas tanks and lines are completely empty, and then do whatever pyromaniac stuff you feel like safely.

Same thing with OpenClaw. Install it on its own machine, put it on its own network, don't give it access to your actual identity or anything sensitive, and be careful not to let it do things that would harm you or others. Other than that, have fun playing with the agent and let it do things for you.

It's not a nuke. It can be contained. You don't have to trust it or give it access to anything you aren't comfortable being public.



There's absolutely no way to contain people who want to use this for misdeeds. They are just getting started now and will make the web utter fucking hell if they are allowed to continue.


> There's absolutely no way to contain people who want to use this for misdeeds.

There is no practical way to stop someone from going to a crowded mall during Christmas shopping season and mowing people down with a machine gun. Yet, we still haven't made malls illegal.

> ... if they are allowed to continue.

You may have a fantastic new idea on how we can create a worldwide ban on such a thing. If so, please share it with the rest of us.


If you can come up with a technical and legal approach that contains the misdeeds, but doesn't compromise the positive uses, I'm with you. I just don't see it happening. The most you can do is go after operators if it misbehaves.

I've been around since before the web. You know what made the Internet suck for me? Letting people act anonymously. Especially in forums. Pre-web, I was part of a local network of BBSes, and the best thing about it was that anonymity was simply forbidden. Each BBS operator in the network verified the identity of the user. Users had to post under their own names or be banned. We had moderators, but the lack of anonymity really ensured people behaved. Acting poorly didn't just affect your access to one BBS, but access to the whole network.

Bots spreading crap on the web? It's merely an increment over the problem of allowing anonymous users. You can't solve one while maintaining anonymity.


I don't care about the "positive" uses. Whatever convenience they grant is more than outweighed by skill and thought degeneration, loss of control and agency, etc. We've spent two decades learning about all the negative cognitive effects of social media, and LLMs are speedrunning further brain damage. I know two people who've been treated for AI psychosis. Enough.


Again, I'm not disagreeing with the harm.

But I think drawing the line of banning AI bots is highly convenient. If you want to solve the problem, disallow anonymity.

Of course, there are (very few) positive use cases for online anonymity, but to quote you: "I don't care about the positive uses." The damage it did is significantly greater than the positives.

At least with LLMs (as a whole, not as bots), the positives likely outweigh the negatives significantly. That cannot be said about online anonymity.


Okay, but what are you actually proposing? This genie isn't going back in the bottle.


At a minimum, every single person who has been slandered, bullied, blackmailed, tricked, has suffered psychological damage, etc. as a result of a bot or chat interface should be entitled to damages from the company authoring the model. These should be processed extremely quickly, without a court appearance by any of the parties, as the problem is so blatantly obvious and widespread there's no reason to tie up the courts with this garbage or force claimants to seek representation.

Bots must advertise their model provider to every person they interact with, and platforms must restrict bots that do not or cannot abide by this. If platforms can't do this, the penalties must be severe.

There are many ways to put the externalities back on model providers. This is just the kernel of a suggestion for a path forward, but all the people pretending this is impossible are just wrong.


> should be entitled to damages from the company authoring the model.

1. How will you know it's a bot?

2. How will you know the model?

Do you want to target the model authors or the LLM providers? If company X is serving an LLM created by academic researchers at University Y, will you go after Y or X? Or both?

> These should be processed extremely quickly, without a court appearance by any of the parties, as the problem is so blatantly obvious and widespread there's no reason to tie up the courts with this garbage or force claimants to seek representation.

Ouch. Throw due process out the door!

> Bots must advertise their model provider to every person they interact with, and platforms must restrict bots that do not or cannot abide by this.

This is more reasonable, but for the fact that the bots can simply state the wrong model, or change it daily.

Unfortunately, the simple reason your proposal will fail is that if country X does it, they'll be left far behind country Y that doesn't. It's national suicide to regulate in this fashion.


> 1. How will you know it's a bot?

> 2. How will you know the model?

Sounds like a problem for the platforms and model vendors to figure out!

> Do you want to target the model authors or the LLM providers? If company X is serving an LLM created by academic researchers at University Y, will you go after Y or X? Or both?

I mean providers are obviously my primary concern as the people selling something to the public, but sure, why not both.

> Ouch. Throw due process out the door!

There's lots of prior art for this; let's not pretend it's something new. The NLRB adjudicates labor complaints and disputes, the DoT adjudicates complaints about airlines, etc.

> This is more reasonable, but for the fact that the bots can simply state the wrong model, or change it daily.

Once again, sounds like a problem for the platforms to figure out! How do they handle spammers and abusers today? Throw up their hands? Guess they won't be able to do that for long!

> Unfortunately, the simple reason your proposal will fail is that if country X does it, they'll be left far behind country Y that doesn't. It's national suicide to regulate in this fashion.

Sounds like a diplomatic problem, if it actually is a problem. In reality the social harms of AI may exceed any supposed benefits. The optimistic case seems to be that AI becomes so powerful it causes a massive hemorrhaging of jobs in knowledge work (and later other forms of work). Still waiting to see any social benefits!


> Sounds like a problem for the platforms and model vendors to figure out!

> sounds like a problem for the platforms to figure out!

You'd have to fundamentally change how the Internet works to be able to figure these things out. To achieve this, you'd need cooperation from everybody, not just LLM providers.


> I don't care about the "positive" uses.

You should have stopped there.



