Hacker News

> There's a lot worse danger there, like potentially agreeing to things you would never have agreed to.

If it doesn't have my hand-written signature then it's not an agreement.

I can't "agree" to something I haven't read, haven't been notified about, and haven't agreed to.

A robot clicking things does not mean I agreed to whatever it clicked. It means the other end of the transaction can't tell the difference between me and a robot.



> If it doesn't have my hand-written signature then it's not an agreement.

I wouldn't bet my company on that.

If someone sends us an email - probably with legal weasel words saying it's only for use by the intended recipient - asking us to confirm something by following a private link and pressing a confirm button, and as far as that supplier is aware we have then done that and they have continued accordingly, then I would not be at all surprised if a court found that they were acting in good faith and we were not.

Similarly, if we sent something to a customer and their own IT system actively simulated the sequence of actions the intended recipient would take to confirm it, then I would hope that would stand up in court as well. Otherwise we enter a legal climate where no-one in business can ever say or do any little thing without some kind of verified human approval process. It's hard to imagine how annoying and inefficient that might become for everyone. Maybe it would prompt a new generation of electronic communications and record-keeping where authenticity was built in, unlike many of today's most common technologies - but it would still be a nightmare to do business until that happened.


> If someone sends us an email - probably with legal weasel words saying it's only for use by the intended recipient - asking us to confirm something by following a private link and pressing a confirm button, and as far as that supplier is aware we have then done that and they have continued accordingly, then I would not be at all surprised if a court found that they were acting in good faith and we were not.

Yes exactly. Given the escalating sophistication of malicious actors and their robots, the scenario you present is viable today. That's dangerous in ways that I cannot even begin to articulate, and I'm not even well-versed in contract law.

> if we sent something to a customer and their own IT system actively simulated a sequence of actions the intended recipient would take to confirm something then I would hope that would stand up in court as well.

I absolutely hope that would not stand up in court.

To my knowledge a contract requires (at a minimum) a meeting of minds and consideration. You cannot agree to something you did not know about. You cannot come to a meeting of minds if you weren't told about it. That payment could be for anything. You might think it's for this one service and I might think it's for all services provided, not just the one I clicked on. Who will a court side with if you cannot prove that I, a human, agreed to it?

> Otherwise we enter a legal climate where no-one in business can ever say or do any little thing without some kind of verified human approval process.

Yup! And I say that's a good thing given that non-humans cannot enter into agreements, and there are plenty of non-humans who have no idea what they're getting themselves into in the current form of electronic agreements.

> It's hard to imagine how annoying and inefficient that might become for everyone.

You mean... you might have to actually employ people to verify that your customers are actual humans? That's a good thing all around. Unless you can't afford to employ people, in which case: your business does not have a valid business model.


> To my knowledge a contract requires (at a minimum) a meeting of minds and consideration. You cannot agree to something you did not know about.

If you use software that auto-accepts everything sent to it, it may not create a contract, but that doesn't necessarily mean it doesn't create a prima facie reasonable expectation by the other party that you agreed to a contract. Especially if the software you intentionally set up is specifically designed to simulate a human action like clicking a button.

If that has any negative consequences for the other party, you could be on the hook for various kinds of negligence or fraud. You might even find out you did have a contract. A corporation can be held to a contract if it's accepted by an employee who a reasonable counterparty would have expected to have the authority to accept it... even if that employee was specifically told not to accept it and therefore did not have that authority. Computers aren't the only things that can go wrong.

Try auto-submitting a bunch of Amazon orders and refusing to pay.

It probably doesn't apply for this particular email nonsense, because if you're just using some email provider that practically everybody uses, a court's naturally going to be inclined to say that you've met the ordinary standard of care. I mean, I would say it's negligent and stupid to use any of the big email providers, but I'm not in charge.

The people who should be in actual trouble would be Microsoft. But of course that's heretical and won't happen.

> And I say that's a good thing given that non-humans cannot enter into agreements, and there are plenty of non-humans who have no idea what they're getting themselves into in the current form of electronic agreements.

Billions of dollars worth of securities trading happens per day without human approval. I wouldn't be surprised if it's actually trillions. Nobody gets out of those contracts if they intentionally set up software to trade. Not even if their software has horrible bugs and submits orders they'd never have approved manually.


> If it doesn't have my hand-written signature then it's not an agreement.

Fortunately, many courts do not agree, given that a lot of disabled folks would be SOL under the "physical ink signature only" theory of agreeing to contracts. Not to mention a bunch of e-commerce considerations.

> can't "agree" to something I haven't read

I realize that is part of an ANDed condition, but it's absolutely possible for people to agree--in very definitive ways--to something they didn't read.

The alternative leads to: "Judges hate this one weird trick! Just say you didn't actually read it!"




