Hacker News

Mass surveillance is never an appropriate solution, let's start with that.

I don't believe tech has an outsized responsibility to solve society's problems, and in fact it's generally better if we don't pretend that more tech is the answer.

Advocating for more money and more prioritization for this area of law enforcement is still the way to go if it's a priority area. Policing seems to be drifting towards "mall cop" work, giving easy fines, enabled by lazy electronic surveillance casting a wide net. Let's put resources towards actual detective work.



I would prefer advocating more money for mental health, since it would provide additional benefits in other areas of society down the line too. I can't imagine child porn consumption arising from a healthy mind.


When tech creates problems, should tech try to solve them, or should tech be limited?

We are honestly deceiving ourselves by pretending we have not created new realities that are problematic at scale. We have. They are plentiful. And if people aren't willing to walk back tech to reduce the problems, and people aren't willing to accept technical solutions that are invasive, then what are we to do? Are we just to accept a new world with all these problems stemming from unintended consequences of tech?


“Tech”? What do you mean by “tech?” Do you expect Apple to remove the camera, storage, and networking capabilities of all their devices? That’s the “tech” that enables this.


I mean "tech" did a lot of messed up things - there is a reason why "what is your favorite big tech innovation: 1) illegal cab company 2) illegal hotel company 3) fake money for criminals 4) plagiarism machine" is a funny joke.

Enabling people to talk to each other without all their communication being wiretapped and archived forever is not one of those, I would say.


Those aren't really "tech innovations", though, aside from maybe the plagiarism machine.

Uber and AirBnB are just using very-widely-available technology—that some taxi services and hotels are also using!—and claiming that they're completely different when the main difference is that they're just ignoring the laws and regulations around their industries.

Cryptocurrencies are using a tech innovation as a front for what's 99.9999% a financial "innovation"...which is really just a Ponzi scheme and/or related scams in sheep's clothing.

LLMs are genuinely a tech innovation, but the primary problem they bring to the fore is really a conversation we've needed to have for a while about copying in the digital age. The signs have been there for some time that such a shift was coming; the only question was exactly when.

In none of these cases is technology actually doing anything "messed up". Companies that denote themselves as being "in the tech industry" do bad things all the time, but blaming the technology for the corporate (and otherwise human) malfeasance is very unhelpful. In particular, trying to limit technological progress, or ban widely useful technological innovations because a small minority of people use them for ill, is horrifically counterproductive.

Enforce the laws we have better, be more willing to turn the screws on people even if they have lots of money, and where necessary put new regulations on human and corporate behavior in place (eg, requiring informed consent to have works you created included in the training set of an LLM or similar model).


> When tech creates problems, should tech try to solve them, or should tech be limited?

You haven't explained what problem 'tech' has created, so I'm confused as to what your point is.

CSAM isn't a problem caused by 'tech' unless you're going back to the invention of the camera, and I think that toothpaste is well out of the tube.

Additionally, and this is where a whole lot of arguments about this go wrong, the important part, the actual literal abuse, is human to human. There is no technology involved whatsoever.

Technological involvement may be an escalation of the offense, but it's vanishingly secondary.


Mass surveillance is bad, but I think there are versions of it that are far less bad than others.

Apple's proposed solution would, in theory, have reported only cases that were more likely than not to be already-known instances of CSAM (i.e. not pictures of your kids), and if nothing is reported, can we say that someone was really surveilled? In some very strict sense, yes, but in terms of outcomes, no.
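To make the "nothing is reported below a threshold" idea concrete, here is a minimal sketch of the control flow. It is a simplification, not Apple's implementation: the real design used a perceptual hash ("NeuralHash") plus cryptographic blinding and threshold secret sharing so the server learned nothing until enough matches accrued. Here SHA-256 stands in for the hash, and the fingerprint strings and threshold value are made up.

```python
import hashlib

# Hypothetical fingerprint database of known images (placeholder values).
KNOWN_FINGERPRINTS = {
    hashlib.sha256(b"known-example-1").hexdigest(),
    hashlib.sha256(b"known-example-2").hexdigest(),
}
REPORT_THRESHOLD = 30  # made-up value: no human review below this count

def count_matches(images):
    """Count images whose fingerprint appears in the known set."""
    return sum(
        hashlib.sha256(img).hexdigest() in KNOWN_FINGERPRINTS
        for img in images
    )

def should_report(images):
    # Accounts below the threshold are never surfaced to anyone,
    # which is the sense in which "nothing is reported" above.
    return count_matches(images) >= REPORT_THRESHOLD
```

In this sketch an account holding thousands of ordinary photos produces zero matches and is never flagged, while only an account accumulating many known-fingerprint images crosses the threshold.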


How about we start with this version of surveillance: currently it is almost impossible, and frankly unwise, for kids to ask for help with abuse, because they'll end up in this sort of system (with no way out, one might add):

https://www.texastribune.org/2022/07/26/texas-foster-care-ch...

So how about we implement mass surveillance by giving victims a good reason to report crimes? Starting with not heavily punishing victims who do come forward. Make the foster care system actually able to raise kids reasonably.

Because, frankly, if we don't do it this way, what's the point? Why would we do anything about abuse if we don't fix this FIRST? Are we really going to catch sexual abuse, then put the kids into a state-administered system ... where they're sexually, and physically, and mentally, and financially abused?

WHY would you do that? Obviously that doesn't protect children; it only hides abuse, and it protects perpetrators in exchange for allowing society to pretend the problem is smaller than it is.


OK, and in theory, with new generative algorithms, do you think it's still OK? Suppose Apple implements this, and suppose someone finds a way to generate meme images that trigger Apple's algorithm (but a human can't see anything wrong with them). Suppose someone who wants to harm you sends you a bunch of those memes and you save them. What will happen? Or what happens if somebody uses a generative algorithm to create CSAM-like images using people's faces as a base while the rest of the image is generated? Should this also trigger a CSAM match?
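The collision worry above comes from the fact that perceptual hashes, unlike cryptographic ones, are designed so that similar-looking images hash alike, which also means deliberately crafted, visually different images can collide. The toy "average hash" below (a real but very simple perceptual-hash family; the pixel values are invented for illustration) shows the effect: each pixel becomes one bit depending on whether it is above the image's mean brightness.

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, above/below the mean."""
    mean = sum(pixels) / len(pixels)
    return tuple(p > mean for p in pixels)

bright = [200, 10, 200, 10]  # harsh high-contrast pattern
dim = [60, 50, 60, 50]       # faint pattern, looks nothing alike
# Both produce the bit pattern (True, False, True, False): a collision.
```

Production perceptual hashes (pHash, Apple's NeuralHash) are far more sophisticated than this, but the same structural property applies, and adversarial collisions against NeuralHash were in fact demonstrated publicly after the model was extracted.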

Also, you cannot guarantee that Apple or Google will scan only for known instances of CSAM. What if a government orders them to scan for other types of content under the hood, like documents or who knows what else, because it wants to screw a particular person? (For the sake of example, suppose the target is a journalist who discovered shady dealings and the government wants to put them in prison.) You don't have access to either the algorithms or the scan list they are using. A system that could be abused usually means a system that will, at some point, be abused.


These criticisms are reasonable criticisms of a system in general, but Apple's design featured ways to mitigate these issues.

I agree that the basic idea of scanning on device for CSAM has a lot of issues and should not be implemented. What I think was missing from the discourse was an actual look at what Apple was suggesting, in terms of technical specifics, and how the design aimed to avoid these problems.


Apple's mitigations and their inadequacy were discussed.


Mass surveillance isn't necessarily bad. It depends on how it's implemented. The solution you describe is basically how it works with the intelligence agencies, in that only a minuscule fraction of the data collected in bulk ever reaches human eyes. The rest ends up being discarded after the retention period.

In terms of outcomes, almost nobody is actually surveilled, as the overall effect is the same as no data having been collected on them in the first place.

That said, I am personally more comfortable with my country's intelligence agencies hoovering up all my online activity than I am with the likes of Apple. The former is much more accountable than the latter.


Correction: the rest just ends up getting searched thousands of times, in direct violation of the US constitution.


If your ex-spouse was a contractor for a government agency with access to the mass surveillance machine, would you still feel comfortable "that only a miniscule fraction of the data collected in bulk ever reaches human eyes?"

What if you were a candidate for political office, pushing opinions that angered large swaths of the Intelligence Community?

The "minuscule fraction" of content is not surfaced by some random roll of the dice - it's the definitionally most interesting content, in the sense that some human went specifically looking for it in the heap of content caught in the dragnet. And it only needs to be interesting to at least one person with the clearance to search for it.

Maybe that means it's a video of a child being abused, and some morally upstanding federal officer is searching for it because anyone possessing it is ethically and legally culpable for the abuse of that child... Or maybe it's a PDF containing evidence of the FBI kidnapping and torturing innocent civilians, and some morally corrupt federal officer is searching for it because anyone possessing it is a liability who needs to be silenced... Or maybe it's a JSON file containing the GPS locations of an individual for the past year, and some emotionally scorned federal contractor is searching for it because that individual is their ex-spouse who's moved on to a new partner.

Are you really prepared to put your faith in the trustworthiness and moral clarity of the population of 100k+ people with federal security clearances?


What leads you to believe that access to search these datasets is some sort of unregulated, unmonitored free-for-all for anyone allowed to wander into an intelligence agency building?

The scenarios you invented sound very far-fetched to me; if they did happen, I very much doubt the perpetrator would be able to get away with it.


> In 2021 alone, the FBI conducted up to 3.4 million warrantless searches of Section 702 data to find Americans’ communications

https://www.eff.org/deeplinks/2023/04/internal-documents-sho...

> At least a dozen U.S. National Security Agency employees have been caught using secret government surveillance tools to spy on the emails or phone calls of their current or former spouses and lovers in the past decade, according to the intelligence agency’s internal watchdog.

https://www.reuters.com/article/us-usa-surveillance-watchdog...


> Mass surveillance is never an appropriate solution, let's start with that.

I think we’re well beyond that point now. Whether or not encryption is allowed, and however private you believe your virtual life to be, in the physical world surveillance is the norm. Your physical location, biometric information, and relationships can and will be monitored and recorded.


Every other aspect of life has been impacted by the computer's ability to process lots of information at speed. To say "no, policing must not use these tools but everyone else can" seems - well, quixotic, maybe?

If illegal data (CP) is being transferred on the net, wiretapping that traffic and bringing hits to the attention of a human seems like a proportional response.

(Yes, I know, it's not going to be 100% effective, encryption etc, but neither is actual detective work.)


If you have reasonable evidence, wiretapping a suspect to gain more evidence is fine. On the other hand, wiretapping everyone in the hope of finding some initial evidence is not okay at all.


But that's just a restatement of the OP's position ("Mass surveillance is never an appropriate solution"). You're not attempting to justify that position.


That's just an axiom for me; no justification needed. My life is my life, and it is not the business of the state to watch every step I take as long as I am not affecting others in any relevant way. If you convince me that I or society as a whole would be better off if I allowed the state to constantly keep an eye on me, then I might change my opinion and grant the state permission to violate my privacy.


> That's just an axiom for me, no justification needed

Congrats, you've got a religion.


That's nonsense; every worldview must be grounded in some axioms, and that does not make it a religion. I can break it down somewhat more for you: the state has no powers besides the ones granted by its citizens. I value my privacy highly and need very good reasons to grant the state permission to violate it. Catching criminals does not clear that bar; there are other ways to do it that do not violate my privacy.


It’s guaranteed by our constitution, among other reasons. Searching my communications for no reason is, by definition, an “unreasonable search.”


That document with 27 amendments?


Yes?

If you want to get it amended, then by all means, make a case for why it should be amended.

In the meantime you wanted to know why mass surveillance isn’t an option. The answer “because it’s against the law” is a simple, good answer.

If you want to know why we decided as a nation to make that such a fundamental law that it is in our constitution, you could do worse than reading about what prompted the writing of the Bill of Rights.

I agree with a lot of the original reasoning.


> The answer “because it’s against the law” is a simple, good answer.

While often true, at all times there have also been morally wrong laws, so it would not be unreasonable to counter that being written into law in itself means nothing. So you should always be prepared to pull out and defend the reasoning behind a law, which you also hinted at in your following sentences.


Why is it not okay at all? That's what our intelligence agencies do with their bulk data collection capabilities, and they have an immense positive impact on society.


If you want to argue that they can scan people outside the country and not US citizens, and that that has a benefit, go ahead and make that argument. You might even convince me.

But it’s just begging the question to say there’s immense benefit to them searching US citizens’ communications without a reason.

That’s the whole question.

Show me why we should change the constitution which guarantees us freedom from this sort of government oppression.


I'm writing from a UK perspective so there's no underlying constitutional issue here like there might be in the US. Bulk data collection is restricted by specific laws and this mandates regular operational oversight by an independent body, to ensure that both the collection and each individual use of the data is necessary and proportionate.

Some of this will include data of British citizens, but the thing is, we have a significant home-grown terrorism problem and serious organised criminal gang activity, happening within the country. If intelligence analysts need to look at, for example, which phone number contacted which other phone number on a specific date in the recent past, there's no other way to do this other than bulk collect all phone call metadata from the various telecom operators, and store it ready for searching.

The vast majority of that data will never be seen by human eyes, only indexed and searched by automated systems. All my phone calls and internet activity will be in there somewhere, I'm sure, but I don't consider that in itself to be government oppression. Only if it's used for oppressive purposes, would it become oppressive.


> [...] and they have an immense positive impact on society.

For this I need proof.


s/positive/negative/



