I'm not sure if many people have experienced losing the "practical obscurity" this article talks about firsthand. Property records are public. So are WHOIS records. But it's unrealistic to manually search all the property records, phone books, WHOIS records to find someone you're looking for. However, some websites index these, and some search engines index those sites. I've had people contact me that I would prefer couldn't find me because my information was searchable due to a WHOIS record.
There are people out there slurping up all the information they can find in the public and attempting to connect it together. If you want to see what I mean, go check out your mylife.com profile. Mine has a big red warning because I have neighbors with court records. It says I "may" have sex offenses. What the actual fuck? This company needs to eat shit.
This isn't about people tracking you to give you better ads. It's about people tracking you to sell information to whoever wants to pay them. It's way fucking worse.
I think that website might have a goal of making money from people paying to remove the bad content. I've seen this for arrest record sites. If they can get high SEO for your name it could affect job/interviews, especially low level stuff.
I didn't know about this site, but checking myself out with it, its information is remarkably incorrect. Maybe that means I'm doing a good job of being a hermit? It claims I'm Christian (I was born into a Lutheran family, so that's an understandable mistake), am married (ha!) and maintain relationships with many people (ha), proceeding to list 5 people I've never heard of[0]. Salary is wrong, home value is wrong, and the only email address it lists is about 20 years out of date. Really no better than the other various people search pages I've seen, which also failed to impress. Hell, it even does the fake "we're looking up all this full report information for you... please pay at the next window" thing that they do. If this were the best the corps could do to track me, I would say we have nothing to worry about, but I doubt it is.
This can be entertaining. Much of my information is incorrect or pending. My sister's information is almost entirely wrong and does not note she has been dead for several years. It also says she "may" have sex offenses. My father's first and last name are correct, as is the age he was when he died. On the other hand, the date of his death is more than 20 years off and everyone "related" to him is wrong, other than myself.
And this is exactly what GDPR is meant to prevent. My registrar now provides free whois privacy, so people can't see what domains I own at a cursory glance. There are still a lot of (government) public records on me, but at least companies can no longer combine their own data with this and sell it to the highest bidder without my consent.
Yep. There I am. I put in an old address, and it found me at my current (new state). Job is way wrong. Relations are mostly correct with a few unrelated people.
But I didn't want to click. I'm paranoid about click-through. I just scrolled past. Nothing to see here. Who's this guy? Who cares... (>_<)
I continue to be amazed by the level of nearly complete indifference that the general populace still has towards privacy. I don't think the politicians are going to do much of consequence until their constituents think it is important. Those who understand the danger of what is happening should take every opportunity to help others become aware. Possibly the biggest challenge of this is how to avoid coming across as a total paranoid wacko.
That's because to the average person, the privacy fears of the average HNer make no sense. Even if something extremely bad might happen as a result of violations of my privacy, the chances seem fairly low. In that sense, it's no different than the millions of other high-impact, low-chance fears that we happily go through life ignoring.
What's your most convincing argument for why an average person should care more? Explain it to me like I'm 5.
Plenty that's extremely bad is already happening as a result of privacy violations. Probably affecting most of us. Trouble is they're invisible.
For one example, we saw with Cambridge Analytica that political advertising can be tuned to your personal hot button issues. Those worried by immigration can be sent an outright (and outrageous) lie that remaining in the EU will result in all of Turkey moving to the UK. Those with different interests/concerns got different ads.
Turns out many of them were illegal too. No one knew until too late because it was invisible to all except the advertiser, Facebook (in this case) and recipient.
Until it becomes visible - it's really hard to make people care, or realise. Perhaps a few on the receiving end of the Brexit/CA issue realised, if they remember the ads they saw, and read one of the many follow-up stories. Like anything algorithmic, unless you A/B test, how do you know what you saw that others didn't, what you aren't seeing, or where it discriminates?
Keep this up and we could easily lose most of the progress of the 20th century, invisibly.
> political advertising can be tuned to your personal hot button issues
And is this not what happens traditionally as well? Politicians give different speeches to different audiences. Before an election, candidates talk very differently to various demographic groups. Candidates who meet individual voters on market squares and public places obviously try to find out which of the voter's wires are hot and try to deliberately connect on those levels that are meaningful to the potential voter.
The structure and interpretation of politicians predates the internet and digital surveillance. "If you have to say it, it's not true" applies very well to politicians and especially to political advertisement, no matter if it's targeted or general.
>Candidates who meet individual voters on market squares and public places obviously try to find out which of the voter's wires are hot and try to deliberately connect on those levels that are meaningful to the potential voter.
That's because, in terms of consent, you're consenting to such actions by going out to the public square. You're not knowingly consenting to your life's history being sold and traded on a black market that's then used to target you for such speeches when browsing on the internet.
For your analogy to be comparable to what's going on, you should instead think of it in terms of a politician showing up solely to your house because you're the sole Democrat in a Republican city in, say, Texas. (This example is based on the fact that this is an American board.) This could be easily inferred from your social media, browsing history, metadata collected by the government, etc.
This is not a very convincing argument because the public will say that they will trust their own judgement and that they are simply not susceptible to lies by politicians especially if they don't match with what is said in mainstream media.
EDIT: Not saying the argument is incorrect, just that it will not convince the public.
I wasn't making an argument, I was addressing the logical fallacy of the parent comment.
>the public will say that they will trust their own judgement
The public trusted its own judgement before, during, and after the civil rights movement, and that didn't really work out too well in the public's favour, did it?
>...and that they are simply not susceptible to lies by politicians...
(Since this is an American board.) Do they believe that they weren't susceptible to the lies about the Iran-Contra Affair, Watergate, McCarthyism, surveillance by the government, MK Ultra, etc., either?
>...especially if they don't match with what is said in mainstream media.
See: the Leave Bus in the Brexit Referendum.
Clearly, billions of pounds are not going to the NHS, now, correct? Just because you see it and it's spread across social media, it doesn't mean it is implicitly true. All of your points were met for this example; yet it was ultimately found to be a total fabrication. How would they account for that? Instead, will your argument now be amended to "some of it doesn't match with social media"...?
> You're not knowingly consenting to your life's history being sold and traded on a black market that's then used to target you for such speeches when browsing on the internet
I don't think the mechanism matters fundamentally here. I do think it is wise to consider all political speech fraudulent by default, whether it happens on public spaces or on the internet. They're all selling anyway, and if someone approaches you to sell something you know they have more to gain than you do.
Then, we have this internet, which is somewhat good at targeting content based on your online activities. This internet is used by virtually everyone to do targeted advertising and promotion of products, ideas, and you name it. I find it weird this would not be OK when it comes to politicians and political parties. They're out selling as much as the next guy, and the way to do targeted advertising is online.
It is another discussion whether online tracking is good or bad, how much of that happens with or without consent, implicitly or explicitly, but because online tracking is prevalent it would be foolish to assume that nothing you do online will be tracked. Any advertisement, political or otherwise, must be assumed to be targeted at you before interpreting its meaning.
I don't find a single thing that would make a targeted political advertisement inherently worse than any other targeted non-political advertisement or any non-targeted political advertisement.
>And is this not what happens traditionally as well? Politicians give different speeches to different audiences.
People seldom understand how changes in quantity/ease/volume translate to qualitative changes...
Politicians giving "different speeches to different audiences" (something that happens since time immemorial) and the technologically amplified ability for all the social and traditional online media one consumes to give them 24/7 targeted echo-bubble ads, news, and recommendations based on what audience group they belong to, is like the difference between eating a whole cake and feeling a little stuffed, and gorging for a month on sweets upon sweets and dying from a heart attack...
The difference is traditionally it's a) visible, b) legal, and c) only targets in the most broad-brush terms the major groups in society - broadly in line with the manifesto. Breaching regulations in an advert, speech, or leaflet has resulted in consequences up to and including prosecution or throwing out the whole result. I believe the Swiss voided a referendum result only the other week because the electorate didn't have enough or accurate information.
There's good reason many nations have laws severely restricting political advertising including spending limits and extra restrictions near voting time, or banning custom pricing and other forms of discrimination. Saying whatever bullshit it takes to get the vote/sale must continue to come with serious consequences or we're right back where we were in the 17th and 18th centuries.
And yes, the regulatory systems do seem to be failing to cope with the new way of things.
I don't know the details of that particular case, or of the Swiss in general, but the idea sounds really disturbing. I could just as easily translate "the supreme court has now voided the result on the grounds that voters were not given full information, and the vote must be re-run." into:
"The elites decided that the expressed will of the people can be ignored, because they were not exposed to enough propaganda beforehand to convince them to select the choice that the elite wanted. We'll just keep holding new referendums and pumping out propaganda until they make the correct choice, i.e. the choice that the elite want."
Well it was a pre-existing law, so it's not like the govt turned round and effectively said "sorry, wrong answer - try again". Anyway, in this case it was the government that was found to have put out misleading information.
Not perfect, but not a bad sanction against achieving a result via disinformation or outrageous advertising claims.
So I see ads that other people don’t see? I think you failed to convince 90% of people it’s “extremely bad”. (I’d say something like WW2 is “extremely bad”.)
Death and destruction are absolutely terrible. They are not the only way to reshape or destroy a society. Like the parent comment said, the tools of mass manipulation that can be created with the data and platforms available today can have just as broad of an effect over time, if not more so because it's invisible.
Targeted ads/messages beyond very broad categories such as "business owner" or "looking for a vacation" should be made illegal. This time the law needs to save us from technology.
If that's really your marker, nothing but another war or the other horsemen will meet it. We can probably repeal all the laws on the books without hitting it.
The issue is not seeing something someone else may not - that's but a symptom that allows the burying of much illegal and discriminatory activity. There's been plenty of examples of both in recent years - Cambridge Analytica is just one.
You may have a look at all the most common fears and find how many of them could be connected to privacy at all, in any meaningful way. From irrational (clowns) to semi-rational (immigrants and robots) and rational (shootings) ones.
National Socialism is 'right' now? From what I can tell, despotic dictators will fly any flag if it gives them power. Painting this as a partisan issue, while comparing the modern right with 'literal Hitler', is just holding back rational discussion.
It wouldn't have happened either way; all of Turkey would have gone to Germany.
> political advertising can be tuned to your personal hot button issues
But it was always already the case with newspapers and TV news; you wouldn't target the same population with Fox News vs CNN or the Sun vs the Guardian. And people wouldn't have seen ads for the "other side", since they wouldn't have opened the other newspaper or switched to the other channel.
Probably true - the whole Cambridge campaign seemed to be built on outright lies.
> with newspapers and TV
That was always highly visible - so errors or breaches of process were instantly caught. Nor could it break down beyond Sun readers or Fox viewers - which covers an awful lot of ground ranging from centre ground moderates to extremists on the fringes. You end up catching a fraction.
Now you can pick off just those thought susceptible, or make false promises to those wanting and those not wanting $thing. Which starts to make a mockery of countries with funding or campaign restrictions.
It was always a fine game trying to promise all things to all voters, but remained within the bounds of legality and common sense. Now we seem firmly outside and we need a thorough refresh of oversight. Or I suppose we could believe Cambridge was a one-off.
For most Americans, the major privacy concerns are (or ought to be) around speeding and other more-or-less minor moving violations, illegal drug shipments, and copyright violations. For speeding etc, potential impacts are mainly fines, higher insurance rates, and potential license loss.
Illegal drug shipments for older people are mainly offshore pharmacies. And that's not heavily penalized, with confiscation and warning letters being the norm. Otherwise it's recreational drugs. And apparently that's also rarely prosecuted, except for dealers.
It also seems that copyright violations aren't being heavily prosecuted. I guess that the MPAA figured out that bad PR was worse than people pirating.
More generally, there's the issue of being gamed by well informed sellers. Basically paying more for stuff that they know we really want. And getting tricked into buying stuff that we don't really need. Also, electing candidates that we don't really want ;)
Unfortunately, if your "paranoia" successfully protects you, you can't prove that nothing bad happened because of it.
The world has a terrible habit of not counting bad things that should have happened but didn't, even in cases where we know with a fair degree of confidence that it should have gone worse.
Prime examples:
Y2K is often remembered as "Ha! Can you believe they ever thought that was going to be a real problem!"
It was predicted the Kuwait oil fires would burn for years and be a global environmental catastrophe. They were put out in six months.
We didn't celebrate this amazing success on par with the hand-wringing we had done about the expected outcome. No one danced in the streets at this glorious news that we averted disaster globally. Instead, it was a minor footnote in stories with more drama.
Yes, it seems most people are unwilling to give up a concrete benefit to avoid a hypothetical threat. I think the privacy threat will only seem concrete when a higher percentage of people are expressing concern over it. I have not found a short, compelling argument that works for everyone, or even most everyone. The best approach I know is to tailor an argument to the individual in order to present the privacy threat they might most care about. Even with a tailored argument, I would guess that less than 20% of nontechnical people are swayed by what I say. I take hope in the fact that there is evidence societal norms can be changed by a surprisingly small minority[1].
I don’t think this is accurate. People are often frightened of unlikely things so long as they feel they can do something about it. But when a threat seems incomprehensible and they feel powerless to do anything about it (at least partially because they do not understand it), they will shut it out and shut down rather than confront it, because it overwhelms their ability to cope with it emotionally and paralyzes them.
Incomprehensibility and powerlessness are overwhelming emotions. They flood the nervous system, so people shut them out in order to be able to continue to function on a daily basis. It’s the same reason why people continue to ignore climate change.
It’s not a rational problem, or a probabilistic one. It’s a problem of emotion. It’s a trauma problem.
If people are actually powerless to combat a threat, it's rational to shut it out. There's no sense worrying about things you can't control. The irrational response is to obsess about it and take mental energy away from things you can control.
(This is also similar to why we don't worry about an asteroid striking the earth, a pandemic wiping out humanity, or nuclear war. There is little that we personally can do, so why spend mental energy?)
Are you sure? It seems to me that most people just have more important stuff to worry about. Paying taxes, rent, buying food, health issues, personal safety etc. Privacy seems to be a luxury topic, and thus is more important to those that need not worry about the fundamentals.
> What's your most convincing argument for why an average person should care more?
Layman Argument 1:
Mark Zuckerberg, who has majority control of the largest source of private data in the world, bought the two plots of land next to him for privacy reasons.
Layman Argument 2:
Forward me all of your emails if privacy isn't an issue. crickets
The average person doesn't care about these because it doesn't affect them right now, but it's a "boil the frog" situation...you won't know you're getting boiled to death until it's too late.
> Mark Zuckerberg who has majority control of the largest source of private data in the world bought the two plots of land next to him for privacy reasons.
The President of the USA travels with a ~100-man security detail at all times; that does not mean the USA is unsafe to move around in and that you should stay at home with a gun in hand.
To be fair, the US president is said to be the single most powerful person in the world.
I don’t have any data to back it up, but I could see them being the no. 1 assassination target globally.
OTOH, a few years back I was walking past the former president of my small European country. Since he is rather charismatic, I asked if I could shake his hand and get a photo. He agreed and we did so; however, I learned a few minutes later from a friend lagging behind that a couple of random bystanders had twitched and totally focused on our interaction. Obviously personal security. So it’s not like the security isn’t there, we might just not see it.
I’m not sure about that counter-argument. POTUS is currently the most powerful position in the world, but between 1850 and 1915 (approximate dates), I think the British and possibly French empires were more powerful.
In this time period, three American presidents were assassinated. The UK lost one MP, who was not the PM, and apparently the killers only targeted him because he happened to be walking with their main target at the time. None of the UK royals were assassinated in that period.
I don't know for other heads of state, but the French president has a (small) service dedicated to his protection, the Groupe de sécurité de la présidence de la République (Security Group of the President of the Republic), whose name is a little more explicit than Secret Service.
Fun story: at the time of its creation, in 1983, under president Mitterrand, one (unofficial) mission was the protection of the illegitimate daughter of the president.
Sure, but what makes the analogy not hold? Just as there are more parties interested in harming a president, there are more parties interested in spying on Zuckerberg than you or I.
> Forward me all of your emails if privacy isn't an issue.
What do I get in return? Privacy is a trade off, and that's how it's sold. "We need more CCTV cameras to stop crime". Do you want more crime or more privacy? Depends on how bad the crime is, I guess. "We need to record your likes to show you more stuff you like". Is FB worth it? It is to many, apparently.
It's highly interesting that nobody would even dream of saying something like "apparently, most people on Earth think transsexuals are evil", or "do global warming and the increasing extinctions of species matter? not according to most, apparently". Why do people in "tech" outsource their judgement on this topic to people who aren't even in the room, a mythical "average person", every single time the issue comes up?
What was the breaking point for, say, sexual self-determination, when people started fighting for it and making headway? Once more than half of the people on the planet were for it? Or did Rosa Parks decide "now is the time, most people in the US support the end of segregation, now that it's only a formality, I won't change seats"? No, that's not how it works. You figure out what you consider to be right, and then you fight for it. You don't know the outcome, you know what you know to be right, or wrong. You get sick of something, and then you refuse to participate in it, because you'd rather die. And historically, it sometimes takes a small amount of dedicated people to change things for the better.
I haven't seen a person who is really strong-willed about not caring about their own privacy. It's not like most people who "don't care about privacy" are fighting against it, they're sleepwalking. They'll just as happily sleepwalk in another direction, so the fact that they don't care can't be an excuse, ever. Right now, it's easier, and less socially painful, to not care about privacy. That can be changed.
If people don't have experience with totalitarianism, and don't learn about it, and don't smell it in the wind today, then of course they will balk at any inconvenience to prevent it. But if it happens again on sufficient scale, it'll be a point of no return. There will be no more humanity; Earth will be an eternal torture chamber at worst, just empty of human agency at best, at any rate a boot stamping on a human face, forever. You would just need a way to give someone a really lucid dream, getting tortured for a few weeks in a dictatorship. Upon waking up, they will realize they need a few tools, privacy and human rights that apply to all being among them. People who don't have the empathy and imagination to have completed such thought experiments in their youth don't have any blessing to give, so I wouldn't even worry about ways to retrieve it from them.
> Why do people in "tech" outsource their judgement on this topic to people who aren't even in the room, a mythical "average person", every single time the issue comes up?
The same reason they use the "average person" as a straw man in arguments about OS usability and whatnot: deep down they believe that they are better than other people because they know tech stuff. People can sense this too, it's one of the reasons tech people, as a group, aren't well liked.
> Why do people in "tech" outsource their judgement on this topic to people who aren't even in the room
I'm not outsourcing my judgement, I don't use Facebook. But many people do, so they seem to view it as a net benefit, unless Facebook has literally started to strap people to their chairs and force them onto the app. You can argue that they are wrong and shouldn't value "connecting with friends and seeing memes" over privacy, but that's a different discussion.
You're conflating rights issues with a choice. You can choose to use Facebook just as you can choose to drink coffee, eat lots of sugar, or live a sedentary lifestyle. Sure, let's talk about free will, but that's completely unlike sexual self-determination, slavery, segregation, etc.
> You figure out what you consider to be right, and then you fight for it.
No, I don't. I figure it's right to not drink alcohol, but I don't fight so that nobody may drink alcohol. I just don't drink it, but it's fine if you do. Be their free will or not, I'm not forcing anybody to live by my standards.
The point that the people who argue against privacy are liars who will abuse fears to try to get people on their side was missing so far. Thanks for adding it.
> "We need to record your likes to show you more stuff you like"
> The point that the people who argue against privacy are liars which will abuse fears to try to get people on their side was missing so far. Thanks for adding it.
Do you honestly believe that CCTV is completely useless wrt crime prevention and detection?
Counter 2: I don't trust you doesn't mean I trust no one. I wouldn't give you 20k to watch either, but I would give that much to a bank without a second thought.
The point about number two is that you don't get to choose who gets the information. Privacy means you choose what information about you is shared and with whom. An invasion of privacy is taking that control away.
1. Mark Zuckerberg has a much larger risk of negative privacy consequences than the average person. Also, this was probably done more for physical safety than for privacy.
2. If all of people’s emails were getting forwarded to random other people they encounter on online forums like HN, then people would be upset.
The point is that people don’t get terribly upset at extremely small risks of minor negative consequences from privacy issues.
These counterarguments are pretty weak, and are essentially:
Oh, so you don’t worry about getting struck by lightning? Then why don’t you go out in a field holding a lightning rod during a lightning storm? crickets
> The point is that people don’t get terribly upset at extremely small risks of minor negative consequences from privacy issues.
No, they don't get terribly upset because they (1) don't understand the potential consequences well enough and (2) aren't affected until they are. We have a very clear mental model of what happens if we get struck by lightning (I go to the hospital and my skin gets charred), but we don't for data privacy.
Huh? Not sure what privacy policies have to do with this. I'm talking about the mental model for the consequences of giving your privacy up. The mental model for the getting hit by a lightning strike is pretty clear for most people...fire = physical hurt...giving privacy up? Not so much.
Even five year olds are taught not to talk to strangers and not to tell them anything.
But somehow there seems to be a cognitive dissonance, where people are more than happy to share absolutely everything to anyone, just because it's happening in comfort and privacy of their home, while typing into a device they own.
It's terrible advice to five year olds though. Most people are decent, nothing bad will come of talking to strangers in the overwhelming majority of cases (particularly if it's someone you approach rather than someone who approaches you). Whereas we've had children e.g. come close to dying of exposure because they'd been taught not to talk to strangers.
There is no convincing argument. The "I have nothing to hide" defense is one that can't be toppled until they experience something that illuminates it for them (something I haven't seen happen), argument/discussion is worthless. My family, friends, and co-workers (developers) all use this. They do not care, and those of us who do are, to them, weirdly concerned about something trivial in their mind.
This is why I think most people don't care about privacy issues. They may run into all kinds of situations where they are screwed over because of the data companies have collected from them. Their health insurance rates might go up, they might not get a call back from a potential employer, they might even have their identity stolen but they won't know any of that was caused by the data a company took from them that was eventually sold or leaked to others.
If they don't know when their data is used against them, or how, or by whom, it doesn't change the fact that they are worse off for it, but they don't have the ability to connect the dots and see the problem.
The best response to this is an overt question to do something that massively invades privacy. Ex., "Can I watch you have sex if you have nothing to hide?". The obvious and immediate answer is, "Of course not." which opens the door to a dialogue about why privacy is important regardless of whether or not you have something to hide.
Nah. That doesn't work. They just say "nobody is watching me shower/have sex/etc" and that pretty much kills it. Until someone actually watches them have sex and they know about it, they won't care.
There have been a lot of data breaches in companies. If a company has information about you there is a decent chance it will become public sometime in the future. Google knows who you are and everything you've searched for and almost every website you've looked at. Facebook knows who you are and every picture and profile that you've ever looked at. Someone could leak this information and then everyone will know what a horrid little pervert you are.
I'm not sure 5 year olds share the same fears as adults so think of it more as an explain like I'm 15.
If everyone's information is leaked, and it's obvious that everyone is a pervert in some way, maybe we won't need to pretend and lie about that anymore?
You can't think of any reason why someone might not want everyone they know to be aware of what they're searching for?
Do you think someone who's been sexually assaulted and searched for resources online would like their coworkers to find out about it? Do you think someone with HIV, or depression, wants everyone in the world to know? What if someone is in an abusive relationship and they reach out for help online and their abuser finds out before they can safely get out? What if you have some teenager who's homosexual or atheist in a conservative religious community and people find out through their search history - they could be killed, or cut off from their family.
I could go on and on. At some point in their life almost everyone is going to run into something that they need to educate themselves about and that they don't want the whole world to know about. What's your suggestion, that they go to a library and try to check out a book about it?
I know people who have felt shame at being asexual and deep relief that their spouse accepts them as such. I don’t understand that shame, but my non-comprehension doesn’t invalidate it.
Conversely, I know people who grew up hiding that they were bi, in one case because they grew up in the Middle East. That fear is much more comprehensible, as it could’ve literally killed them.
If I like midget porn, or have a foot fetish, why exactly should I have to suppress that? It's not illegal, and it's not immoral, but I'd still prefer my coworkers not be aware of the specifics of how and why I get my rocks off.
"I have nothing to hide" has got to be the single worst argument I have ever come across. If you think about the implications here for even 5 seconds you'd see what a terrible, shortsighted, untenable argument this is.
Freedom and Privacy are tightly coupled. One cannot truly be free if one's every action is monitored or manipulated. That is why the right to privacy was recognized by the US Supreme Court as inferred from the First, Fourth, and Fourteenth Amendments even though privacy was never mentioned in the Constitution.
Let's say your wife is looking for a job, but you plan on having kids some time in the future. That data is collected by either facebook, whatsapp, your mobile phone provider or something else. Or her data-collecting menstrual calendar is sending back info (yes, this actually exists).
Now, it's illegal to ask about that in an interview, but the boss might buy that data, and your wife will never get the job.
Let's say you are a supporter of a political movement that is not mainstream, all of that data can be used to exclude you from society one small step at a time.
Would Martin Luther King have enough privacy in 2019 to organize the civil rights movement? Would Gandhi have enough privacy to lead the Indian independence movement?
I hope the answer is still yes, but once the answer becomes no then everyone cares.
But would they need privacy, or could they use open communications instead? It's one thing when you know people's rights are being violated somewhere; it's another if you can see it live on youtube.
I think if anything current tech would make civil rights movements easier. My example is last year's revolution in Armenia. Police knew all of the initial group of protesters, and tapped all their phone calls, but by decentralizing, by using low-risk forms of protest (e.g. honking, blocking the streets by going back and forth on the crossing), by using constant live streams from everywhere, protesters managed to gain and demonstrate support of enough people that the old government had to give up.
I think there's another factor to consider: people recognize that you can't put the cat back in the bag. The modern world just makes it far too easy to track people, find people, spy on people, etc. and you can try regulating that away but the technology doesn't stop existing and, in fact, just keeps getting better and cheaper. In other words: it is a lost cause.
Your trove of personal data has monetary value. With a few pieces of info, your identity can be stolen. If a company gets hacked or an employee siphons data, it's sold and used, with you facing severe consequences and maybe them too.
Banks are heavily regulated to protect you from being abused by them. Ad companies not so much, and advertising isn't an occupation that attracts virtuous people.
Most banking regulations don't have to do with the bank abusing me, a person putting money in a savings account. They protect from: the economy failing (FDIC), random (non-bank) companies lying (SEC), and "bankers" using non-public information for personal gain in a way that is only harmful due to the zero-sum nature of trades (FTC).
Fiduciary duty is the only thing that comes close to what you might be describing, and well, my bank isn't my financial advisor, and they aren't investing my money for me. As far as I'm aware, they don't actually have a fiduciary duty toward me.
Trust is an emotion, I won't rationalize it for you. You can distract yourself and others with math and rules and sturdy construction, but there's always sidechannels.
I have ~$30K in gold and silver, in plastic boxes hidden under my bed. But you'd need to remove floorboards to access it. I bought for cash at swap meets, years ago.
Twice now, you've over-interpreted something I've said, and this one's particularly reductive. No, my argument is that security is theatrical. What makes you think that my mattress is safer than a bank?
> What makes you think that my mattress is safer than a bank?
Nothing, hence my confusion.
> No, my argument is that security is theatrical.
Claiming this in a general sense is a strong claim. Are you suggesting that end-to-end encryption is theatrical? Or by sidechannels do you mean that anyone can come along and beat you with a hose until you reveal your password? You're being very coy, speaking only in vague generalities, and that makes it difficult to not overinterpret what you're saying.
In fact you seem self-contradictory, saying that security is theatrical yet appearing to claim a bank is safer than a mattress. Which is it?
Originally, the question was ELI5 regarding the risks involved with privacy. I explained something that one could be concerned about. You attempted to flip that into a judgement regarding banks vs non-banks, and swerved to banks vs mattresses. I'm not here to provide financial advice, but here you are, still trying to pin down an opinion unrelated to my original reply. You're concerned with literal cash money, I'm concerned about personal data.
But since you ask. A fireproof safe is significantly more prudent than an ordinary mattress. And if I had such a thing, I wouldn't be bragging on the internet about where I keep it. Happy?
The point, that you appear to have missed, is that the argument
> Your trove of personal data has monetary value. With a few pieces of info, your identity can be stolen. If a company gets hacked or an employee siphons data, it's sold and used, with you facing severe consequences and maybe them too.
is exactly the same as
> Your trove of actual money has monetary value. With a few tricks, your money can be stolen. If a bank gets robbed or an employee embezzles, it's stolen and disappears, with you facing severe consequences and maybe them too.
Yet we don't appear to have any issues with putting our money in banks.
> You're concerned with literal cash money, I'm concerned about personal data.
No, I'm concerned with things of equivalent value. If I have $1,000,000 in cash, I'm happy to entrust it to a bank. I doubt my personal information is worth that much, yet you appear to suggest that I shouldn't feel comfortable giving a group something much less valuable. In other words, your ELI5 about privacy risks isn't compelling, except to you, who is already concerned about privacy. But you're not the intended audience. The person who isn't concerned about privacy is, and you still haven't done anything to convince them, because banks.
Here's one argument against what I've said, as an example: information is not like money. It's very difficult, and often impossible, to "take it back". So we should bias away from sharing it in the first place.
> The point, that you appear to have missed, is that the argument
Excuse me, but your approach to this conversation has been rather uncharitable. The assumption, that you omitted from earlier replies, is
>> Your trove of personal data has monetary value...
> is exactly the same as
>> Your trove of actual money has monetary value...
Which is factually incorrect, which is why I was unable to read your mind in this particular regard. Rather than present your concerns with an omission in my answer in a clear and constructive manner, you challenged me to defend a position that I have zero interest in taking. The latter half of your response here is clearer, so thank you for taking the time to explain your concerns.
> The person who isn't concerned about privacy is, and you still haven't done anything to convince them, because banks.
That's the conclusion you've jumped to. Next time, this could be rephrased without the implication that I'm an idiot for not jumping to the same conclusion. A more generous approach to this conversation could look like, "Can you explain how data stored by an online company differs from money held by a bank? I don't think your argument adequately differentiates the two."
As you note, "information is not like money." What I presented was a process by which the monetary value of information can induce people to steal it. As you note, money is fungible, and data is extremely personal -- so these assets are not exactly the same.
With regards to banks. A bank robber does not have the power to empty your account -- only the bank's cash on hand. This isn't the wild west, banks are insured, and cash on hand is an insignificant fraction of a bank's holdings. If an employee embezzles, there is a remote chance that your personal account will be targeted. However, banks are regulated and these actions will be traceable and correctable -- you might be broke for a few months, but you'll almost certainly get your money back. As we have both mentioned, banks are not without risk -- but there is significant infrastructure in place to mitigate that risk. For instance: if you have $1m in cash, you might want to proceed with extra diligence when entrusting it to a bank -- FDIC only insures deposits up to $250k.
As you note, there's no take-backs when your data is stolen or scammed. There's a possibility that your livelihood will be ruined, and that loss can actually exceed $1m. Your identity is not insured in the same manner as bank deposits.
Companies which peddle personal data have almost zero regulation -- the actions of employees and hackers may be traceable in some cases, so they might face punishment. But that punishment will not remediate the damages you'll face -- and if they are caught, you can try to sue them, but you likely won't receive even a fraction of the damages.
> Ok, now explain why I should trust a bank with my money, but not $any-internet-company with my personal info.
Instead of a good faith response, like, for example "money and personal information differ, here's why", you said
> Trust is an emotion, I won't rationalize it for you. You can distract yourself and others with math and rules and sturdy construction, but there's always sidechannels.
Which is mostly meaningless to me, and I'm quite confident completely meaningless to someone who isn't familiar with security.
> What's your most convincing argument for why an average person should care more? Explain it to me like I'm 5.
The data leak of HIV patients' identities which happened in the UK a couple of years ago. I think you'd struggle to find anyone who thought "Oh, that's not a problem - no one really needs privacy". I think most people also understand that there should be rules general enough to cover all situations like that.
The challenge in my view isn't explaining why privacy is good and needed. It's convincing them that their privacy is actually at risk.
I would also add that some of the most compelling rationale, at least in the US, would likely cite oppressive regimes that seem distant geographically and historically.
That's the clue to answering the follow-up question of "why would anyone?" (not famous, not exactly model material etc., wouldn't make any money out of it). Memes; for the memes, mum/sister/aunt/sister-in-law. In the olden days, finding your friend's mum on the guesshermuff blog was all the rage.
I think the average person cares about privacy either in a localised context or a pathological context.
They don't want their neighbours to see inside their house, they don't want all their friends to know what some of their friends know about them, and they might not want their spouses to know what sort of porn they watch. Simple enough. They also understand they don't want to live under DDR-era surveillance by a central government, one that is obviously a bad and violent actor.
But tracking of data on the internet and refining personal information to build a profile, an identity is sort of impersonal. It's not local or pathological. There is no such context for privacy (or the lack of it).
The person doesn't have a relationship towards these unknown surveilling and tracking parties and, conversely, they really don't know these people in the social sense. People might realise that their social media posts, shopping habits, and browsing habits might be collected under a profile somewhere, but for the most part that's just a bunch of data somewhere: there's no evidence that this profile would change the behaviour of other people who are relevant in the person's life.
Being a programmer, I have to admit I don't know myself how bad things could turn, for instance, in my own case. For the most part, even I don't see any immediate threats with Google or Facebook having some bits of my personal life stored online or Amazon having a good history of what sort of things I've so far needed. What keeps me from being liberal in sharing stuff is merely my nerd instinct: I just know I don't want to expose too much surface to the outside unnecessarily. Hence incognito windows, Firefox containers, ad blockers, uBlock, and other stuff. But fifteen years of Google and ten years of Facebook have not really had any unwanted influence in my real (i.e. physical) life. I also grew up in an internet where pseudonyms were the norm so it's always an exception to give a service my real name.
Data breach and identity theft? The risk is fairly low and the latter happens even without breaches into online personal data.
Custom pricing? I hunt for prices in private windows so that I'm less likely to trigger an interest for an item, and potentially higher prices, in the browser that has links to my personal profile.
Sensitive information? I don't think my blood test results or list of surgical operations would stand out much: surely I have things in that area I don't want all my friends or relatives to know but worrying about that would have to mean the information I have stored on Google and some medical providers' profiles would have to become publicly accessible and my friends or relatives would have to know to look for them and then identify my records to find out. That's a long bridge to be built, I don't think it's a particularly realistic worry.
Do I have something to hide? As much as I hate that argument, and while there surely are things I don't advertise and my friends don't know about, they could just ask and I'd probably tell them. A stranger sitting next to me and telling me something about my life I thought only I knew is both unexpected and a bit hypothetical. The privacy would now have become local, and I'd probably be more interested in the social context, i.e. why the stranger would want to approach me like that and what the price would be. On the other hand, if all he has is information, then all he can do is reveal it. I don't think any of that information would be detrimental. Uncomfortable, maybe, but not the end of the world. And I'll be dead anyway in fifty years.
Connectivity surveillance? I'm not particularly worried that my ISP or government will see who I've called. They aren't interested in me personally. My friends might be, but the chance of them getting access to that information is slim. Again, I don't like the idea of logging connectivity, but I personally don't see the downsides either.
Most of the stuff online is also quite transient. The value of me buying a particular tool ten years ago is only so much. I'll block ads anyway.
I am perhaps an average person in that sense. I know there's a strong sense of "not right" in collecting so much information, and as a programmer I know how data can be massaged into information but I can't see credible enough threats on a personal level in that area to change much of even my own behaviour.
I'm not surprised that average people aren't all freaked out about tracking and online privacy issues.
The Holocaust was in many ways enabled by mass surveillance. How about we start there. It was done mostly manually, but even something as trivial as census data from an occupied country likely killed an unimaginable number of people. Then of course you had the Eastern Bloc, where the authorities would've been wetting their pants at the surveillance capability of a modern private individual, let alone a corp or government. These are not theoretical questions; we've played this game before, and now we have really big toys.
I hate the "Holocaust census data" argument because presented like this, it can be used to demonize pretty much anything. Yes, Nazis used census data because it was available. Were it not, they would've used something else. When a band of monsters with resources of a country wants to get you, they will get you. You have to consider the possibility of it, but in times of peace, it's no way for living. Census data isn't collected in anticipation of future monsters in power - it's collected to provide immediate benefit to citizens.
The right answer is not "this dataset shouldn't exist" - it's "what benefits does this data give us, and do those benefits outweigh the risk of abuse?".
The ELI5 I always tend to fall back on is the example of the Dutch Jewry. The tale goes that The Netherlands had a beautiful list of all its citizens, their names, addresses and their religious statuses neatly tucked away in a locked filing cabinet behind a locked door in the town hall.
Come 1940 the Nazis had a beautiful "leaked database" to use as a reference.
Well, nowadays we have better tech for that: you can simply "infer" someone's preferences/values from their social graph/spending graph (and trivially).
This is easily obtainable information for any government agency with regulatory oversight (DEA/ICE in the US) if the appropriate presidential order goes through.
I'm not sure a five-year-old child would even understand what or who a Jew is, or where the Netherlands is (the ELI5 meme needs to die a fiery death), but otherwise, a very good example.
Example of what? Is everything Nazis used supposed to be banned? Weapons, food, water, computers, trains?
Doesn’t seem like “Nazis used it, therefore it’s bad” is a valid argument. I agree that the government having a list of personal preferences for no reason is bad, but census data is useful for other reasons, and not having it isn’t going to stop other committed groups of people from doing bad things.
I agree that it's shocking to see how indifferent people are, but I think an even bigger issue is the detriment a lack of privacy might have for mental health. For a lot of people (myself included), there's a visceral repulsion to having other people know things about you that you didn't consent to sharing. It's even worse when there are multiple other entities profiting from that.
There's also the fact that so many people who have grown up posting anything and everything don't have an easy way to easily remove things that they've posted. While there's a lot of interesting stuff on the right to be forgotten in the EU, I think we're really lagging behind in the US. So far, only California is making some forward strides. [0]
the problem is that it is hard to figure out what to do about it.
i can't avoid public cameras.
should i run around with a full-face mask? (illegal in austria).
refuse to go outside? (not healthy, and hardly possible for most without risking your job)
move to the countryside? (away from the crowd?)
seriously? what can the average person do against all this privacy invasion except for burying their head in the sand?
the only viable action is education. as you say, talk about the problem, spread awareness, until the majority of the population is better informed. and even that looks futile (though i believe it isn't) everything else is hopeless.
No, the "problem" is that it's not even an issue. If the average person wanted to avoid being recorded in public, then we would find ways to. But there's no point "spreading awareness" if you don't have a convincing explanation for what exactly the actual problem is.
Here we have some of the brightest and most privacy-aware people in the world trying to come up with ways to convince family and friends that privacy is important. And still, I can't see a single compelling argument that would actually work or even make my friends and family think twice about it.
The only thing that will make politicians sit up and take notice of privacy issues is for them to experience first-hand what loss of control of their private data does to someone's life - identity theft, poor decisions or benign mistakes catching up with you/being re-interpreted, merciless judgment from others, relentless targeted advertising... Especially since politicians are notoriously cagey about their private lives, this sort of experience would be quite an eye-opener.
"I continue to be amazed by the level of nearly complete indifference that the general populace still has towards privacy."
Where should it fit on their list of concerns? Let's say the average person is struggling to fit their current life into their day, to add their aspirations and bucket lists. Then they have concerns or arguments about the political landscape or the climate or they're worried about job security. Or they're worried about immigration one way or another. Or about security of themselves and their property. About the future for their children. Personal health concerns. When are they going to have time to go through a seemingly distant issue which isn't impacting them right now and doesn't really have an analog in basic instincts?
> I don't think the politicians are going to do much of consequence until their constituents think it is important.
My fear is that most politicians don't often seem to do much about any given issue even when their constituents do think it is important. I would guess that when there isn't much campaign financing/donor money behind any particular issue, even if it has popular support, then most politicians feel relatively safe ignoring it.
Certainly some politicians care about average voters' issues. A long time ago, I worked for one who did. But most seem like they don't. Most seem like they care mainly about their own pet issues, or merely getting re-elected as many times as they can.
Or I am just getting more cynical as I get older. I guess that could be as well.
I totally agree. I think part of the problem is finding a specific case that applies to as many people as possible (a common denominator), but also elicits an extreme visceral reaction so that it easy to empathize with.
There was a comment that I read here on HN about privacy being something like "everyone knows what you do in the bathroom, but you still don't want anyone to see", but I can't find it using https://hn.algolia.com/?query=&sort=byPopularity&prefix&page... . Maybe it was reddit?
People are concerned. They just don't understand what's what. This is not an easy issue to grok.
A friend is writing a book about privacy, surveillance. I had done some policy work (voting, medical records). He solicited my input. I shared my conclusions. What I was saying didn't match his preconceived notions, so he blew me off.
Some time later, friend contacts me, apologizes.
Him: "You tried to tell me, I just didn't get it yet."
Me: "Yup. Me neither at first. This issue is an ass kicker."
Today my wife did a search and found her info on a site called mylife<dot com>. She was seriously creeped out.
I did a little research into that site. I hope they scrape more data on everybody, so that when the public finally becomes aware of it, it'll bring some sane regulations to all this.
That reminds me of this article where authorities deemed it OK to go through people's trash... so journalists went through those same authorities' trash:
That's just because tea drinking is currently socially acceptable. As examples, in a western context: what if it wasn't tea bags but used condoms? In a middle-eastern context: empty beer bottles?
Worse, you can't predict what will happen to society and its rules, up to the point of "normal" stuff becoming illegal. Take as an example the (new) laws on pornography in the UK. What if you have a minor under 18 in your house and your trash bags are full of porn mags?
David Brin's answer: privacy is lost, but we should make sure that it is lost by all equally. We must eliminate all avenues that grant privacy to any class that is otherwise favored.
Is it personal if it can be inferred? Your water use, electricity use, internet use, and vehicle use all rely on infrastructure belonging to a second party, so the information from them is not seen by you alone.
My impulsive response is that any data from which identity can be inferred is PII.
I'm not smart enough to understand the maths of differential privacy. I gather that it's a calculus for determining how much to fuzz the data points to anonymize them. So car rentals may need to decrease the accuracy of both timestamps and locations to create hash collisions with other rentals.
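As a rough illustration of the fuzzing idea, here is a minimal sketch of the Laplace mechanism, a standard building block of differential privacy (the epsilon and sensitivity values here are illustrative, not a recommendation):

```python
import random

def laplace_noise(scale: float) -> float:
    """Laplace(0, scale) noise, drawn as the difference of two
    independent exponential samples with mean `scale`."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(true_count: int, sensitivity: float = 1.0, epsilon: float = 0.5) -> float:
    """Release a count with epsilon-differential privacy.

    Adding or removing one person changes a count by at most
    `sensitivity`, so Laplace noise with scale sensitivity/epsilon
    hides any single individual's presence in the data.
    """
    return true_count + laplace_noise(sensitivity / epsilon)

# e.g. publish how many rentals started in a given hour
noisy = dp_count(true_count=42)
```

More noise (a smaller epsilon) means stronger privacy but less accurate published numbers, which is exactly the accuracy trade-off described above.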
--
Maybe there's a way to create logical "data diodes", for lack of a better term, so that data flows and relationships go only one way. So for electricity, each meter has a GUID which is never referenced directly: any external reference (foreign key) uses an opaque identifier which can only be dereferenced within the system. Then anything referencing the meter is issued its own opaque identifier, so that no two references can be linked from outside the system.
I'll have to dig out my copy of Translucent Databases to see if I'm making this up or repeating something that author had already thought through.
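The "data diode" idea could be sketched like this (all names hypothetical): each external consumer of meter data gets its own opaque token derived with a keyed hash, so two consumers can't link their tokens to each other and nobody outside the system can recover the meter's GUID. In practice the system would also keep an internal token-to-GUID table so that it alone can dereference tokens.

```python
import hmac
import hashlib
import uuid

SECRET_KEY = b"internal-system-key"  # hypothetical; never leaves the system

def opaque_id(meter_guid: uuid.UUID, consumer: str) -> str:
    """Issue a per-consumer opaque identifier for a meter.

    The mapping runs one way: without SECRET_KEY an outsider can
    neither recover the GUID nor tell whether two consumers'
    tokens refer to the same meter.
    """
    msg = consumer.encode() + b"|" + meter_guid.bytes
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

meter = uuid.UUID("12345678-1234-5678-1234-567812345678")
billing_token = opaque_id(meter, "billing")
grid_token = opaque_id(meter, "grid-analytics")
```

Because the tokens are keyed, an outside party holding both `billing_token` and `grid_token` can't tell they refer to the same meter, which is the one-way property the comment describes.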
You have a point but somehow that just makes you the bad one. Perhaps if you exposed everyone's data but then they call it malware/virus/hacker group/rogue nation/terrorist.
All that happens is scary privacy themed movies get made.
Someone needs to reinvent services with privacy in mind and wait for the next generation.
"And until lawmakers, corporate leaders and citizens embrace obscurity and move to protect it, your freedom and opportunities to flourish will be in jeopardy."
True! Look no further than NYT's initiatives to collect and sell their readers' emotional engagement with their content.[0] From their very own publisher:
"People have little transparency into what is being gathered, where it’s being shared and how it’s being used — to follow their movements, to charge them more for health insurance or to manipulate them with political messages — and even less agency to do anything about it." Mr. Sulzberger, publisher of The New York Times [1]
I don't mean to detract from the essay, but I'm pretty skeptical of NYT's "Privacy Project". NYT is the only publication that I read frequently that aggressively tries to prevent me from browsing in private mode with Firefox Focus and I wouldn't be surprised if they collect and sell the most reader information out of all major US publishers. I can only posit that though because they don't disclose any of that information. NYT's attempts at "data transparency"[1] are just excuses justifying the sale of reader information instead of answers regarding what information they collect, what information they sell, and who they sell this information to.
How long until NYT writers start facing pressure to produce stories that will maximize "emotional engagement", even if they're divisive and inflammatory? How long until emotional engagement is inadvertently used to discriminate against certain groups based on their reactions to controversial topics? These outcomes are an inevitability when profit motives are involved because humans are incapable of exercising restraint and saying "That's enough information, we don't need to collect any more or expand our current profit levels." NYT is repeating the same mistakes they're currently criticizing tech companies for and, as usual, the individual will suffer.
About the only way to get actual privacy is: individuals valuing privacy, individuals having the tools to protect their privacy, and states not actively removing those tools (which can mostly be prevented by protest and refusal).
This isn't to say I think this will happen. Every indication is that you won't see individuals caring about privacy. Every day the attitude that there's something insincere about not being online with your real name seems to get more traction, along with the belief that "sharing your life with your friends" can somehow now be compatible with privacy.
It's just that individual is about the only way this could happen. Because no institution wants to do more than protect your privacy from everyone else.
I understand your point and I agree that individuals need to value privacy, but I also think it's important to recognize that the average individual on HN is much more technologically literate than most. I would posit that most people do care about their privacy, but that most people aren't aware how exposed they really are. You could make the argument that being aware about what information you're exposing is the threshold for caring about your privacy, but the reality is that it's impossible to cover all of your bases and tech companies, and now NYT, are fully exploiting that to their advantage. The average individual isn't aware of how much specific information is collected when they use a credit card, when they connect to a cellular network, when they open a webpage with trackers, etc. Hell, Mr. Sulzberger in [1] admitted himself that NYT isn't even fully aware of what happens to the reader information they sell and whether it ends up in responsible hands.
It's exhausting and impossible to care about privacy today. You can care about privacy today and still end up with all of the same information exposed as someone who couldn't care less. The responsibility doesn't fall on the individual, it falls on the companies that continuously scrape every piece of metadata possible. That's why I can't take NYT's Privacy Project seriously, because in [1] they justify their data collection methods by essentially saying "Well, everyone else is doing it, too! Look at these other publications and companies that also collect data!" instead of taking the opportunity to be transparent and setting a standard. It's disappointing and makes me question the motives behind the project.
I would posit that most people do care about their privacy, but that most people aren't aware how exposed they really are.
Indeed, well. The average person cares about privacy but they don't understand privacy. I mean, in various ways I probably don't fully understand privacy either.
Which is to say, on reflection, that privacy and security are fundamentally intertwined, just as all the various points of entry for the bad guys are intertwined.
And the thing with security is that it isn't a feature or an add-on. In an organization, everyone has to care or no one can. And with individuals, the same. A given individual can't expect AV alone to protect their security, and they can't expect a website by itself to protect their privacy.
Where, as I said above, I'm not optimistic on all these concerns.
I did submit a variant of the comment (to fit their character requirements), but I don't think it got approved because quite a few new comments have been posted since then. Perhaps they have a particularly long backlog, though; it's only been about two hours so I don't want to jump to any conclusions yet.
Edit: Given the large stream of comments that have now been posted on the article I think I can safely conclude mine didn't pass the manual approval process. So much for "welcoming on-topic commentary, criticism and expertise."
The business side of a big news organisation is slow and sprawling, and intentionally separated from the editorial side. It’s a good thing that editorial teams are willing to question and debate the ethics of common business practices even when that means criticising their own respective corp/product/tech departments. Progress has to start somewhere.
They are not “collecting” emotional reactions. Reading comprehension: they used a training set to predict emotions, and sell ads based solely on content.
Seisint, ChoicePoint, and others were tracking everyone in North America by at least 2005, with partial coverage of Central America, South America, and the Caribbean. Just from public records, which at the time amounted to ~1600 data feeds. Seisint's salesperson told us the NSA bought one of their clusters and included their own data feeds (communications, transactions).
I assume by now multiple parties, both private and governmental, know everything about everyone, living and dead, in near real time.
And that's the reason we need the EU to make more laws like GDPR and the "right to be forgotten".
The US is controlled by big business, so there is no realistic hope that the US will make laws to protect our privacy from corporations, but the EU works for its citizens.
Data protection laws need to mandate this sort of ephemerality in collected data: data is destroyed by law after a certain period of time. Otherwise we're a step away from a total surveillance state where everything you do in any major city outside of your house is on record for eternity.
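The mandated-deletion idea amounts to a periodic retention purge. A minimal sketch in Python, assuming a hypothetical `collected_data` table with UTC ISO-8601 timestamps and an arbitrary two-year window (both the table name and the window are made up for illustration):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 2 * 365  # hypothetical two-year retention window


def purge_expired(conn: sqlite3.Connection) -> int:
    """Delete every record older than the retention window; return the count."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    # ISO-8601 UTC timestamps compare correctly as strings, so a plain
    # string comparison against the cutoff is enough here.
    cur = conn.execute(
        "DELETE FROM collected_data WHERE collected_at < ?",
        (cutoff.isoformat(),),
    )
    conn.commit()
    return cur.rowcount


# Demo with an in-memory database: one expired record, one recent one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE collected_data (id INTEGER, collected_at TEXT)")
old = (datetime.now(timezone.utc) - timedelta(days=3 * 365)).isoformat()
new = datetime.now(timezone.utc).isoformat()
conn.execute("INSERT INTO collected_data VALUES (1, ?)", (old,))
conn.execute("INSERT INTO collected_data VALUES (2, ?)", (new,))
print(purge_expired(conn))  # prints 1
```

In practice the hard part isn't the deletion job, it's making the law require that it runs and that no copy survives in backups or downstream resellers.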
Here Louis Rossmann talks about how Google has been recording, for years, all the audio he produced using his Android phone's voice-to-text feature, and how all of it is on record under his account.
So in terms of storage: if Google is storing that level of data, then yeah, we have the ability to store everything, forever. If that's what we as a society want to do.
I guess I'd argue that we as a society don't really want to do that, and if we as a society paid attention to privacy/security more then we'd force the companies doing this to stop. Or we'd force our government to pass laws that would stop it. But we as a society only have a limited number of hours in our day, and most of society just doesn't care enough to apply pressure where it would make changes. I think most people just don't see why it matters.
Those things are happening though. The GDPR has only recently come in and the average person is starting to realize how data collection affects them in ways like manipulating elections and data leaks. Until recently these privacy intrusions have only been a theoretical risk and most people don't care about theoretical risks until they become real risks.
This is not an expectation of privacy. This is a negative liberty from constant surveillance in public. This is something that has implicitly existed forever. Perhaps it needs to be explicitly stated now. And here I'm not even advocating for that freedom, but simply for that freedom to be ephemeral in retrospect: say, after 2-10 years the data is destroyed.
Hoarding is keeping worthless stuff without a plan for what to do with it. Google, OTOH, has a solid plan for what to do with this voice data. (Use it to train better models that do a better job of recognizing your voice.)
Future historians don't care about who was in line at the bakery this morning, etc. Data that should legitimately be obscure is about personal, small-scale everyday actions that could be identifying given a small sample. I'm the first one to fall into data hoarding, but we need to repeatedly acknowledge that most of the things we do are not worth recording, and that actively not recording them (or not making them easily accessible and standardized) does have a lot of value in terms of social structure.
>Future historians don't care about who was in line in the bakery this morning etc.
They do when it turns out that a serial killer was in that line. Tracking exactly what that killer did leading up to the killing spree would be very interesting to them.
But it's very hard to track 1 person retroactively. Unless, of course, there's a nation-wide surveillance system in place.
I'm not advocating for it. I'm just saying that the parent is correct in that historians would love this information.
We will be lucky to have any future historians at all.
We're living in the middle of a dark age anyway. All our precious digital media are extremely fragile both in their capacity to be preserved, and in future generations' ability to decode them even if they are preserved. If you want historians to see something, write it on a physical object, in multiple languages.
It's hard to imagine a scenario where future historians can't even translate 2019 English. That's either an apocalypse layered over an apocalypse, or a length of time beyond any human comprehension - tens of thousands of years - of dark ages. You need to be carving in stone or baked clay in the desert to hit time spans that long.
That's fair, English is probably fine. But you'd be hard-pressed to convince me all our massive data centers are going to survive. I don't think that will even require a civilizational collapse, just a company going out of business. Imagine what a full-scale depression would do. And the drive to push everything into the cloud has only gotten started. Just look what happened with Tumblr banning adult content and blocking people from making archives. It's already getting difficult to find hardware that can read old media, and the attacks on public library funding don't help either. We're building a Tower of Babel and storing all our knowledge inside.
Yeah, but data is like... we could lose 99% of it and be just fine. From the remaining 1%, the most important tech/science knowledge and a fair chunk of culture could be interpolated.
I have a huge treasure trove of my art, music, and writing from my childhood stored in StuffIt archives on a Jaz drive. How much will you charge to retrieve them for me?
Now that privacy is gone, I'm surprised that we haven't seen this technology used to unfortunately tarnish people's reputations. I don't want to quite spell it out.
I fear that scandals could retroactively break out: you're no longer safe from your past. Old public recordings could be used to frame nearly anyone in an unfavorable light.
Last year one of my wife's uncles had a DNA test done, and he turned out to have a sibling match with someone he had never met (living in a different country). Turns out that her grandfather had a child from a relationship with someone he met before her grandmother, and none of the family had ever known.
I really think at this point that it would be more effective for "privacy advocates" to switch gears. Trying to end the war on drugs, legalize prostitution, make the culture more permissive, etc. actually has a chance of succeeding while privacy has only been losing ground for decades at this point.
Miniaturized, cheap, low-power electronics and ubiquitous wireless Internet, plus modern image and audio processing/recognition, are the end of it, really. It's over. The war's lost as long as those exist, and good luck getting rid of them. Cell phones and giving all our data to third parties who can track us without restriction are making it much worse, but the other tools are there now. We built the ultimate totalitarian toolkit, and a massive disinformation machine that leaves us even less certain what's "grassroots" and what's enemy action than we ever have been.
We split the privacy atom and no non-proliferation treaty equivalent is anywhere near the Overton window, nor likely to be any time soon.
Online, at least. In meatspace, not so much. So you just gotta stay present to that fact that you're under observation at all times. Potentially, at least.
The author may know of a great deal of harm caused, but so far I remain about ten pole lengths away from panic over my privacy. My Life did not seem to have much, and I have found more on myself in the past by simply googling. Yeah, I don’t like how easy it is for people to see my mostly measly political donations, but only a stalker would give a darn about the facade of my home. Seems like it is more likely we need to fear identity theft than a lack of obscurity. Here’s an idea, if you want more privacy/obscurity, refrain from social media or using your real name in comments sections.
Initially Barack Obama was in favor of homosexual rights, but not in favor of altering the institution of marriage. Scruton continues on, "within a year he was saying I am in favor of gay marriage and within another year he was joining forces with those who say that those who are against gay marriage are homophobes."
"So something which had been judged perfectly legitimate to oppose suddenly becomes the criterion of moral merit. And I think we've seen this in all sorts of issues that we've gone through in recent years. All of us are within one tiny step of being demonized."
What all of us are witnessing and living through right now is data in its infancy. There is no one to nurture it and help it develop and grow. Only us - to learn as it grows. To change direction on its growth. I wish I would be alive when it truly becomes what it becomes. My grandkids will likely not even be alive. And I don’t have grandkids yet.