I probably should keep my mouth shut, but after seeing so many posts about how great Path is for giving such a genuine and heartfelt "mea culpa", I can't help it. A friend of mine did some work for an older version of Path.com which included an installable desktop client. One of the key features in the spec was the ability to grab the user's entire address book without ever letting them know what was happening (e.g. no alerts or confirmation). This behavior wasn't a mistake or an oversight; it was completely intentional from the beginning. Of course my friend thought this was a bit shady, but the truth is that shady tactics are used all the time in the software industry for one reason: they make money.
When Path states they didn't realize users would feel deceived and that they only intended to use the information to make better suggestions for the user's contact list, well, I don't want to sound cynical, but I think anyone who blindly believes these kinds of statements (from Path, Facebook, or any other company) is either personally/financially interested or extremely naive.
It's very clear it has all been with the intention of deception.
Gawker's Ryan Tate reports that he specifically confronted CEO Dave Morin about the address-stealing behavior last year, and Morin officially denied that they were doing it.
The update section in that link makes it even worse. Mr. Morin is stating that technically he was right, because Path.com wasn't collecting data at the time he was interviewed. Which means this was a calculated move to add this "feature" at the behest of something (a VC, or maybe an itch to grow the feature set?). A face-saving gesture at that point might have been a blog post detailing the change in privacy policy/ToS. An apology now means little to me.
You attribute malice where there probably is none.
I reckon the more likely truth is that the Path folk genuinely saw nothing wrong with what they were doing. There are a group of people (call them the Facebook crowd, if you want) who think that excessive privacy is unnecessary, extending that thought to sharing people's address books.
> You attribute malice where there probably is none.
No one ever says "I'll be evil". Instead, they come up with ways to justify the evil that they do.
The only reason for giving someone who "meant well" a pass is if it's likely that the bad outcome was fairly unexpected. If, as is usually the case, the bad outcome was likely, they should be held accountable for intending it, just as we do with drunk drivers. No, "meaning well" isn't an excuse for ignoring reality.
Remember, most of the world's horrors are caused by folks who claim that they're trying to do good.
There is no ill will in his assessment. There is no way around it: Path is just like any other company. They are in business to make money. No different from Google. No different from Facebook. No different from Apple. How they go about making money, and the moral choices they make along the way, vary greatly. Even if they genuinely didn't think what they did was wrong, that doesn't make it right. They knew perfectly well what they were doing. Just like management at Sanlu knew what they were doing when they added melamine to the baby milk formula. The decisions were made with the bottom line in mind.
The previous replies do a good job of addressing your post, so I won't repeat what's already been said.
The only thing I want to point out is that in the specification I mentioned in my original post, the requirement was to circumvent the operating system's built-in warning to the user when their contacts were being accessed. Why go through the trouble of doing this except to consciously deprive the user of the ability to say no? iOS doesn't have such a warning (and therefore I don't think the current issue is as egregious as that one), but I think a lot of people are starting to wish it did.
I really hope the furor over this causes Apple to require permission to access the address book. I understand Apple is trying to keep things simple, but this is, IMO, as important as location information.
Asking for permission to access the address book won't help, though. People will just go right on through--it's a social network, of course it needs your contacts, so on and so forth.
The intractable problem is there's no way to verify what's being done with that data once it's accessed.
I hope the furor over this causes other companies to pay more attention to their users' data and privacy. More importantly, to treat their users as people instead of end products.
> "The answer isn't for ... the company ... to prove that they can be trusted; the answer is to ensure that their customers don't need to trust them. ... The best way to avoid privacy breaches is not to formulate a detailed privacy policy; it's to reduce your capabilities so that you're unable to violate anyone's privacy."
Just think of what target companies like this will make for hackers. It's an accident waiting to happen.
It'll be a shitstorm when a large set of address books gets leaked to the internet (à la AnonOps, etc.).
(Though it would be a lot of fun to run some graph-theoretic metrics on the dataset (closeness, centrality, etc.). I've often lusted over getting an anonymized version of the Facebook graph (32-bit ID for each person, assume average of 100 friends, 700 million users, gives a total size of about 300 GB uncompressed), but a leak of a couple million address books now seems not far-fetched.)
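For what it's worth, the parent's size estimate checks out; a quick back-of-the-envelope sketch (all numbers are the parent's assumptions, not real measurements):

```python
# Estimate the size of an anonymized social graph stored as an edge
# list of 32-bit user IDs (two IDs per undirected edge).
users = 700_000_000
avg_friends = 100
bytes_per_id = 4

# Each friendship involves two people, so halve the endpoint count
# to count each undirected edge exactly once.
edges = users * avg_friends // 2
size_bytes = edges * 2 * bytes_per_id

print(size_bytes / 10**9)  # 280.0 GB -- roughly the "300 GB" quoted
```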
what worries me, and is related to the issues described in that article, is that the various examples i've seen don't seem to be secure. the path loading was via https/tls but seemed to be (from the description given at the time) vulnerable to a simple mitm attack. that means that the server certificate was not being validated correctly. another example (can't remember the company) was using http.
in both cases, a clued-up governmental agency could read the data. as the article says, this can be a big fucking deal.
tldr: if you're going to abuse people's privacy, at least do it right.
[it's possible that the tls case involved loading a new trusted certificate onto the phone before the attack; i did check for that when i read the description and couldn't find any mention, but if that's the case then the loading would be secure - although even then, it might have been possible for path to hard code details of which ca they trust.]
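for context, the distinction being drawn (tls negotiated, but the server certificate never actually validated) looks like this with python's `ssl` module; a minimal sketch of the two client configurations, not what any of these apps actually shipped:

```python
import ssl

# A correctly validating client context: the certificate chain is
# checked against the trust store AND the hostname is verified, so a
# MITM presenting its own certificate fails the handshake.
strict = ssl.create_default_context()
assert strict.check_hostname is True
assert strict.verify_mode == ssl.CERT_REQUIRED

# The broken pattern described above: traffic is still encrypted, but
# ANY certificate is accepted, so a simple MITM proxy can terminate
# the connection with its own cert and read everything in the clear.
lax = ssl.create_default_context()
lax.check_hostname = False          # must be disabled before verify_mode
lax.verify_mode = ssl.CERT_NONE
```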
Note that TrustWave, a widely trusted certificate authority, has recently explicitly acknowledged selling devices with subordinate root certificates that allow them to spoof certs for any domain they want, in order to let companies snoop on their employees. http://blog.spiderlabs.com/2012/02/clarifying-the-trustwave-...
If they are willing to sell these to private organizations, who's to say that one of the many certificate authorities hasn't sold such devices or certificates to repressive governments?
SSL is basically a joke at this point. Because it requires you to trust every single certificate authority for every single domain, it basically means that you are only protected against the average script kiddie, not a dedicated attacker. There have been plenty of CAs that have been compromised, that continued to issue MD5-signed certificates well after MD5 was broken, that are within the control of authoritarian governments, or that are willing to sell subordinate root certs for snooping purposes to any company willing to pay enough.
And that's not to mention that even if the information is properly protected in transit, putting this information on your servers makes you that much more vulnerable to attackers who may want access to this information, the government of the country in which it is hosted, legal process, or any number of other threats. Even if they had properly secured the information in transit, and SSL were secure, uploading this information to your servers still vastly increases your customers' exposure.
"Lawyers I spoke with said that my address book — which contains my reporting sources at companies and in government — is protected under the First Amendment."
What does this even mean? Does he mean the fourth amendment? If so, lawyers would have told him "not once it leaves your phone". What do you mean by "protected"? That you can't be compelled to divulge it at all? That police need a warrant to get at it? That you can sue if someone else exposes it?
So I flipped the bozo bit. If the author plunks a meaningless but frightening-sounding paragraph in the middle of an article, I just don't put a high value on anything he has to say.
Actually, the issue of whether or not a reporter should have a special right to preserve the anonymity of sources has been a question of First Amendment law since at least the nineteen-seventies:
And, as others have suggested, you might want to flip the "bozo bit" on the entire notion of a "bozo bit", because it's a funny metaphor but a lousy rule for real life. Everybody is a bozo some of the time.
Well if he's flipping the bozo bit instead of setting it, that means you need only be a bozo an even number of times and he won't consider you a bozo. I set the bozo bit of anyone who flips the bozo bit.
Okay, thanks for giving me the connection to the first amendment, but it still means nothing, because Path is not the government. The legal "protection" is about whether the government can force a reporter to divulge his sources. It is not magic fairy dust that imposes storage requirements on third parties.
I don't think disregarding someone's opinion because you've found a significant fault in some of their opinions is unreasonable. I'm not an expert on everything, but then I don't post articles on things I'm not an expert on. If the parts of an article I can evaluate turn out to be incorrect, then I can reasonably infer that the rest of the article is of similar accuracy, even if I can't personally evaluate it.
> I just don't really put a high value on anything he has to say
Isaac Newton wasted the last decade of his life trying to turn stuff into gold, but his physical model of the universe was still quite correct, at least as a first approximation at the meso-scale (the best you could do at the time). You may find that one statement to be crazy, but I think the author still makes some important points about the flippant attitude toward privacy in the tech industry, which is supported by a thousand data points across dozens, perhaps hundreds of companies.
I'm guessing that he means a reporter's sources are protected under the freedom of the press granted by the first amendment. For a reporter, having his address book exposed could be quite a bit more damaging than for the average user.
Exactly. It's an issue because, if someone asks him for his sources, he can say no; if someone asks Path for his sources (via subpoena), they might not be able to say no. So this upload means he can no longer guarantee sources that their names will never come out.
Even given the reporters' privilege, the First Amendment has nothing to do with whether it's legal (or stupid) for Path to be uploading his address book to their servers. Path is not the government.
That means he's not only a sloppy journalist for not explaining his First Amendment point, but he's doubly sloppy for implying that he talked to any lawyers while writing this article. Because if he did, they'd have said "Go check out jaylevitt's excellent comment on HN explaining why that's irrelevant."
Why is the New York Times incapable of writing headlines in one of the variants of English spoken on Earth?
But the New York Times headline writing style is a variety of English, familiar to English-speaking readers of newspapers. As I wrote earlier in response to a similar question,
"Newspapers all over the world use different grammatical conventions in headlines from articles. I read Chinese, and Chinese-language newspapers also have headlines that look quite bizarre in isolation. As the first kind reply here said, this convention probably began to save space for banner headlines in large type."
In general, yes, you are right, but NYT headlines are not Standard Headline English. They are bizarro, convoluted, long-winded, show-off, try-to-look-clever-by-overcomplicating English.
Singling out Path in this issue is not right. Instagram fixed the same issue in their new release. Ditto with Voxer. And more app updates are to come. It's sad that one company is taking all the heat. The issue is pervasive, and the problem is not with the apps but with the platform, iOS. It would make much better sense, and work out better, if we took the issue to Apple. But I'm sure Apple will close this loophole in the next release. Until then, let's leave the app developers alone.
No, don't leave the app developers alone. Apple has merely failed to protect its customers; it is Path, Instagram, Voxer, and whoever else who have actively taken advantage of this lack of protection to abuse the trust of their users. Yes, Apple could do more to protect their customers, but the fault lies with those people who are uploading address books without permission.
When I install an application on my computer, I do not expect it to upload arbitrary information from my disk to the developer's servers. If an application did, I would be quite upset, even though any application that I run on my computer will generally have access to all of my data with no substantial platform-provided protection.
Why should I suddenly give the developers a break because the application is running on the computer I carry around in my pocket, instead of the computer I put in my lap?
Would you forgive a company if their application grabbed your cookies, and uploaded those to their server, so that they could log into your Gmail account to find your contact information? Decided to upload all of your documents to their servers and convert them to a convenient HTML format to make it easy for you to share them with one click to your friends? Rooted around your hard disk, uploading your tax information to their servers?
So why do you say that we should forgive companies for making the deliberate decision to grab private information from your phone, and upload it to their servers, just because the platform vendor never implemented a feature to explicitly forbid that?
one might argue that by not protecting the user's personal data by default, when how to protect such information is quite well known, the vendor and market maker is clearly the liable party
Would you argue this for your desktop or laptop as well? For any breach due to you installing an application from someone who abused your trust to read your files and upload them, that your OS vendor (Microsoft, Apple, your Linux distro, or whatnot) is the liable party?
It's Apple's fault that developers took advantage of an API to abuse their users' trust and privacy?
No. It's Apple's fault for failing to protect their customers that buy into their walled garden, but even then, there isn't exactly a hugely black and white list of things that Apple considers to be "fair use" and "evil" in privacy contexts.
The app developers CHOSE to use, upload, store and reuse this data, not Apple. Shifting the blame to Apple is making extreme excuses for app developers.
>Until then, lets leave the app developers alone.
I can't begin to wrap my head around this viewpoint. At all. Let's give them a free pass: despite violating my implicit expectation of privacy, they can't be blamed. That's what you're saying...
It is not just violating privacy. It is unauthorized access to a computer system, aka malware, aka hacking/cracking.
If you did that to Path it would be a felony and you would go to jail. Why is it not a felony in the other direction?
Because this access is authorized? I would argue that the presence of an API (whose purpose is to provide access to the data they consumed) constitutes "authorized" access. The app doesn't intrude or circumvent the privacy protections of iOS.
On the other hand, if an app was able to forcibly allow itself access to Location Services and ignore the iOS setting, I would argue that is unauthorized.
Granted, I have absolutely no idea what the context of unauthorized is from a purely legal standpoint.
It seems the management philosophy of “ask for forgiveness, not permission” is becoming the “industry best practice.” And based on the response to Mr. Morin, tech executives are even lauded for it.
I am really upset by this. Most executives (I'm looking at you, BP) have given very inconsiderate versions of "I'm sorry" that are littered with plays on words, media spin, and disgrace.
Human beings are not flawless, and I respect the companies (I'm looking at you, Facebook, Dropbox, Path, etc.) that are willing to treat me like a human being and say they're sorry.
How many times are you willing to hear the words "I'm sorry" from an industry before you get tired of it? Part of life is learning from other people's mistakes, but if that learning were actually happening, would we still be hearing about privacy "violations" from so many different places?
That's the point. Corporations are not human beings, and they methodically exploit your misplaced compassion for financial gain and your harm. Cf telemarketers for a longstanding example.
I can't believe that this story is still being brought up. The concerns of the users were eventually addressed, data was deleted, and they apologized. What is to report here? This is nothing more than the media making something out of nothing, as always.
It's not like they were using the data for something other than convenience for the user. When the users were upset, they reacted accordingly.
Facebook looks at all your data for targeted ads, and Google uses all your data to refine their algorithms. All Path did was try to use your data to help you, and when they were met with resistance they backtracked on their decision.
Mistakes like this are made all the time, and this isn't even that big of a mistake. It's not like the data was leaked. People need to seriously calm their nerves and look at what Path did right.
Stop looking for a story where there isn't one. The real story is Apple's privacy policies. Path should have been forced to ask for access to the data, but they weren't.
Here's a story for you: There's a journalist from Saudi Arabia that's being threatened with execution for a comment he made on Twitter.
I'm sure you've heard. How would you like to have shown up in his address book? Or should we all rest assured that the all-powerful people at Path would refuse governmental threats?
>All Path did was try to use your data to help you
You don't really buy that...do you? These companies all make money from our data, in some form or another. I understand that Path isn't solely to blame here, but they played the game like everyone else and happened to get caught. I feel no pity.
I'm confused as to how Apple's lack of regard for the privacy of its users is a story, but Path's lack of regard for the privacy of those same users is a non-issue. One of the parties failed to protect the users' data; the other took it without asking. Both should be held to account, in my mind.
What is Apple supposed to do to prevent a social network app (i.e., something plausibly worth granting access to your contact list, and so asking for permission to access your contacts wouldn't help) from uploading your data to their servers?
I think having some sort of permission guard for contacts is totally worth doing, but to put Path's sending of your contacts to a remote server in the same category as Apple not asking before allowing something to see your contacts is misleading at best.
> What is Apple supposed to do to prevent a social network app ... from uploading your data to their servers?
1) Put in their detailed rules that this (uploading entire address books) is not allowed.
2) Remove apps from the Apple App Store if they are found to violate this rule.
Apple could also remove such apps from phones after the fact as if they were hostile malware. This may be going too far, but it can be done:
http://cybernetnews.com/apple-can-remotely-remove-bad-apps-f...

I mention this since by saying "What is Apple supposed to do to prevent..." you may be asking if there's anything Apple can do at all. Yes, of course there is. It's not hard to do something when you own the app store and have control over all the devices.
That's not enough. Contacts should not be accessible by third-party apps without explicit permission, period. It's not enough to remove apps after they are found to violate a rule. Just don't make it possible to violate that rule in the first place.
> That's not enough. Contacts should not be accessible by third party apps without explicit permission, period.
Well, that too. But eropple (the parent poster)'s point is that social networking apps are the kind that would typically ask for this permission. Controls on this behaviour before and after the fact can work together.
Now that I think about it, doing the "find your friends" thing without uploading address book data at all would be tricky.
I mostly agree with you, but my issue is that they were not using hashes of the emails. You can still match friends and alert users when a friend joins by storing only the hash of an email. Why store the whole address book unless you want to use it for other reasons? Would Path take on the privacy flak and security risks of storing personal data just for friend matching?
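A minimal sketch of that hash-based matching (illustrative only; not Path's actual scheme, and note the caveat that plain hashes of emails are still brute-forceable because the address space is small):

```python
import hashlib

def email_hash(email: str) -> str:
    # Normalize case and whitespace so the same address always hashes
    # identically, then take SHA-256 of the result.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Server side: store only hashes of registered users' emails.
registered = {email_hash(e) for e in ["alice@example.com", "bob@example.com"]}

# Client side: upload only hashes of the local address book, never
# the raw contacts.
uploaded = [email_hash(e) for e in ["Alice@Example.com ", "carol@example.com"]]

# Alice matches despite different casing; Carol does not.
matches = [h for h in uploaded if h in registered]
```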