It's mentioned in nested comments, but (as you'd probably expect) Meta does not intend to store passwords in plaintext. There was a bug where plaintext passwords were logged for some period of time, e.g., when someone tried to log in.
And they're not fined for storing passwords in plaintext, nor for doing so by mistake; they're fined because the law gives you a time limit to notify the regulator after you notice the problem, and they waited too long.
And in this specific case, just to be clear, it's not about taking too long to notify the public or customers, but about taking too long to notify the regulator (that deadline is much shorter). They're not expected to have perfect facts when they notify: there's no sanction for reporting and saying "but we're not sure yet" if you're being honest, or for coming back later with a correction. There is a sanction for not telling them in time.
We often see "companies should be responsible / should have to inform me", and that's part of this regulation, and it only works if there are clearly defined deadlines and sanctions for when they're not respected.
How long has that even been a regulation and in which countries does it apply? Software engineers are trained to view these kinds of things as bugs. Legal isn't trained to monitor bug trackers.
> Software engineers are trained to view these kinds of things as bugs
Competent engineers, software or otherwise, must have an education in safety standards and legal regulations. I had a pretty formal education in data protection at both A-Level and undergrad. I know real engineers get tetchy about us programmers edging in, so if you want any claim to an engineering title, ignoring the real-world ramifications of your code is unacceptable.
But that doesn't seem to be the problem here. Somebody did know it was bad, did fix it urgently, did report it internally and did an impact assessment. The problem was they needed to notify the regulator earlier, so the regulator knew it could have been a problem.
If these passwords were in the wild, delaying notification by however many days gives attackers more time to use stolen credentials. $100m sounds like a lot, but many of these regulatory penalties scale with the company so that punishments like this have impact. They need to improve how they handle security notifications.
I think the big difference is that engineers have had to care about people dying for the past 100 years. Today there are software developers that throw code out into the world that kills people without any repercussions. At some point this needs to change.
It’s part of GDPR. I’ve been given training on it at all (3) companies I’ve worked for and training has always included what constitutes a breach and what to do.
I would hope any company would treat it as an incident rather than just a bug where senior enough folks would be involved to know what their responsibilities are.
> The Irish Data Protection Commission found that the company violated several GDPR rules.
this is why lots of websites block the EU from accessing. You basically need to consult with lawyers to make sure you're not accidentally breaking the law when writing a codebase.
“lots”? Not really, as most companies want access to the European market.
Also, no, you don't need to consult lawyers when writing code. You just don't track and save data and do questionable stuff with it. Saving passwords in logs is surely a security issue first, before it's a GDPR issue.
yes it's a security issue but you wouldn't "expect" to get fined millions of dollars.
Do I think we should punish companies for storing passwords in plaintext? Yes. Would I expect that a bug and devs untrained in GDPR best practices could lead to fines? No.
Usually in software engineering you don't get your company fined for making terrible mistakes unless you're in a field like finance. This was just passwords, which most sites have, not something like PCI DSS-regulated card data.
I see this comment pop up here often in these threads about EU fines and regulations. "Apple/company should just call the EUs bluff and stop selling in the EU!".
Apple's a good example because they're such an incredibly global brand, who should be less reliant on EU customers. Yet Europe is responsible for >20% of their revenue. Shareholders would eat you alive for just "nope"ing away from that.
Yes, US GDP/capita is far above the EU average, but the EU still represents 450 million, on average fairly wealthy people. So companies simply play ball. And that excludes the UK, whose data protection laws are similarly strict.
Noping out of Europe would create a vacuum that would be filled by a competitor who would fully comply with the consumer-protecting EU rules. Rules that many consumers in other nations would love to have themselves, but don't, because of regulatory capture and plain corruption. Those consumer-friendly products would become popular outside of Europe, and Big-US-monopoly-company would lose dominance. They would either have to change to be more consumer friendly or eat the loss of market share.
20% of revenue isn't that much. Would you rather focus on your core product and double your revenue, or focus on getting that 20%? Yes giant companies have the resources and experience less YoY growth so they will work with the EU market, but most companies would do better to ignore the EU.
Explain to me how not selling in the EU could double a company's revenue? How removing yourself from your third largest market would enlarge the others? How millions of people in the US don't buy an iPhone because you can buy iPhones in Germany?
See Apple's €14 billion tax troubles in Ireland. For years these companies have been using their European operations as part of tax avoidance schemes.
Don't mess with the Flemish has been good advice for 600 of the last thousand years.
To the (intentionally?) obtuse responses: intending to store passwords in plaintext usually means storing plaintext passwords in databases and doing authentication with that; and that’s what the gazillion of commenters replying to the title are implying.
Mistakenly logging credentials because of e.g. badly interacting HTTP middleware is still a very nasty bug, but it doesn’t count as intending to store passwords in plaintext.
And something similar has happened to me, and I’m sure many others: logging middleware storing sensitive user data that shouldn’t be visible to engineers with mere access to logs (not as bad as logging credentials in my case, but still).
According to the PCI DSS auditor I had when doing a PCI DSS Level 1 audit, the most common credit card leak vector is logging middleware logging credit card numbers.
When I did application security I had to argue with developers about PCI and PII data in logs all the time. They would insist that there was "no other way" and that it was secure inside our system. I'd refuse to change the status of the vuln to anything other than critical. I found many similar vulns in our database that had been marked as false positives or non-critical by other infosec people in the company. The common thread was none of them had a background in software development so they just trusted what the developer told them. This seems to happen frequently in places where there's a culture of compliance being more important than actual security.
Can confirm, have broken GDPR before through logging customer (and customer's customer) PII. It lasted for about a month before we realised and I had to go through the logs 1 by 1 over 3 days to remove all the data.
Most frameworks blot out passwords from logs by default, so even a newbie programmer on their first day doesn't make the mistake of logging plaintext passwords. Yet Facebook somehow made that mistake...
It should raise eyebrows when the security practices of SWEs at a billion dollar company are outperformed by any newbie developer working a toy project.
Passwords are just data. If said data is not tagged in a way that makes it clear it is a password, finding an algorithm that will successfully blot out passwords in the general case is intractable without being far too aggressive to be useful.
All such tools rely on assumptions that what gets logged follows certain rules the logging layer can check against. It's not hard to accidentally convert data into a format that, when logged, happens to slip past those checks.
They should have caught it. But it's not surprising that it occasionally happens.
Assuming, of course, that the data is logged as a parameter rather than as a raw string, or as an instance variable in another object, or any number of other ways. Developers thinking it is "not that hard" is a big red flag to me, suggesting odds are high your logs are full of things that should not be there. Using filters is a first step only.
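The failure mode the comments above describe can be sketched in a few lines. This is a hypothetical Python example, not any real framework's code (the regex and names are mine): a filter that redacts `key=value` style messages works fine on the cases it anticipates, while the same secret serialized differently sails straight through.

```python
import logging
import re

# Hypothetical sketch: redact the value of password-like keys in
# "key=value" style log messages. Pattern is illustrative only.
SENSITIVE = re.compile(r'(?i)\b(password|passwd|secret|token)\s*=\s*\S+')

def redact(message: str) -> str:
    """Keep the key, replace whatever follows the '=' with a placeholder."""
    return SENSITIVE.sub(lambda m: m.group(1) + '=[REDACTED]', message)

class RedactingFilter(logging.Filter):
    """Attach to a handler so every record passes through redact()."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = redact(str(record.msg))
        return True

# The catch the thread points out: the same secret serialized another way
# evades the pattern entirely.
# redact('login failed password=hunter2')       -> value redacted
# redact('{"credentials": ["bob", "hunter2"]}') -> untouched, secret leaks
```

Whether the secret survives depends entirely on whether the logged representation still matches the filter's assumptions, which is exactly why filters are a first step only.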
I think it’s actually much easier to make such mistakes at large companies with sprawling codebases and potential for settings that inadvertently end up logging sensitive data. Especially for nobody to notice either.
True but they're also better resourced in terms of humans (and their experience) and tooling that should prevent or at least catch any blunders quickly.
You’d be surprised, but from my first-hand FAANG experience that is definitely not the case. Tooling can capture things that are known to it, but not everything is set up perfectly.
> It should raise eyebrows when the security practices of SWEs at a billion dollar company are outperformed by any newbie developer working a toy project.
And it turns out that plane crashes are much more serious failures than logging sensitive data internally. That's not to say what Facebook did isn't an embarrassing failure that really shouldn't happen, but they're clearly not the same thing.
Indeed, this is a common vector for leaking PII and sensitive data. For example, what looks like an innocuous logging/print statement in an exception handler ends up leaking all sorts of data.
And it gets more messy when you start to ingest and warehouse data logs for on-call monitoring/analytics/etc, and now you have PII floating around in all sorts of data stores that need to be scrubbed.
In a previous job, we handled credit card numbers. We added PII detectors to logging libraries that would scrub anything that looked like a credit card number. We used client-side encryption where the credit card numbers are encrypted on the client before sending to the backend, so the backend systems never see the plain credit card numbers, except for the system that tokenizes them.
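A minimal sketch of that kind of scrubber, with assumed names (real systems handle many more formats): find digit runs of plausible card length and redact only those that pass the Luhn checksum, which keeps false positives on order IDs and timestamps down.

```python
import re

# Hypothetical log scrubber sketch: candidate digit runs are only
# redacted when they pass the Luhn checksum, so ordinary IDs survive.
CANDIDATE = re.compile(r'\b\d{13,19}\b')

def luhn_ok(digits: str) -> bool:
    """True when the digit string has a valid Luhn check digit."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scrub(line: str) -> str:
    """Redact digit runs that look like valid card numbers (PANs)."""
    return CANDIDATE.sub(
        lambda m: '[PAN REDACTED]' if luhn_ok(m.group(0)) else m.group(0),
        line)
```

The obvious gap, echoing the filter discussion above: a card number logged with spaces or dashes between digit groups never matches the candidate pattern in the first place.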
Also, it is not against GDPR per se to store passwords in plain text. GDPR requires that you keep user data safe from unauthorized access and processing, and encryption is one way to help with that; but if you had the data sufficiently protected by other means, it would be OK under GDPR to have it in plain text.
Avoiding ever storing passwords (or credit cards) in plain text [1] is actually harder than you might think.
Even if you outsource password handling to a third-party authorization service, so passwords should never even touch your servers, and have credit card payments handled on your checkout page by an iframe or JavaScript that only ever sends them directly to your payment processor, you can still end up with the damn things in plain text on your servers.
How? Because you receive emails to support that look like this:
> Hello; I'm a subscriber to your service, account "Bob666", password "S3cr!t", billed to my credit card 4111111111111111 with security code 123. That card is expiring. My new card is 4012888888881881 with security code 712, expiration date May 2026. Please use my new card for renewals. Thanks!
You may also receive messages like that in chat if you offer support via chat. And maybe even to email addresses other than support.
So now you've got a plain text password and two plain text credit numbers along with their security codes stored in the inboxes of everyone who is on the support list, and possibly also somewhere in the database of your ticketing system if mail to support automatically creates a ticket.
It gets worse. If you offer support by phone and that goes to voicemail after support hours you will find passwords and credit card numbers in there.
[1] Note: technically, probably almost nothing is actually stored in plain text nowadays. It's almost always going to be stored on a filesystem that is using filesystem-level encryption, and that filesystem is likely on a block device that is doing block-level encryption. But when people talk about "plain text" storage they mean it at a higher level: if I store the string "this is a secret" in foo.txt, that counts as plain text even though foo.txt sits on an encrypted filesystem on an encrypted disk.
"Did not intend to" is an odd concept. Is the argument that nobody noticed? If somebody noticed but the cleanup was deferred, then they did "intend to".
It's like defending a bank robber by saying that he didn't intend to rob the bank, he just had a gun in his hand, and then he figured the damage was already done, so he may as well get some money.
I think the comment is in the context of being a software developer. "Everyone" knows you shouldn't do that, so it would be a bit odd if a company of Facebook's size did. But if it was accidental, it's clearer how it happened. It's still a grave mistake, but not unthinkable. I personally write bugs all the time.
"Everyone" does not know that. Avoiding plain text passwords is the commonest method used on the Internet, but if you get in to mobile telecoms, you find shared secrets stored in hardware-secure write-only enclaves in clear text. This is actually done because in that specific environment it increases security. It's not a general solution of course, but "only store encrypted passwords" and "only store password hashes" don't always apply either.
I gotta level with you, not everyone knows you shouldn't do that.
There's a number of devs that don't think twice about storing sensitive keys in a git repo.
I could 100% see how someone would do this, see log messages with passwords in plain text, and then faff off being the last person to actually look at those logs. "K, this case is done, what's next"
I worked somewhere not long ago where the "solution" was to run code to scrub repos before building release packages, because those packages were in some cases installed on customer networks, which drastically increased the chance they might leak. At no point did the developers or the devops team seem to realise that the very need to do this meant they should have applied the same scrubbing checks in a commit hook, to stop the secrets landing in the repo in the first place.
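The commit-hook version of that scrubbing could be as simple as this sketch (the patterns here are illustrative examples, not a complete secret taxonomy):

```python
import re

# Hypothetical sketch: the same scrub patterns used on release packages,
# run at commit time instead. Patterns are illustrative, not exhaustive.
SECRET_PATTERNS = [
    re.compile(r'AKIA[0-9A-Z]{16}'),                          # AWS access key ID shape
    re.compile(r'-----BEGIN (?:RSA |EC )?PRIVATE KEY-----'),  # PEM private key header
    re.compile(r'(?i)(?:password|api[_-]?key)\s*=\s*["\'][^"\']+["\']'),
]

def find_secrets(text: str) -> list:
    """Return matched snippets so a pre-commit hook can print what it blocked."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

# A pre-commit hook (e.g. .git/hooks/pre-commit) would run find_secrets()
# over each staged file and exit non-zero on any hit, refusing the commit.
```

Running the check at commit time roots the secret out before it ever hits history, rather than hoping the release-packaging scrub catches it later.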
I agree, but also I know of devs that don't understand the basic security implications of passwords being in logs. I could easily see how someone, maybe even a couple of people, could see these logs and think nothing of them.