"A booming industry of cybersecurity consultants, software suppliers and incident-response teams have so far failed to turn the tide against hackers and identity thieves who fuel their businesses by tapping these deep reservoirs of stolen corporate data."
Sure, blame the consultants with their "booming industry". I'm sure T-Mobile spent adequate amounts of money on securing their data, hired all the best people, and it was all the security people's fault for not doing it properly.
I don't doubt that T-Mobile could have done more, but it's also frustrating to see this trope that spending more money on security is some type of silver bullet. It's not.
I've been in security for over a decade. I currently work at a FAANG with nearly unlimited security budget. Previously I worked at another major tech company with nearly unlimited security budget. Before that I was a consultant and consulted at companies with huge security budgets. All of them, including my FAANG, struggle to have anything more than security that can only be described as "patchwork".
The truth is that nobody actually knows how to do security. Software devs are awful at it (the number of FAANG engineers I know who don't even understand what encryption is, or think that hashing passwords is unimportant, would blow your mind), management is awful at prioritizing it or even knowing what to do in the first place, and every security professional in the industry is effectively just winging it based on what someone else in the industry promoted as "best practice" (which is probably outdated by now).
Sure, prolonged investment in security might help make things better, but that's not an overnight solution, and it might not be a solution at all given that the attackers are investing heavily in their methods, too. We have to do more than just act like increasing the security department's budget is going to fix all of our problems. I guarantee it won't.
> Software devs are awful at it (the number of FAANG engineers I know who don't even understand what encryption is, or think that hashing passwords is unimportant, would blow your mind)
But that's not because there aren't also lots of devs who understand security, it's because FAANG companies have purposely chosen to prioritize hiring based on leet code ability above hiring based on security knowledge.
edit: This is why software developers would benefit from a union or licensing process, because currently devs who don't understand security are artificially lowering developer salaries by externalizing risk onto users.
Eh, it's both. Other departments don't necessarily focus on security (and leetcode is certainly an idiotic way of hiring, IMO). But even in my department (where we explicitly don't use leetcode and do prioritize based on security expertise and offer a huge premium for it), we are significantly under our target headcount because finding devs (or any other role) that understand security is very, very difficult.
Could this be because so many companies don't focus enough on security? So there isn't enough collective experience out there, making it hard to find those that do have the knowledge and experience.
I believe this is the case. Engineers level up primarily based on experience, learning from their team, etc. Because security is:
a) Often not prioritized
b) Handled in the shadows by some other team
the engineers don't get exposed to it. Security hasn't gone through an 'operations' evolution where it melds with engineering, so these problems aren't getting better.
I think partly so, yes. I also think in general the security industry is very bad at increasing the level of collective experience, so it sort of just stagnates.
Other fields like web development, consulting, engineering, law, medicine, etc. all have very established career development pipelines, where you can join as a junior employee and learn on the job from those around you to become a better professional.
Security on the other hand lacks this. In the vast majority of organizations I've been in, security roles are something that you are expected to enter with an already established level of experience, and then you are dropped on a project by yourself with little mentorship or training. This makes it almost impossible to bring new people into the field.
At my company, we have a "security champions" program that is intended to allow software engineers to dedicate some of their time to security and help their team think through security challenges. But we really struggle with this program, because my company pretty much just hopes that the engineers signing up to be champions are already experienced in security. If they are not, we do not have processes in place to train them, even if they do want the training.
And what's worse, is that I even see resistance to making it easier for junior people to learn security. If you spend much time on r/cybersecurity, a common thing you will see is people insisting that security should not be an entry level job, and that everyone should be required to spend 5-10 years as a sysadmin before you're even allowed to apply for a security role. I think that's ridiculous, and not only for the reason that being a sysadmin has a lot less overlap with the world of security than people like to think it does.
> finding devs (or any other role) that understand security is very, very difficult.
At what level? Are we talking like knowing the different ways to mitigate XSS and other basic OWASP top-10 style things, or having the ability to find the next Spectre or Meltdown?
We recruit primarily for mid-to-senior level roles (5-15 yrs experience), and it's the former. I get a lot of candidates that can recite what XSS is at a high level, but for example struggle to explain the things to watch out for that would indicate a possible XSS vulnerability.
One of the other issues I see is that we should be able to take the above-described candidate, which is maybe not exactly what we need but shows promise, and train/mentor them into the type of security professional that we need. But my company (and most others I've seen) are also just really bad at security training and career development. It's a real problem, IMO, that security is treated as an "experienced people only" industry, and is not very welcoming to people that aren't already experts but are willing and able to learn. We are trying to change this in my organization, but it's slow and challenging.
> I get a lot of candidates that can recite what XSS is at a high level, but for example struggle to explain the things to watch out for that would indicate a possible XSS vulnerability.
To be fair, from a dev's perspective you need to flip it around in your brain, in order to go from e.g. "you need to sanitize user input to make it safe for a JavaScript context" to "seeing unsanitized user input that could be getting injected into a script." Even if you know all the right answers, it's still probably not going to come out super eloquently. (And I realize there are other and better answers too, but I chose one that's easy to explain.)
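To make the "flip it around" point concrete, here's a minimal sketch of both directions using only Python's standard library. The variable names are mine, and HTML-escaping is only the right fix for an HTML text context; other injection contexts (attributes, JavaScript, URLs) need their own encoding:

```python
import html

# Attacker-controlled input, e.g. a display name from a form.
user_input = '<script>alert(1)</script>'

# Dangerous: raw user input interpolated straight into an HTML response.
# This is the pattern a reviewer should learn to spot.
unsafe = f"<p>Hello, {user_input}</p>"

# Safer: escape for an HTML text context before interpolating,
# so the payload renders as inert text instead of executing.
safe = f"<p>Hello, {html.escape(user_input)}</p>"
```

The reviewer's skill is recognizing the `unsafe` pattern in the wild; the developer's skill is knowing to write the `safe` one.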
Something needs to change at a fundamental level, including an easier path to qualifying as a security professional, before this problem can be fixed.
One easy way would be market economics: make senior security roles a pay grade higher than comparable software engineering roles. Those incentives should balance things out in time.
Otherwise, we're looking at a security-professional death spiral.
Nah. First, actually being good at leetcode and knowing about hashing and such are not in opposition. In an odd way, leetcode exercises lead you toward the math side of it.
And second, non-leetcode devs are not some kind of safety panacea. The worst are the people who don't care at all. Many have never even heard of the basics.
Third, if you actually decide that security is important and try to learn it, you will find resources are rare. There is very little of it targeted at developers. There is no shared knowledge base. There are no commonly known processes. Nothing like that.
So even if you care and try, you end up learning very little.
I don’t do anything security related — I’m a lowly bare metal programmer — but I’m still mystified as to how user passwords are securely kept on disk? The only thing I could think of was to encrypt a user’s password with their password…
Don't store them. Hash the password and store that, using a suitably strong algorithm that's relatively chunky and expensive to compute en masse (most, if not all, modern options, such as scrypt, Argon2, and bcrypt, support a scaling work factor so that in the future you can increase the work needed as computing resources increase). Then you can compute a hash based on the password that's passed in and make sure that they match.
Some folks will then further encrypt the stored hashes such that a database compromise, but not an application-server compromise, leaves the attacker without the keys necessary to decrypt even the hashes, but I am ambivalent about the usefulness of that (can't hurt, but the threat model for that seems more geared towards internal threats than external).
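A minimal sketch of the hash-and-verify pattern described above, using only Python's standard library (`hashlib.scrypt`). The work-factor parameters here are illustrative, not a tuning recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A fresh random salt per password, so identical passwords
    # don't produce identical stored hashes.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    # Recompute the hash from the submitted password and compare.
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, expected)
```

The salt and digest are what get stored; the plaintext password never touches disk.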
>I don’t do anything security related — I’m a lowly bare metal programmer
Sorry to make an example of you but this kind of attitude is the problem. Everyone does something security related. If something is giving input to the machine (that could be typing on a keyboard, collecting data from a sensor, or anything else), you have to care about security. Even if security means in your context sanitizing inputs to make sure you don't overflow and crash, or write something to the screen you're not supposed to, etc.
Full disk encryption (FDE). You provide the password at boot and either you can or can't decrypt (typically the key itself is derived from the password). You can also do this without FDE by doing the same thing but keeping the password around in memory if you're trying to avoid prompting them.
Modern machines work slightly differently. The key material is stored in a TPM which is a separate processor & dedicated memory that is purpose built to withstand physical and electrical attacks. Apple devices specifically have a complicated key wrapping scheme (protected by your pincode or password) to make certain files accessible/inaccessible depending on the policy defined (available after first unlock, available only when unlocked, available always, & a fourth one I forget). Your password is just used for protecting the underlying keys but the device actually generates strong key material that's used to protect all on-disk contents regardless of a password being present IIRC.
If you're talking about the password database for local login & whatnot, that was available without even having FDE by using PBKDF2 or similar to securely hash the password. That way you only store the hash & leaking that file doesn't mean that someone can reverse that back to get your password.
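The "key derived from the password" step mentioned above can be sketched with stdlib PBKDF2; the iteration count and key length here are illustrative, and real FDE schemes layer more on top:

```python
import hashlib
import os

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # Iteration count is illustrative; real systems tune it to hardware.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 600_000, dklen=32)

salt = os.urandom(16)  # stored in the clear, e.g. in the volume header
key = derive_key(b"correct horse battery staple", salt)
```

The same passphrase plus the same salt always re-derives the same key, which is what lets the volume be unlocked (or the login hash be checked) without ever storing the passphrase itself.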
Multilevel encryption. It's like you keep valuable stuff in one room, a key for that room is kept in another room, that room not only needs a key, but also a 4-digit pin code, finally that key is kept in a safe that can be opened only with three other keys and so on.
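The room-within-a-room analogy maps onto what's usually called key wrapping or envelope encryption. A deliberately simplified sketch of the structure, where the cipher is a toy XOR stand-in (NOT secure; real systems use AES or similar):

```python
import hashlib
import os

def toy_cipher(key: bytes, data: bytes) -> bytes:
    # XOR against a SHAKE-256 keystream: a toy stand-in for a real
    # cipher, used here only to show the layered-key structure.
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

# Inner room: a random data-encryption key (DEK) locks the valuables.
dek = os.urandom(32)
ciphertext = toy_cipher(dek, b"valuable stuff")

# Outer room: a key-encryption key (KEK), derived from the user's
# password, locks the DEK itself.
salt = os.urandom(16)
kek = hashlib.pbkdf2_hmac("sha256", b"user password", salt, 100_000)
wrapped_dek = toy_cipher(kek, dek)

# Unlocking: re-derive the KEK, unwrap the DEK, then decrypt the data.
recovered_dek = toy_cipher(kek, wrapped_dek)
plaintext = toy_cipher(recovered_dek, ciphertext)
```

One practical payoff of this structure: changing the password only requires re-wrapping the small DEK, not re-encrypting all the data.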
> I don't doubt that T-Mobile could have done more, but it's also frustrating to see this trope that spending more money on security is some type of silver bullet. It's not.
So true. A problem is that "spending money on security" is so nearly always a synonym for increasing the infosec budget under the CISO. Which is useful, yes, but only a partial solution. A bigger ROI would be to spend it on developers who are experts in security and building a culture that cares. But even in enterprise security companies (most of my career), product security is so often seen as a checklist that infosec will take care of, not a core engineering competency.
This makes no sense at all. You're implying that the bad guys somehow have a monopoly on innovation and effectiveness, when in reality there is just more upside for them to steal sensitive info than there is downside for companies to protect it. If T-Mobile's latest data breach led to them getting fined, say, $5 billion, I promise you it would be the last.
It would be the last for T-Mobile because it would end T-Mobile. But it wouldn't be the last breach ever.
I could give $5 billion to my FAANG right now and I bet we'd still be breached (hell, I'm pretty sure we already have that budget in my FAANG's security department). The US DoD already has a cyber security budget of $10 billion, and they still get breached.
You underestimate the amount that these companies care about security. Just because they get fined "only" a couple hundred million dollars doesn't mean they aren't scared shitless by being breached. I've sat in boardrooms with CEOs telling us they were willing to pay whatever it takes to increase their security (and they put their money where their mouth is, too). They still get breached.
Budget isn't everything. Does it help? Sure. Like any other security professional, I can recount plenty of tales of teams deprioritizing security in favor of something else. Would they have done differently if they were incentivized better by bigger potential fines? Maybe. Would they have actually been able to implement ironclad security even if they did prioritize it? In the cases I've seen, it's doubtful.
edit: and consider this. If you truly do think that money is everything, you should realize that you will never be able to throw more money at your security than a nation state attacker like China will be able to throw at breaching your security. In the competition of who can spend the most money, you've already lost.
Just to add to that: the hacker (technically, cracker) only has to be right once, while the security team has to be right 100% of the time, across 100% of the attack surface. A new attack surface could appear that wasn't even a thing a moment earlier. Also consider that a lot of the attack surface is software not even written by the company being attacked (Windows, routers, etc.).
It's like the 2000 era adage, the terrorists only have to be right once.
> I've sat in boardrooms with CEOs telling us they were willing to pay whatever it takes to increase their security (and they put their money where their mouth is, too). They still get breached.
Money (often) flows freely, but it's not enough. I worked at one place where the CISO was very aware that security needs to be designed into the product from the ground up. Later a new CISO came in who thought security could be achieved merely by purchasing every security scanner on the market and sitting back to bask in perfect security. Needless to say, security was far worse under the latter.
I'm sure it's both. As in, much of what they did spend likely went to snake oil salesmen. I've met lots of security consultants who did not have backgrounds in math or compsci.
> I've met lots of security consultants who did not have backgrounds in math or compsci.
My experience both working at and with higher end consultancies is that there is no correlation whatsoever between those degrees and any particular consultant’s competency. Some of the best people I’ve worked alongside have been college dropouts and Religion majors.
Likewise, I've never found any correlation between those degrees and the security improvements delivered by consultants. Honestly, the best security consultants I know of are essentially con men (and women!) who have devoted their amateur psychological instincts to good. You can apply all the best tech, but without organizational change it won't last. On the flip side, if you bring organizational change that adopts security in depth as a value, then even substandard tech can serve the purpose. In that vein, the best security consultants (meaning someone hired temporarily for their expertise, not a long-term employee on a renewable contract) are those who can imbue leadership with a vision of their organization as one that benefits financially from security as a cultural value. I'm not sure who did this for Apple, but they are a good example of a company that has benefited from a reputation earned by truly valuing security instead of merely trying to make sure everything is secure.
One of the biggest problems in the security industry is a misconception that security and computer science are the same. They aren't at all.
If you're doing low level design of crypto algorithms, you need to know math. If you're doing appsec reviews or pentests, then a background in software development might help (but is not required).
But there is an entire world of security roles out there that are essential to implementing security that have nothing to do with math or compsci. The security industry right now has a huge problem with gatekeeping, where they think you can't even begin to think about security unless you're already a top-tier principal engineer, and it's led to a huge drought of talent in security roles across the board.
And yet (correct me if I'm wrong), a good security person does not need to understand cryptography. They should have some basic understanding of how to apply it, but knowledge of its internals and the math behind it is pretty much useless.
Yeah from the outside looking in, to me the biggest requirement is one of mindset, thinking like an attacker, thinking of all the possibilities… in that sense very much like the qualities for a good QA person
True, crypto(graphy - wow, it's been so long since I've typed it out that I just realized "crypto" has now been bogarted for something else).
Theory vs. applied, but I think it's still true that the mindset of a hacker is very different, i.e. similar to the whole IT vs. dev divide.
> I'm sure it's both. As in, much of what they did spend likely went to snake oil salesmen. I've met lots of security consultants who did not have backgrounds in math or compsci.
I'm going to bet that they did have qualified engineers, because I like to assume the best in people, but I also assume that those engineers may not have been able to make the changes they wanted to.
In my experience in big companies, corporate bureaucracy and a complete unwillingness to change processes or systems is usually a bigger hindrance to security than the skill level of consultants/engineers.
You can't easily "bolt on" security to a massive internal ecosystem of insecure projects that has built up over the years. If I had to guess, I would anticipate that the software T-Mobile is running includes a lot of legacy that hasn't been fully maintained. If they don't spend the cash to retain the developers who built these projects, or to keep the projects maintained, there's nobody around who really knows the codebase. And that means finding the little security edge cases is going to be nearly impossible, particularly for an external contractor with only a few months.
Worse, the "upper management" will assume it was a talent / investment problem since "they sunk so much money into security". Oh that darn booming industry.
"To think we paid those security consultants so much money to protect our completely unencrypted and exposed database and we still got hacked.
And they had the nerve to suggest we replace this unencrypted database, which an old legacy system needs entirely open root access to, with something secure, for an eye-watering bill. We don't hire security consultants to replace our legacy systems, we pay them to stop unauthorised people accessing the big pile of data we leave in the open.
The gall of them! They even wanted us to change the interface between our two big legacy systems, because it was just a CSV file containing all our sensitive data. Wimps! Especially as we told them they could do anything to make our systems secure, as long as they didn't touch those legacy systems."
What do you consider a background in compsci? A few years in the industry?
Because my degree is in Management Information Systems (MIS), but I've done troubleshooting on both performance problems of the O(n^5) variety and problems of the "not covered in the requirements document" variety... Not sure what else I need to understand, say, memory bounds-checking problems or firewall/ACL configuration problems.
Management Information Systems. A "business oriented" computer degree. They were popular in the 80s as an alternative to comp. sci. They focus on how to use databases, spreadsheets, and other analytical and management systems. In those days, "decision support" software was a big thing. Is MIS still a thing?
It's still a thing, or at least it was a few years ago. I worked with several recent MIS graduates at a consulting firm in the mid-late 2010s. But I'd never even heard of the degree before that point, I majored in math, minored in CS, and did dissertation work in a business school (admittedly, economics, so not particularly business-y).
It's surprisingly easy to get certified. I managed to pass the difficult-by-reputation CISSP exam without any deep knowledge of or really interest in information security. I just took the five-day crash course my company paid for and bob's your uncle, I passed the CISSP.
Of course, I never actually got certified because I left the role immediately afterward and never bothered following up. Moreover, I didn't really meet the requirements, which included having some tenure as a security professional. But I'm sure I could have finagled it if I had any interest in working security (I absolutely did not).
Are there any certifications that require you to solve a CTF or otherwise demonstrate understanding of the field? (Just spitballing, but maybe an oral-defence of strategy against a board of defcon panelists? Etc)
Braindump-able IT certs benefit no-one, and expecting people to have MSc degrees in infosec is elitist and very impractical.
Offensive Security certs (e.g. OSCP) are similar to what you're describing. The PNPT is similar too but also emulates a real-world engagement on top of just needing to root boxes.
"A booming industry of cybersecurity consultants, software suppliers and incident-response teams have so far failed to turn the tide against hackers and identity thieves who fuel their businesses by tapping these deep reservoirs of stolen corporate data."
Exactly. Heaven forbid we blame the corporations whose lax security led to the stolen data in the first place. That would make advertisers unhappy.
I had to manually change the URLs on their site to opt out of some data sharing a couple months ago.
Something like that getting shipped to prod... yeah, you have the D team building tech at tmobile. So we should collectively be shocked if their codebase isn't a leaky sieve.