Hacker News

Microsoft has had two Heartbleed-level vulnerabilities in its Windows code so far, ones that were not just 2-3 years old but 10+ years old, leaving systems vulnerable to them for much longer.

The "advantage" of proprietary code here was that Microsoft got to downplay them (surprise surprise, no scary logo made by Microsoft for them!), and that's how proprietary code owners deal with security issues in general - they try to hide that they exist to keep the illusion that the software is (more) secure.



This is the second time today you've spread innuendo about how software security teams at big companies handle vulnerabilities, and the second time you've managed to casually insult teams that include some of the best software security people in the entire industry.

Here's the first:

https://news.ycombinator.com/item?id=9445436

These are egregiously bad arguments you're making, involving people you don't know but who, based on my experience reading so many of your comments, I believe are operating many levels above your own comfort level with actual software security.

The trouble is, like me, you comment on HN all the time, and so, like me, you get a huge name recognition boost for these comments you make. People reasonably believe that you know what you're talking about when you "explain" to them how Apple and Microsoft handle vulnerabilities. But you don't, and so these misleading comments prey on their lack of information.


Are you contesting the fact that Microsoft downplayed some of its major remote code execution vulnerabilities or that Microsoft has been sharing zero-days with the NSA for years (and now Apple is doing the exact same thing - voluntarily)? I can come back with sources, but I think you know exactly what I'm referring to.

Maybe I painted with too wide a brush in saying "proprietary software teams pretend the vulnerabilities don't exist", but the "security through obscurity" expression didn't just come out of nowhere. Many proprietary software companies do try to hide or downplay their security holes when they happen - you can't seriously tell me you're contesting that?


He is actually more or less correct in the case of both Microsoft and Apple.


You know that's not true.


I know it is true, and I know you probably know people who work(ed) there, and they'll confirm it. This isn't because there aren't good security people who want to do the right thing; it's because management and PR don't support it.

You'll notice that the recent TLS bulletins were the only ones to credit an internal finder. There's a reason for that, and it's not because MS people aren't finding vulns in their own software.


You're here arguing that Apple traded zero-days to NSA in order to get Apple Pay contracts with USG. At what level am I supposed to take this thread seriously?

My issue is that the comment that roots this thread is, as you discreetly agree, totally uninformed, yet written in a tone suggesting direct knowledge of how vulnerabilities are handled at big software companies.

If you want to discuss the subtleties of internal vs. external reporters, patch timelines, prioritization, pre-disclosure, and things like that, I'm up for it, but not until there's clarity about what, let's call it, "the HN community" does and does not know about this stuff.

I am fine with random commenters on HN talking about vuln disclosure without ever having reported a vulnerability or triaged a report. I don't think our having spent so much time with those things makes us special. I am less fine with people pretending to know things that they don't, or, to be as charitable as I can, writing comments that carelessly create that impression.


What? No. Hah. That comment is indeed ridiculous, and isn't the one I'm agreeing with. That appears to be a different thread. I'm only referring to:

Microsoft has had two Heartbleed-level vulnerabilities in its Windows code so far, ones that were not just 2-3 years old but 10+ years old, leaving systems vulnerable to them for much longer. The "advantage" of proprietary code here was that Microsoft got to downplay them (surprise surprise, no scary logo made by Microsoft for them!), and that's how proprietary code owners deal with security issues in general - they try to hide that they exist to keep the illusion that the software is (more) secure.

Related to the other claim, I'm fairly certain that the NSA does get pre-disclosure of MS vulns via MAPP, but it's pretty obvious that this is a side effect of disclosure complexities and not a quid pro quo deal. China also gets the same pre-disclosure, or did, until they got caught leaking it. But they probably still do, one way or another.

I'm actually laughing out loud at the Apple Pay / USG thing.


I don't think it's that funny anymore. It's funny until you realize lots of readers believe him.

If you want to roll the discussion up to the original claim on this thread:

* Yes, closed-source vendors, Google included, do not as a rule announce severe flaws they find (or contract to find) in their own code. On the other hand, when we're talking about Google, Microsoft, and Apple: they are actually finding sophisticated flaws in their own product, which is something very few open source projects can credibly claim.

* Open source projects are not as a rule particularly awesome about handling disclosures either. See, for instance, AFNetworking.

* Nobody made logos for HTTP.sys (that I know of). On the other hand, HTTP.sys was a bigger deal in the industry than POODLE or pretty much any other named vuln besides Heartbleed. Heartbleed, because it implicated a library used by lots of products, was particularly prevalent among Internet SaaS sites, and was easier to quietly exploit than an RCE, was a legitimately more important flaw. The idea that Microsoft gets a free pass for vulns is something you can only believe if your only contact with them is HN.

* There's no evidence Microsoft hid those vulnerabilities. They were there for 10+ years because nobody found them. In Microsoft's case, you can't reasonably claim that's because they weren't looking. Minesweeper gets more pentesting effort at Microsoft than most open source crypto projects.

I strongly prefer open source software to closed-source software, but I'm not unrealistic about how open source security works. See security tire fires such as OpenSSL, Rails, PHP, Cryptocat, and BIND, each distinctive not just for having vulns but for the manner in which they've historically handled them.


The above comment isn't attacking "teams with great software security people" but rather the fact that in proprietary software people can and do downplay vulnerabilities (not a very controversial statement).

I've noticed, tptacek, that over the past few years your comments have shifted from great general security advice toward defending "the security profession". Please consider this shift and whether it is helpful.


Is your assessment that "in proprietary software people can and do downplay vulnerabilities" based on looking at HN/news stories, or based on directly interacting with security teams at those companies?

In my experience, the worst security offenders are either small businesses or big businesses whose core competency is not in tech. My friend managed to download 50,000 passwords from GreatestJournal.com because they left their MySQL server exposed to the Internet, with no password, and the open-source LiveJournal code stored passwords in plain text in the DB. He reported the vulnerability to them, and their response was to put a password on the MySQL server (and take it off the Internet a few days later), write a blog post saying "You may want to change your passwords if you reuse your GJ.com password on other sites", and then take down that blog post a couple days later.
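The GreatestJournal anecdote above combines two failures: an exposed database and plaintext password storage. The second is what salted password hashing prevents, since a leaked table of salted hashes cannot be read back into passwords directly. A minimal sketch using only the Python standard library (the PBKDF2 parameters here are illustrative assumptions, not a tuning recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> bytes:
    """Derive a salted hash so the raw password is never stored."""
    salt = os.urandom(16)  # unique per user, stored alongside the hash
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    """Re-derive with the stored salt and compare in constant time."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

stored = hash_password("hunter2")
print(verify_password("hunter2", stored))  # True
print(verify_password("wrong", stored))    # False
```

With storage like this, dumping the table (as in the anecdote) still exposes accounts to offline guessing, but slowly and one password at a time, instead of handing over 50,000 credentials in the clear.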

By contrast, when I worked at Google, a security bug was a drop-everything P0 bug. I recall grabbing dinner at In'n'Out at 11:00 PM because a potential data leak was discovered at 6:00 and the culture is such that when a potential security bug is discovered, you drop what you're doing, assess the impact, fix it, and don't do anything else until you've done that. And I didn't work on a security team, just an infrastructure one responsible for google.com.


Google is the exception. Most major closed source vendors, even those known for their security, hide vulns that were not externally reported.


If tptacek's attitude has shifted, it's probably because the prevailing attitude on HN has shifted as well, in the direction of "never have people cared so much yet known so little".


What on earth is the HN sheep's obsession with making logos for security bugs? This seems to be a very new phenomenon that the young upstarts expect to see when a big bug is found.

This has nothing to do with Microsoft. It is about a false assertion (i.e. that OSS peer review leads to fewer bugs) made so fleetingly that the authors clearly expected it to be accepted by readers as a proven fact. It isn't proven. And I gave just one of countless examples of why OSS peer review has proven not to work as one might naively expect.



