Is your assessment that "in proprietary software people can and do downplay vulnerabilities" based on looking at HN/news stories, or based on directly interacting with security teams at those companies?
In my experience, the worst security offenders are either small businesses or big businesses whose core competency is not in tech. My friend managed to download 50,000 passwords from GreatestJournal.com because they left their MySQL server exposed to the Internet, with no password, and the open-source LiveJournal code stored passwords in plain text in the DB. He reported the vulnerability to them, and their response was to put a password on the MySQL server (and take it off the Internet a few days later), write a blog post saying "You may want to change your passwords if you reuse your GJ.com password on other sites", and then take down that blog post a couple days later.
By contrast, when I worked at Google, a security bug was a drop-everything P0. I recall grabbing dinner at In-N-Out at 11:00 PM because a potential data leak had been discovered at 6:00, and the culture is such that when that happens, you drop what you're doing, assess the impact, fix it, and don't do anything else until you have. And I didn't even work on a security team, just an infrastructure team responsible for google.com.