Sorry for not seeing your comment until now. Amazingly great vuln BTW. It's early in 2017, but this is probably going to be one of this year's best.
It's very important for everyone to understand my advice is RHEL/Fedora specific, which is---I think---the source of the misunderstanding here.
Putting aside `ptrace` being the best way to guarantee a race win, the reason for my focus on `CAP_SYS_PTRACE` is that with SELinux enabled there is no other way to exploit having access to the file descriptors. Even if you explicitly try to pass a containerized process an external file descriptor "legitimately" (e.g. with `sendmsg`) SELinux will still ultimately block the access due to the type restrictions. This means that with `setenforce 1` you need to use something like code injection to get the external process to access the file descriptors on your behalf.
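To make that concrete, here is a minimal sketch of the "legitimate" passing I mean (illustrative only: `send_fd` and the connected AF_UNIX socket `sock` are my own stand-ins, not anything from the actual report). Even when the `sendmsg` itself succeeds, type enforcement still denies the containerized receiver's subsequent use of the descriptor:

```c
/* Minimal sketch of "legitimate" fd passing with SCM_RIGHTS.
 * Assumes sock is a connected AF_UNIX socket; error handling trimmed.
 * The passing can succeed while SELinux still denies the receiver's
 * later read()/write() on the descriptor due to type enforcement. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int sock, int fd)
{
    char dummy = 'x';
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    char cbuf[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = { 0 };

    memset(cbuf, 0, sizeof(cbuf));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;             /* pass an open fd */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}
```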
> Sorry for not seeing your comment until now. Amazingly great vuln BTW. It's early in 2017, but this is probably going to be one of this year's best.
Thanks. :D
> Putting aside `ptrace` being the best way to guarantee a race win, the reason for my focus on `CAP_SYS_PTRACE` is that with SELinux enabled there is no other way to exploit having access to the file descriptors. Even if you explicitly try to pass a containerized process an external file descriptor "legitimately" (e.g. with `sendmsg`) SELinux will still ultimately block the access due to the type restrictions. This means that with `setenforce 1` you need to use something like code injection to get the external process to access the file descriptors on your behalf.
Ah okay, yeah I suspected that's what you meant (on _RHEL_ xyz is the case). Thanks for clarifying.
> [...] you meant (on _RHEL_ xyz is the case) [...]
Totally on me. We fight against it, but it's hard to keep the implicit context of RHEL/Fedora from being omnipresent on the Red Hat bugzilla. In fact, when I wrote the comment in question I had just finished lighting my incense to the sīla of `systemd`... :)
Did not realize you were in Sydney. AU truly has the best hackers.
As I've discussed before, seccomp can block ptrace and thus this VDSO-based attack (and currently does by default in some distros). Shameless self-link to that post:
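For the curious, this is roughly the shape of such a filter, sketched with libseccomp (the allow-by-default policy and the `deny_ptrace` helper are my own simplification, not any distro's actual profile):

```c
/* Sketch: allow-by-default seccomp filter that denies ptrace(2),
 * loosely mirroring what default container profiles do.
 * Build with -lseccomp. */
#include <errno.h>
#include <seccomp.h>

int deny_ptrace(void)
{
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
    if (ctx == NULL)
        return -1;

    /* ptrace() now fails with EPERM instead of succeeding */
    if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM),
                         SCMP_SYS(ptrace), 0) < 0 ||
        seccomp_load(ctx) < 0) {
        seccomp_release(ctx);
        return -1;
    }

    seccomp_release(ctx);
    return 0;
}
```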
I hate that the isolation of containers gets oversold as a security feature, because the hype obscures the real value in what you might call "configuration isolation".
Often, I am reluctant to run something not because of a trust issue but because of a complexity issue. I run a heavily customized environment. I will often be burned by an application---for example---creating a symlink that under "normal" circumstances is perfectly copacetic but all but destroys some carefully crafted aspect of my environment. Similarly, isolation that is not up to the task of stopping evil is often more than adequate for stopping stupid (e.g., the recent "Steam deletes your home directory" issue). How often have you updated your system only to have one or two apps misbehave? With what jessfraz presents here, yum and apt become tools you can apply selectively. There are real non-security benefits to be had.
I realize that part of the oversell is the nature of hype, but I can't help but feel that a---perhaps---equal part is that talking about these kinds of benefits requires a more subtle and nuanced conversation.
Having moved to Australia in recent years, I was very happy to see that Jaycar, the Australian equivalent of Radio Shack, seems to be doing quite well, judging by the stores I've seen in what is considered a fairly backwards part of Australia (Queensland).
The difference seems to be two-fold. First, Jaycar didn't go through an insane expansion (there are only 12 or so locations per state). Second, they've grown "out" from their core market of electronics hobbyists instead of betraying it. Just a few examples: custom car stereos, solar panel accessories, camping/RV electronics, boating electronics, and DJ stuff like laser projectors. This is all in addition to the Arduino and 3D printing stuff you'd expect. There's a huge amount of crossover between these markets, which works out great for Jaycar.
There is one additional factor keeping Jaycar healthy that probably shouldn't be underplayed. Due to mining and a less crypto-fascist view of unions, Australia has not (yet) completely gutted its blue-collar market and culture. Jaycar advertises almost exclusively to this "tradie" (as the Australians say) demographic. Jaycar is marketed as a "manly" thing, not a "nerd" or "tech" thing. In fact, it's even drummed up minor controversy from feminists for being exclusionary, with ads focusing on "leaving her to go to your man-cave" and whatnot. This part of Jaycar's success probably can't be replicated stateside. The market of people who have the skills to fix a small electric outboard motor and can still afford one died with the rest of blue-collar culture and jobs.
Likewise, in the UK, the dozens of Maplin stores remain places where it's possible to pick up components for electronics projects, alongside the kinds of other electronic products you mention. Evidently, there does remain a place in the world for stores of this kind, whatever can be offered online.
That's interesting. The UK also has more protections for small contractors (e.g. electricians, welders, etc.). I wonder if that's the common thread (meaning it's basically impossible to replicate in the US) or if Radio Shack could have stayed alive if it had just doubled down.
The overgrowth factor shouldn't be ignored though. It's a particularly sad effect of MBA-culture and perverse incentives that so many perfectly viable retail outlets grow themselves to death. Krispy Kreme is the perfect example of a chain that was obviously working and did everything it could to kill itself with growth. I sympathize that everyone involved made decisions that were rational for them and their personal gains; it's just a sad outcome for the chain.
The difference is significant, but if you need two resistors it's $0.50 each at the store, versus a bag of 100 from the internet for $5 plus shipping and waiting, leaving you with 98 spares. Unless it's a particularly common part you'll never use those 98, so you've spent $4 more.
I'm currently living in Australia, and the rent prices are fascinating. You see, to a certain extent, the housing bubble never "popped" (sudden massive drop) here. Instead of being bundled into investments or handled by real estate firms, a large portion of the market is privately owned (this was driven by "negative gearing" that gave tax breaks etc. for ownership). Sure, the market is depressed compared to pre-GFC, but property is still astronomically overpriced. The fascinating part: the rent is much more reasonable. It's like some odd market form of cognitive dissonance. The boomers refuse to believe that a crappy apartment isn't worth 5 million dollars, in the sense that they won't sell any lower and list at that price, but at the same time they'll signal that the "real" value is the much more reasonable half million by charging in rent what would be the equivalent payment for a half million on a twenty-year mortgage.
This leads to all kinds of odd pathological situations. As far as I can tell, paying agents purely in commission is much less common here. Combined with the unmovable listing prices, this has led to what is basically welfare for real estate agents and the most lackadaisical salesmen I have ever witnessed, even adjusting for the generally easy-going culture. I'm used to having to ignore calls from agents once it's known that I'm looking. Here, it's the opposite: you have to keep calling them. I only have my current place because my wife explicitly asked about it, to which the agent responded something along the lines of, "Oh yeah. That's been open for months." Since many of the property owners aren't professionals (pooling the risk/cost of owning over many units) and are cash-strapped, getting maintenance done is a nightmare. It's either all done by the owners themselves (with the ensuing comedy that entails) or farmed out to agencies that still bill the owner mostly piecemeal (so that every issue is a situation akin to getting your insurance to pay your doctor).
I would like to think that when the boomers prove, despite their thinking to the contrary, quite mortal and start dying collectively in large numbers, this will correct the market. My worry is that instead whichever boomers die last will use the last of their block voting power to convince the government to ease restrictions on foreign investment, and Australia will become much like Hawaii.
I'm in a similar situation, having moved to Australia (Melbourne) from Southern California. In Irvine I was paying $2165/month in rent for a 3 bedroom unit in a big rental complex; here in Melbourne I'm paying $1700/month in rent for a 3 bedroom house with a backyard. Coincidentally, my next-door neighbour has just put her same-size house on the market for $650k (http://www.realestate.com.au/property-house-vic-west+footscr...). Paying that mortgage would cost at least twice as much as I'm paying in rent, which shows how out of whack Aussie house prices are.
But the markets for buying and renting houses are different. Rental prices are limited by local incomes. Property prices are only limited by the amount of credit available (currently effectively unlimited in a ZIRP environment) and controls, or the lack thereof, on foreign investors. Coupled with the incredibly idiotic policies in Australia of negative gearing and interest only investor mortgages (!), Aussies have created a monstrous property bubble, and the economic & political fallout from the pop will last for decades.
I dream about renting in Melbourne - I can get the equivalent of what I'm renting in Sydney for easily 2/3 of the price I'm paying now.
You hit the nail on the head there - the price of a house is what a bank is willing to lend you. I can't see how someone can justify paying 620k for a rundown 1 bedroom, 1 bathroom apartment.
The second interest rates go up, or property prices come down, all of these people who are mortgaged to their eyeballs will suddenly find they can't afford to keep their properties, and the collapse is going to be terrible.
I know next to nothing of this property bubble in Melbourne, even though I'm from there originally. But it worries me since I'm going back at the end of this year and will need to find a home to buy. I currently own a little 2 bedroom flat I bought in the 90s for 90K, which apparently will now go for over 500K. I observe that the prices in the suburbs I'm looking at are way over what we'll be able to afford.
Over the past couple of years looking at property online, I have sort of been hoping it is indeed a bubble and that it will pop before I need to buy. But then I'm stuck with my flat potentially being worth far less too...
Your correction will probably also start if interest rates ever come back up and many of the owners have to actually spend some money on their mortgages. Interest rates have been low so long now that we have a whole generation that believes borrowing money will be free, forever.
This phrasing unfairly conflates VM/hypervisor technology and containers. Containers, being a pure software technology, do require near-superhuman ability to secure, but VMs/hypervisors can lean on chip-level separation.
People forget that in-chip memory protection didn't come about for security reasons: memory errors were a particularly dangerous and particularly common kind of bug, and the hardware was extended to help with memory isolation. Memory errors that take down an entire OS session are almost unheard of now that operating systems fully utilize the on-chip protection. Programmers didn't become "superhuman" at preventing these errors.
For similar reasons it's much easier for hardware-backed virtualization programmers to protect you from malicious business inside a VM than it is for OS or container programmers.
The real truth is that the difficulty of containment is proportional to the interface that is available to the contained process. You don't need VM or hypervisor technology to build a virtually unbreakable container. You only need to prevent the contained process from using any syscalls at all.
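As an illustration of that extreme, seccomp's original strict mode reduces the interface to almost nothing (a minimal sketch of the idea, not anyone's production sandbox):

```c
/* Sketch: seccomp strict mode, the "almost no syscalls" container.
 * After the prctl() only read(), write(), exit(), and sigreturn()
 * are permitted; anything else gets the process SIGKILLed. */
#include <linux/seccomp.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
        perror("prctl");
        return 1;
    }

    static const char msg[] = "write() is still allowed\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);

    /* glibc's exit()/_exit() use exit_group(), which strict mode
     * forbids, so make the raw exit() syscall directly. */
    syscall(SYS_exit, 0);
    return 0; /* not reached */
}
```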
Hardware only seems better at this kind of stuff because (a) it's harder to find errata in hardware and (b) the syscall interfaces of commonly used operating systems are much larger than what the hardware offers, and were developed without keeping containability in mind. It is a well-known fact that tacking on security features in hindsight is problematic.
You don't need super-human chip designers because, as you say, "the difficulty of containment is proportional to the interface that is available". Hardware doesn't just seem better because "the syscall interfaces of commonly used operating systems are much larger than what the hardware offers", it is better. It is easier to analyse, has a more limited state-space, has more provable behavior, etc.
You can't just argue away the fact that a certain class of error has been all but eliminated by hardware-supported virtual memory. Multi-tasking as we know it today would basically be impossible without it. The reliability of "just get it right" systems like the early Macintosh isn't even comparable to, for example, a modern Linux machine that uses the chip to trap large classes of erroneous memory accesses.
Given that we have the above, a class of error that programmers seemed unable to avoid now (practically) eliminated, I'm not really sure what you're arguing. Are you saying that hardware designers of the '80s were superhuman?
My point is that there is no difference between software and hardware.
You don't need hardware to eliminate memory errors: software can do it as well. Two examples of this are the Singularity system that Microsoft Research built and Google's NaCl, where the system only loads code that can be verified not to access memory incorrectly.
Your claim that hardware is easier to analyze is also incorrect. Modern processors are extremely complex beasts and are not inherently simpler than software. All processors have long lists of errata. You may be misled into thinking hardware is easier to secure because (a) those errata are less visible to userspace developers because the kernel shields you from them and (b) hardware developers invest much more resources into formal verification than software developers out of necessity (you can't just patch silicon). If software developers invested a similar amount of effort into formal verification tools, your impression would be rather different.
Again, the point is that there is no inherent distinction between software and hardware when it comes to securing systems. It is always and everywhere first a question of how you design your systems and interfaces and second a question of investment in development effort targeted at eliminating bugs.
"My point is that there is no difference between software and hardware."
Okay, now I see where you're coming from. Theoretically I agree. However, practically there are a number of things that make hardware different:
* Hardware has inherent "buy-in". The software systems you describe as also solving the memory access problem are basically opt-in frameworks. While you can make software frameworks hard to opt out of (e.g. via OS integration), by definition software runs on hardware, so a hardware constraint applies to everything.
* Hardware solutions are often much more transparent. Again, your software examples require a great deal of re-tooling. One of the most elegant aspects of the classic '80s memory access solution was how transparent it was.
* There are far fewer hardware vendors than software vendors. Combine this with the fact that, as you point out, hardware is expensive to retool, and you get an environment where it is much more likely that a single hardware solution will be "correct enough" to enforce a constraint on software than it is that the majority of software will properly opt in to a framework or code correctly.
> You now rely on chip designers being super-human.
At this point I want to ask how we're defining "super-human." What level of reliability is considered to have "super-human" requirements? There are certainly very simple and clear ways that one product produced by normal humans is much more reliable than another. For example, if you admonished someone to wear their seat belt while driving, you would scoff if they replied "well then I'm just relying on seat belt designers being super-human."
I actually agree with this. I believe that, using the right techniques, both software and hardware can be produced correctly. How easy that is depends on their design and complexity.
It's also worth keeping in mind that modern processors are actually extremely complex and that they do regularly have errata, even though chip designers are extremely conservative in their approach by necessity (you can't just patch silicon) and are much more thorough and disciplined in their use of formal verification tools than the vast majority of software designers.
Agreed. It's also a question of complexity. Xen (for example) has a significantly smaller attack surface than the Linux kernel because it just has less stuff to do.
Even if you physically separate, you risk being exploited over whatever medium you have to communicate with the untrusted machine. There are no silver bullets, unless you count total isolation.
This vulnerability is a good example of how a security bug is still a bug. That is: even if all the bad guys went away there would still be problems. This issue was fixed (as far as I can tell) pre-1.0 for non-security reasons. See this discussion: https://github.com/dotcloud/docker/issues/5661 .
Basically, docker used to have a "drop" list associated with each execdriver. By default docker kept all kernel-bestowed capabilities but would explicitly drop those on the execdriver's drop list. This created compatibility issues. If an image was prepared on a kernel that didn't have a particular capability at all and was suddenly run on one that did, weird, hard-to-diagnose behavioral differences could emerge. So now docker drops everything by default and the execdrivers have a "keep" list. Also, there's now a check that the kernel defines a capability before trying to mess with it.
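A rough sketch of the keep-list idea using libcap (`apply_keep_list` and the choice of `CAP_NET_BIND_SERVICE` are my own illustration, not docker's actual code):

```c
/* Sketch of the "keep list" approach: build an empty capability set
 * and raise only what's on the keep list, instead of starting full
 * and dropping. Must run with sufficient privilege (e.g. root),
 * since cap_set_proc() can't raise permitted bits you don't hold.
 * Build with -lcap. */
#include <sys/capability.h>

int apply_keep_list(void)
{
    cap_value_t keep[] = { CAP_NET_BIND_SERVICE }; /* arbitrary example */
    cap_t caps = cap_init(); /* cap_init() returns an all-clear set */
    if (caps == NULL)
        return -1;

    if (cap_set_flag(caps, CAP_PERMITTED, 1, keep, CAP_SET) < 0 ||
        cap_set_flag(caps, CAP_EFFECTIVE, 1, keep, CAP_SET) < 0 ||
        cap_set_proc(caps) < 0) {
        cap_free(caps);
        return -1;
    }
    return cap_free(caps);
}
```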
There are---of course---still some issues. For example, docker drops everything then tries to add the capability back. That won't work for some capabilities because some can't be added back once they've been renounced. Still, the situation is an interesting example of how a security bug is still a bug and is likely to bite you in some way sooner or later. See also XSS flaws that limit valid inputs.
Yes. I pointed it out for LXC originally, filing the bug with LXC on SourceForge and later re-filing on GitHub. Nothing to do with docker. Finally, they implemented it. We're talking about a timeframe of multiple years here. Later I pointed it out for docker. Just sayin'. There's a lot of us contributing here, and much of the work isn't under the docker banner.
"There's a lot of us contributing here, and much of the work isn't under the docker banner."
Of course! If my comment could be taken as part of the mass of prose that can be read to imply otherwise, I apologize. I don't want to undermine what Docker has done with providing a packaging format, UX, and history niceness. I also don't want to undermine the years of hard work that's gone into kernel namespacing etc. that makes it all possible.