
> Because running a Hyper-V virtual machine is significantly more resource intensive than running a true Docker container.

I suspect that no one really cares. CPUs are pretty cheap. The problem Docker solves is "zero-dependency, single-step installs and reliable rollbacks," not "VMs are too slow."
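To make that concrete, the install/rollback workflow looks roughly like this (a sketch using the docker-py SDK; the image names and tags are made up):

    import docker

    client = docker.from_env()

    # "Single-step install": the image bundles every dependency,
    # so deploying is one pull + one run.
    client.containers.run("myapp:2.0", detach=True, name="myapp")

    # "Reliable rollback": stop the new version and re-run the old
    # image tag, which is still cached on the host, byte for byte.
    bad = client.containers.get("myapp")
    bad.stop()
    bad.remove()
    client.containers.run("myapp:1.9", detach=True, name="myapp")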

This is true on Linux too, by the way: Linux containers are still pretty feeble from a security perspective without draconian seccomp sandboxing.
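(For the curious, that hardening looks something like the sketch below, again via docker-py. The profile path is hypothetical, and it assumes the engine accepts the profile JSON inline in SecurityOpt, which is what the docker CLI does under the hood when you pass it a file path.)

    import docker
    from pathlib import Path

    client = docker.from_env()

    # Load a whitelist-style seccomp profile; any syscall the
    # profile doesn't allow is blocked for the container.
    profile = Path("seccomp-profile.json").read_text()

    client.containers.run(
        "myapp:2.0",
        detach=True,
        # The engine API takes the profile JSON inline, prefixed
        # with "seccomp=".
        security_opt=["seccomp=" + profile],
    )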



No: the problem is that VMs are too slow, and therefore you need to spread them out over more machines, and therefore VMs are too expensive. Otherwise, why not just make every container a full EC2 instance?

All the tech analogous to Docker has existed in the AWS+OpenStack ecosystem for a decade; it's just that nobody is willing to pay $50/mo per container. An entire huge company (Heroku) lives off the margin created by buying VMs, packing each one with containers, and selling those containers at VM prices.
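Back-of-the-envelope, with entirely made-up numbers, just to show the shape of the margin:

    # Hypothetical figures for illustration only.
    vm_cost_per_month = 50         # what the platform pays per VM
    containers_per_vm = 10         # containers packed onto that VM
    price_per_container = 35       # what a customer pays per container

    revenue = containers_per_vm * price_per_container  # 350
    margin = revenue - vm_cost_per_month               # 300
    print(f"margin per VM: ${margin}/mo")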


I agree with both of you, for different reasons.

The infrastructure benefits are definitely there in terms of driving up the utilization/density of the hardware you're buying, and I feel (and know!) there will be a lot of optimization work to make that happen.

On the other hand, there are untrusted multi-tenancy scenarios for which VMs still offer better isolation. That's OK, because Docker is not about virtualization; it's about a platform for distributed applications.

Thus, finding the right/secure/optimal place to run your container should be straightforward and intuitive. That's the direction we're looking to take the industry.


> the problem is that VMs are too slow, and therefore you need to spread them out over more machines, and therefore VMs are too expensive

I think what you really mean is that you can't control the overcommit policy on AWS. Otherwise, if VMs were too slow, why would you be running on AWS at all?

> Otherwise, why not just make every container a full EC2 instance?

A lot of people have full EC2 instances running a single container. Or full machines. If you want to upgrade your database software, an atomic, full-system rollback looks pretty enticing, especially when your database machines are huge, hugely expensive, and you don't have many spares.


There's no reason to have full EC2 instances running a single container—EC2 instances are containers. (And Docker images are AMIs, and fig.yml files are CloudFormation templates, and...)

You can treat EC2 instances exactly the way you treat Docker containers: attach EBS volumes to them the way you'd attach data volume containers, attach ENIs the way you'd publish ports, and so on. You can get exactly that "atomic, full-system rollback" just by having a CF template with a parameter for the DB AMI launched by an Auto Scaling group, then pushing a change to that parameter. (Effectively, this gives you the same semantics as using Heroku's "config:set" CLI subcommand.)
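Concretely, "pushing a change to that parameter" is a one-call stack update (a sketch with boto3; the stack name and parameter names are invented):

    import boto3

    cf = boto3.client("cloudformation")

    # The template declares a "DbAmi" parameter consumed by the
    # Auto Scaling group's launch configuration. Changing it rolls
    # the fleet onto the new image; setting it back to the previous
    # AMI ID is the atomic, full-system rollback.
    cf.update_stack(
        StackName="db-fleet",
        UsePreviousTemplate=True,
        Parameters=[
            {"ParameterKey": "DbAmi", "ParameterValue": "ami-0123456789abcdef0"},
            {"ParameterKey": "InstanceType", "UsePreviousValue": True},
        ],
    )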

But people don't do things this way. Why? Because the different pricing models create different systems of incentives around the two ecosystems. EC2 instances are thought of, fundamentally, as machines rather than as application containers. There are adapters meant to make that usage more approachable (e.g. Elastic Beanstalk), but in doing so they expose the relative absurdity of the VM pricing model.



