
The recommended way of doing this is to use Hyper-V and multiple VMs. Windows VMs start very fast on Hyper-V.

That gives you virtual LAN bandwidth control, "Storage QoS" (IOPS limits), partitioning, affinity and CPU limits. On top of that, you get a virtual SAN and network fabric. It's pretty awesome!
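All of those knobs are exposed in the Hyper-V PowerShell module. A sketch ("app01" is a made-up VM name, and Storage QoS needs Server 2012 R2 or later):

    # Cap the virtual NIC at roughly 100 Mbit/s
    Set-VMNetworkAdapter -VMName "app01" -MaximumBandwidth 100MB
    # Storage QoS: cap the first SCSI virtual disk at 500 normalised IOPS
    Set-VMHardDiskDrive -VMName "app01" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 -MaximumIOPS 500
    # Reserve 10% of host CPU for the VM, cap it at 50%
    Set-VMProcessor -VMName "app01" -Reserve 10 -Maximum 50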

If you utilise it by deploying your app to a .wim file, converting that with wim2vhd and firing it at a Hyper-V host, Docker effectively already exists on Windows, so I'm not sure what all the fuss is about.
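Roughly like this, with made-up names and paths, and with Convert-WindowsImage.ps1 standing in as the current incarnation of WIM2VHD (exact parameters vary by version of the script):

    # Turn the captured .wim into a bootable VHDX
    .\Convert-WindowsImage.ps1 -SourcePath C:\images\app.wim -VHDPath C:\vhds\app01.vhdx -VHDFormat VHDX
    # Point a new Gen 2 VM at it and boot
    New-VM -Name "app01" -Generation 2 -MemoryStartupBytes 512MB -VHDPath C:\vhds\app01.vhdx
    Start-VM -Name "app01"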

Yes we do this.



Because running a Hyper-V virtual machine is significantly more resource intensive than running a true Docker container.

Hyper-V and similar are great; they just aren't as efficient as containers. You gain more security with Hyper-V, but even assuming full hardware support it is still an expensive thing to be doing.

On a normal desktop machine you can run maybe 2-4 virtual machines (depending on a lot of factors). On that same machine you could comfortably run 4-10 containers, each doing one "thing" and one thing only.
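Something like this, with made-up names, where every container is a single service:

    docker run -d --name web   nginx
    docker run -d --name cache redis
    docker run -d --name db    postgres
    docker run -d --name app   myorg/api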


> Because running a Hyper-V virtual machine is significantly more resource intensive than running a true Docker container.

I suspect that no one really cares. CPUs are pretty cheap. The problem Docker solves is "zero-dependency, single-step installs and reliable rollbacks", not "VMs are too slow".
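For example (image name and tags invented), the install is a single pull-and-run, and rolling back is just starting the previous tag again:

    docker run -d --name app myorg/app:1.4.2
    # bad release? roll back:
    docker stop app
    docker rm app
    docker run -d --name app myorg/app:1.4.1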

This is true on Linux too, by the way: Linux containers are still pretty feeble from a security perspective without draconian seccomp sandboxing.
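e.g. (profile path invented) the difference between a locked-down container and a wide-open one is a single flag:

    # run under a tightened seccomp whitelist
    docker run -d --security-opt seccomp=./strict.json myorg/app
    # versus no syscall filtering at all
    docker run -d --security-opt seccomp=unconfined myorg/app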


No: the problem is that VMs are too slow, and therefore you need to spread them out over more machines, and therefore VMs are too expensive. Otherwise, why not just make every container a full EC2 instance?

All the analogous tech to Docker has existed in the AWS+OpenStack ecosystem for a decade; it's just that nobody is willing to pay $50/mo per container. An entire huge company (Heroku) lives off the profit margin created by buying VMs and then selling containers at VM prices.


I agree with both of you, for different reasons.

The infrastructure benefits are definitely there in terms of driving up utilization/density of the hardware you're buying. I feel (and know!) there will be a lot of optimization work to make this happen.

On the other hand, there are multi-tenancy security concerns for which VMs still offer better isolation. That's OK, because Docker is not about virtualization; it's a platform for distributed applications.

Thus, finding the right/secure/optimal/? place to run your container should be straightforward and intuitive. That's the direction we're looking to take the industry.


> the problem is that VMs are too slow, and therefore you need to spread them out over more machines, and therefore VMs are too expensive

I think what you really mean is that you can't control the overcommit policy on AWS. Otherwise, if VMs were too slow, why would you be running on AWS at all?

> Otherwise, why not just make every container a full EC2 instance?

A lot of people have full EC2 instances running a single container. Or full machines. If you want to upgrade your database software, an atomic, full-system rollback looks pretty enticing, especially if your database machines are huge, hugely expensive, and you don't have that many spares.


There's no reason to have full EC2 instances running a single container: EC2 instances are containers. (And Docker images are AMIs, and fig.yml files are CloudFormation templates, and...)

You can treat EC2 instances exactly the same as you treat docker containers—attaching EBS volumes to them in the way you'd attach data volume containers, attaching ENIs like you'd publish ports, etc. You can get exactly your "atomic, full-system rollback" just by having a CF template with a template parameter for the DB AMI to start in an Autoscaling Group, and then pushing a change to that variable. (Effectively, this gives you the same semantics as using Heroku's "config:set" CLI subcommand.)
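The "rollback" is then one parameter flip. A sketch (stack and parameter names invented, and it assumes the Auto Scaling Group has a rolling-update policy so instances get replaced):

    aws cloudformation update-stack --stack-name db-prod --use-previous-template --parameters ParameterKey=DbAmiId,ParameterValue=ami-0abc1234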

But people don't do things this way. Why? Because the different pricing models create different systems of incentives around the two ecosystems. EC2 instances are thought of, fundamentally, as machines rather than as application containers. There are adapters meant to make them more usable as application containers (e.g. Elastic Beanstalk), but in doing so they expose the relative absurdity of the VM pricing model.


I disagree. With Generation 2 VMs, I run 24 VMs quite happily on my 32 GB machine with dynamic memory turned on. My 8 GB X201 runs 6, plus Visual Studio and SQL Server, perfectly well. I haven't tried any more than that yet.
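For reference, dynamic memory is a per-VM switch (a sketch; name and sizes made up):

    Set-VMMemory -VMName "app01" -DynamicMemoryEnabled $true -MinimumBytes 256MB -StartupBytes 512MB -MaximumBytes 2GB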

As for apps, IIS makes a pretty good container system if you want to use it for that sort of thing.
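e.g. one application pool per app gives you a supervised worker process with its own identity and recycling rules (a sketch; names and paths invented, WebAdministration module assumed):

    Import-Module WebAdministration
    New-WebAppPool -Name "app01"
    New-Website -Name "app01" -Port 8081 -PhysicalPath "C:\apps\app01" -ApplicationPool "app01"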


Unfortunately you need a Windows license per VM.

I guess many places would just buy Datacenter edition, but that's a bit too much for small businesses.


If you're using Hyper-V, you get an unlimited number of guests if you've purchased Windows Server Enterprise Edition. Last I checked, it's not even that expensive an upgrade.


Or just use Windows Datacenter Edition on your hypervisor. Most cloud providers have that as an option. It's about $200 a month, and you can run as many VMs as your hardware will support.


Yes, we have Datacentre edition and the volume licensing stuff. Small business? Use Azure.



