
If you have container orchestration in place, being able to use it to run VMs via qemu is actually incredibly useful and isn't really much of a yikes. Sure, you're losing the container's isolation features, but you're getting a VM instead, which is even stronger isolation.

We do CI for our VM images in our kubernetes clusters. The build system was already in kubernetes, so putting the OS image testing there was a big win.

The benefit of doing this is also that on a personal machine you can start playing with an OSX VM with a single docker run command and no other dependencies. Many people already have Docker set up, whereas standardized qemu/virtualization tooling is now much less common on developer machines.
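For concreteness, the single command looks something like this; "osx-vm" is a made-up image name, and the exact flags vary by project:

    # Sketch only: requires /dev/kvm on the host, i.e. hardware
    # virtualization enabled in the BIOS and the kvm module loaded.
    docker run -it --device /dev/kvm osx-vm:latest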



I don't think you understand what this project is.

You have to fiddle with BIOS settings and kernel module parameters, install packages, configure KVM, etc. on your docker host for this to work. It's not something that you can just throw into kubernetes, especially if you don't manage the kubernetes deployment yourself.

> The benefit of doing this is also that on a personal machine you can start playing with an OSX vm with a single docker run command with no other dependencies

There _are_ external dependencies that you have to set up manually. It's the same amount of work whether you set this up with docker or use a real VM, so I can't imagine why you would prefer this method.


To run 64-bit VMs, you have always needed to turn on hardware virtualization in the BIOS. To configure KVM, all you need to do is run "modprobe kvm" if the module isn't already loaded. At that point, everything else is user space, and 100% of the user-space dependencies are installed in the image. All the docs about libvirt on the GitHub page are unnecessary. So the full steps are really:

1. Enable hardware virtualization

2. modprobe kvm

3. docker build

4. docker run

and you have an OSX VM.
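In shell form, assuming the project's Dockerfile is in the current directory (the "osx-vm" tag is a placeholder):

    # Step 1 (enabling VT-x/AMD-V) is done in the BIOS/UEFI setup screen.
    sudo modprobe kvm                 # step 2; on some systems kvm_intel
                                      # or kvm_amd is needed as well
    docker build -t osx-vm .          # step 3
    docker run -it --device /dev/kvm osx-vm   # step 4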

> It's not something that you can just throw into kubernetes, especially if you don't manage the kubernetes deployment yourself.

GCP and Azure support nested virtualization, so you actually could do this in a managed kubernetes cluster. It's plenty common to use privileged DaemonSets in kubernetes to load kernel modules for filesystems, storage, or iptables rules. If you're allowed to run privileged containers, it's trivial to run VMs like this in kubernetes.
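For reference, the module-loading pattern looks roughly like this, assuming a cluster that allows privileged pods; every name here is made up, and busybox's modprobe applet is used for brevity:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kvm-module-loader
    spec:
      selector:
        matchLabels:
          app: kvm-module-loader
      template:
        metadata:
          labels:
            app: kvm-module-loader
        spec:
          containers:
          - name: loader
            image: busybox
            securityContext:
              privileged: true        # grants CAP_SYS_MODULE
            # Load the module once, then idle so the pod stays Running.
            command: ["sh", "-c", "modprobe kvm && while true; do sleep 3600; done"]
            volumeMounts:
            - name: modules
              mountPath: /lib/modules # host's module tree, read-only
              readOnly: true
          volumes:
          - name: modules
            hostPath:
              path: /lib/modules
    EOF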


I work at Sourcegraph and have been considering something like this for a while for long-running jobs, things like CI pipelines and GitHub Actions for example.

I wonder how you'd feel about an app like GitLab, for example, shipping a docker container that required these privileges?


It'd definitely be a harder sell for third-party software to require a privileged container with /dev/kvm mounted when you run it in your environment, especially since nested virtualization is largely unavailable on AWS. It also requires that the correct kvm kernel module is loaded, and so on.

However, if the product genuinely required virtualization and that was recognized as a requirement, then also distributing a docker image that could do it would probably be useful for people in the "if you don't have virtualization infrastructure, but do have container orchestration and nodes that support virtualization, our service will also work in a privileged container" camp.
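If a vendor did ship an image like that, a defensive entrypoint check is cheap. A sketch in plain sh (the qemu invocation at the end is a stand-in for whatever the image actually runs):

    #!/bin/sh
    # Fail early with a useful message instead of a cryptic qemu error.
    if [ ! -e /dev/kvm ]; then
      echo "error: /dev/kvm not found; enable hardware virtualization," >&2
      echo "load the kvm module (modprobe kvm), and run the container" >&2
      echo "with --device /dev/kvm (or privileged: true in k8s)." >&2
      exit 1
    fi
    if [ ! -w /dev/kvm ]; then
      echo "error: /dev/kvm is present but not writable by this user." >&2
      exit 1
    fi
    exec qemu-system-x86_64 -enable-kvm "$@"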


Good points, I appreciate the response!



