
I can only speak for myself. I find docker to be immensely useful. Sure, there were always VMs, but I was very frustrated with multi-vm setups before discovering docker.

At the very least it let me iterate on system configuration / installation procedures more quickly. I have learned a lot just due to tweaking Dockerfiles and rebuilding them. Virtual machines are much slower in this regard.

In data science, Docker is becoming revolutionary. People tried distributing virtual machines to let others reproduce their work. Dockerfiles are much more reproducible, and don't need a few gigabytes of separately hosted VM images. Also these VM images usually contain tons of undocumented "state", whereas a Dockerfile is easier to reverse engineer and ultimately very reproducible.

You can include a Dockerfile in each of your projects. Other developers can then go in and get started with a minimal amount of guesswork. It turns out that a "sane" development environment, meaning one supposedly optimal configuration/setup/framework shared by all your projects, is the exception rather than the norm.
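To make that concrete, here is a minimal sketch of the kind of Dockerfile you might check into a project; the project layout, requirements.txt, and main.py are hypothetical stand-ins:

```dockerfile
# Hypothetical Dockerfile for a small Python project. Every setup step is
# written down here, so anyone can rebuild the same environment from scratch.
FROM python:3-slim

WORKDIR /app

# Install pinned dependencies first, so this layer is cached across rebuilds
# while you iterate on the application code.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python", "main.py"]
```

A new developer then just runs `docker build` and gets the same environment, with no undocumented state hiding in a VM image.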



With all due respect, you could get these exact same benefits by using Vagrant with a CM (configuration management) tool. Vagrant removes the need to manually manage the VMs, and the CM tool handles the iteration on "system configuration / installation procedures".

It's my opinion that Docker images from Docker Hub are less repeatable than your average VM; as pointed out in the article, there's nothing stopping someone from re-using a tag, and the contents of the image depend entirely on what the upstream image creators felt was necessary (which can include things like grabbing config files from the internet in order to run a service).


> Also these VM images usually contain tons of undocumented "state"

Although I agree this approach is absolutely flawed, and anyone doing so needs to re-evaluate their workflow, I feel this comment is more an argument for doing things properly than a positive of Docker. Everything you mentioned above can be easily achieved by using Vagrant and a provisioner. But as discussed in other comments, the Docker ecosystem helps you get started quicker.


I have used Vagrant and I don't think it is in any way better except when bridging operating systems. Vagrant-based workflows are typically slow and resource hungry compared to Docker. I have Python projects where I can run test suites on multiple separate, fresh containers, on Python 2 and 3 and in several combinations of dependencies, in a few seconds on a weak netbook. Try that with Vagrant sometime...
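A sketch of that kind of test matrix, assuming a running Docker daemon; the image tags, mount path, and `pytest` invocation are illustrative, not from the original post:

```shell
#!/bin/sh
# Run the test suite in fresh, throwaway containers across interpreter
# versions. --rm discards each container after the run, so every suite
# starts from a clean image.
for tag in 2.7 3.4; do
    docker run --rm -v "$PWD":/src -w /src "python:$tag" \
        python -m pytest tests/
done
```

Each iteration starts from the pristine image, so there is no leftover state between runs, and container startup is fast enough to make this practical on modest hardware.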

You seem to assume that everyone should be doing everything the "right way". Turns out they don't. I found it easier to deal with stuff not being done "perfectly" than to try to argue other developers into doing stuff "right".


Vagrant certainly has its own problems and frustrating bugs; in fact, I outright refused to use it in the early days. There are differing opinions on "the right way", depending on what your priorities are. Mine are based on precision, perfection and beauty.


> Mine are based on precision, perfection and beauty.

As opposed to those who prefer imprecise, flawed and ugly solutions? Jesus dude, could you possibly come up with a more arrogant sentence?


I'm glad you replied, as it wasn't intended to be arrogant. Some people care about the end result, not worrying about how it gets done, as long as it's done. Others care about the journey, perfecting their art at every stage, with little consideration for the end goal. And some people are able to balance these things.

Have you ever looked at the source code of a dependency, followed by an overpowering and compelling urge to write your own from scratch? Suppressing the urge to rage-quit because your tools are fighting against you, not with you? For me, it is a constant daily fight to force myself not to do things "properly", for it's mostly an unrealistic and unachievable goal.

Believe me when I say that being a perfectionist is nothing to be arrogant about, and despite it being one of my major strengths in some ways, it's also one of my biggest flaws.

(I'm taking it on faith that you're not trolling, and have taken time to give an honest reply)


did you just say being a perfectionist is one of your biggest flaws ugh dude


> I can only speak for myself. I find docker to be immensely useful

It seems you are making an argument not for Docker itself, but for Linux containers in general. To that point, something like the App Container Specification standard and one of its implementations would suit your needs just as well, if not better. There are already three prominent implementations of the App Container Specification, each doing things differently but all compatible and interchangeable with each other and their images.


No, it is an argument for Docker's workflow. It seems a lot of the criticism calling Docker unnecessary overlooks how it makes distributing Linux containers as easy as git.


Is that not one of the criticisms? Downloading an image of unknown origin and blindly trusting it before you start adding your needed components?

Creating your own base image is really the only way to ensure the base is trustworthy.
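For what it's worth, Docker does at least let you pin a base image by content digest rather than by mutable tag, so once you've vetted an image you keep getting the exact same bytes; the digest below is truncated and purely illustrative:

```shell
# Tags are mutable and can be silently re-pointed; content digests are not.
# Pulling by digest guarantees byte-identical image content every time.
# (The digest shown here is illustrative, not a real one.)
docker pull debian@sha256:1234...abcd
```

This doesn't solve the question of whether the original contents were trustworthy, but it does prevent the tag re-use problem mentioned above.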

And to that end, the App Container Specification has image discovery and downloading covered[1]

[1] https://github.com/appc/spec/blob/master/SPEC.md#app-contain...


If your argument is that it is too easy to share then you are going to have a hard time winning over users.

But honestly, how hard do you think it is to integrate secure builds into a docker system? I would just stand up a private docker registry and lock the build system to that; problem solved. Or docker could roll out an update to leverage the existing namespaces and combine that with a user controlled whitelist of public key & namespace pairs. The reason docker has enjoyed this much success is because they understand sharing is #1.
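Standing up a private registry really is about this simple; the commands below are a sketch using the official `registry` image, with `myapp` and `localhost:5000` as assumed placeholders:

```shell
# Run a private registry locally on port 5000 (official 'registry' image).
docker run -d -p 5000:5000 --name registry registry:2

# Tag an image with the registry host, then push it there. A build system
# can be locked to pull only from this registry.
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest
```

Locking builds to a registry you control sidesteps the question of trusting unknown Docker Hub images entirely.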


> Or docker could roll out an update to leverage the existing namespaces and combine that with a user controlled whitelist of public key & namespace pairs.

Could, but haven't.

> The reason docker has enjoyed this much success is because they understand sharing is #1.

Except they have fought tooth and nail against a standardized specification until App Container Specification was released. Docker isn't about "sharing", it's about vendor lock-in.


Thanks for sharing; I've seen appc mentioned in a few comments, so I'll check it out properly tomorrow. You also raise a good point about blindly trusting containers which, despite having had concerns about it in the past, I regrettably didn't touch on.



