Hacker News

It seems that

1) You have not dealt with large enough data, since you advocate just creating VM copies or snapshots. Try that on 10 or 100 TB of data.

2) You haven't thought about what those initial CSIE configurations contain and how they are generated. Do you hand-tweak everything, make && make install all the software onto a particular installation of a particular OS, and then just spawn those? That approach belongs in the dustbin of history. You now essentially have a black box that someone somewhere tweaked, with no recipe for repeating it. If that person leaves the company, it can be tricky to work out what was installed, where, and at which version.

If you have "configuration" drift, the fix belongs in the configuration declaration; people shouldn't be hand-editing and messing with individual production servers. If a network operation fails in the middle, then the configuration management system needs better transaction management (maybe use OS packages instead of just ./configure && make && make install), so that when an operation fails, it is rolled back.
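The transactional idea above can be sketched generically (the step structure here is hypothetical, not any particular tool's API; in practice an OS package manager approximates this for you by recording what was changed so it can be reversed):

```python
# Sketch: apply configuration steps as a transaction. Each step is a
# (do, undo) pair of callables; if any step fails, already-applied
# steps are undone in reverse order, so the server never stays in a
# half-configured state.

def apply_transactionally(steps):
    applied = []
    try:
        for do, undo in steps:
            do()
            applied.append(undo)
    except Exception:
        # Roll back everything that succeeded, newest first.
        for undo in reversed(applied):
            undo()
        raise
```

This is roughly what OS packaging buys you over a bare make install: a record of what was done, so a failed or interrupted operation can be walked back instead of leaving a mystery state.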



> you advocate just creating VM copies or snapshots. Try that on 10 or 100 TB of data

There are many ways to take an image of an environment, not only VMs or snapshots. But if your system image includes 10-100TB, it could be argued that the problem of size really lies in earlier design decisions.

> You haven't thought about what and how those initial CSIE configurations are generated.

On the contrary, generation should be automated. In the same way that a service to deploy to such an environment is maintained as an individual service project, the environment itself is similarly maintained, labelled, tested and versioned as a platform definition.
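As a minimal illustration of that idea (all names here are hypothetical, not any real tool's API), a platform definition can be an ordinary versioned artifact whose label is derived from its own declaration, so "who installed what, at which version" is always answerable from the definition rather than from a black-box image:

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass(frozen=True)
class PlatformDefinition:
    """A declarative description of an environment: a base OS image
    plus fully pinned package versions. The definition lives in
    version control like any service project."""
    base_image: str
    packages: tuple  # of (name, version) pairs

    def label(self) -> str:
        # Derive a stable label from the declaration itself, so two
        # identical definitions always produce the same label.
        blob = json.dumps(
            {"base": self.base_image, "packages": sorted(self.packages)},
            sort_keys=True,
        )
        return hashlib.sha256(blob.encode()).hexdigest()[:12]
```

Images built from the same definition are interchangeable, and a drifted server can be checked against the label it was built from.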


> In the same way that a service to deploy to such an environment is maintained as an individual service project, the environment itself is similarly maintained, labelled, tested and versioned as a platform definition.

A mix of the two. Use Salt/Puppet/Chef etc. to bootstrap a known OS base image into a stable production platform VM, for example, then spawn clones of that. I would do that, and I can see how it would work very well with testing.


Agreed. I think PFCTs are good for the generative part, not for the rest. They should really be part of a build process, not of live infrastructure.


> if your system image includes 10-100TB, it could be argued that the problem of size really lies in earlier design decisions.

Without disagreeing with your conclusion about the design process, it's useful to note that this situation simply isn't a problem for a conventional configuration management tool.

> On the contrary, generation should be automated.

One could argue that Puppet and Chef are ideal tools for performing that automation.


> this situation simply isn't a problem for a conventional configuration management tool

Sure. But loads of other stuff is. The weight of the tradeoffs is clearly against PFCTs here.

> One could argue that Puppet and Chef are ideal tools for performing that automation.

Absolutely agree - but not within live infrastructure. Only build.



