
if you're using EC2, why not just create an AMI and forget about configuration tools?


You can certainly create an AMI preconfigured with all of your libraries, packages, and services ready to go, but issues arise over time with package updates, system updates, one-off configuration changes... the list goes on.

The actual configuration of your system will drift further and further away from that templated AMI, leaving you either to build a new template every time you deploy a new machine, or to manually apply all of those differential changes to each new system.

Config management can be tailored to rapidly update the configurations of certain classes of systems far more frequently than the constant cycle of blowing away machines, updating templates, redeploying, etc., etc.


That doesn't really solve the problem, it just moves it.

If AMIs are your deployment method of choice, you still need to build a repeatable process for applying updates and changes to your previous base AMI, testing that it still works under all your realistic configurations, and then gracefully deploying it to your production infrastructure.

Which is all doable, but not trivial. And the processes you build will be very Amazon-centric.


Having done this a dozen or more times: can you package your AMI for your developers to use? Can you rebuild your image exactly as it was when you first created it, should it be lost in the cloud? (100% reliability isn't something AWS provides.)

Provisioning tools let you create it, and incrementally update your image in a way that lets you redo it from scratch at any time.

That alone is the reason why I like the idea, not necessarily the resulting applications that have been created so far for the task.

Shell scripts could do the same, and have for years, if you're only interested in from-scratch setups.


This is not a difficult problem. I recently had to stand up a cluster of EC2 instances for a job and used these three steps:

1. Write a script to configure an instance and run it when the instance starts.

2. Clone the instance to an image

3. Run instances based on the image.

It's quite straightforward to do. See http://github.com/gyepisam/fcc-textify for more details.
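The three steps above can be sketched as AWS CLI invocations (this is my own rough sketch, not necessarily how the linked repo does it; the AMI IDs, names, and instance type are placeholders):

```python
# Build the AWS CLI commands for the launch/bake/run cycle. Each returned
# list would be executed with subprocess.run(cmd, check=True).

def launch_with_userdata(base_ami, script_path):
    """Step 1: boot a stock instance whose user-data script configures it."""
    return ["aws", "ec2", "run-instances",
            "--image-id", base_ami, "--count", "1",
            "--instance-type", "t2.micro",
            "--user-data", f"file://{script_path}"]

def bake_image(instance_id, name):
    """Step 2: clone the configured instance into an image."""
    return ["aws", "ec2", "create-image",
            "--instance-id", instance_id, "--name", name]

def launch_cluster(image_id, count):
    """Step 3: run instances from the baked image."""
    return ["aws", "ec2", "run-instances",
            "--image-id", image_id, "--count", str(count),
            "--instance-type", "t2.micro"]
```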


> can you package your ami for your developers to use

SuSE Studio lets you do exactly that. (Though exporting to AMIs is just a feature, not the main objective.)


Because a lot of people don't use their brains and realize that Linux package management / image creation has been a solved problem forever (I've been doing this since Red Hat kickstart circa 2001, and I'm a young guy).

However, there are some common sense tiers:

3rd-party dependencies that change infrequently = put in AMI

Common ops daemons and boot scripts = put in AMI

Your application code and your configs = deploy via python/ssh client side
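The client-side python/ssh tier could look something like this (my own sketch; the hostnames, paths, and service name are invented):

```python
# Push code and configs to a target host over rsync/ssh. Each returned
# command list would be run with subprocess.run(cmd, check=True).

def rsync_cmd(src, host, dest):
    """Copy the application tree to the target host."""
    return ["rsync", "-az", "--delete", src, f"deploy@{host}:{dest}"]

def ssh_cmd(host, remote_command):
    """Run a post-deploy step (e.g. restart a service) on the target."""
    return ["ssh", f"deploy@{host}", remote_command]

def deploy(host):
    """Full deploy: sync the app, then restart it."""
    return [rsync_cmd("./app/", host, "/srv/app/"),
            ssh_cmd(host, "sudo systemctl restart app")]
```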

Also, one simply has to use ec2 instance tags to name things.

Result, you can have bit-for-bit identical instances fired up in no time with high confidence without need for a full OS package mirror. Without, of course, a ridiculously over-engineered configuration server framework, DSL, SPOF, security holes...


Why not just use linux packages + scripts?

I asked myself this same question several years ago. The basic advantages chef and puppet (and salt, etc.) offer over client-side python alone are:

A thin abstraction layer for basic system/platform information (facter/ohai). System information that you'd obtain using various tools like dmidecode, uname, df, /proc, and ip is all gathered and made available in a single data structure. Overkill? Maybe, but it makes for much cleaner scripts.
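A toy analog of what facter/ohai do, in stdlib Python (real tools collect far more; the key names here are my own, not facter's or ohai's):

```python
# Pull scattered system facts into a single dict instead of shelling out
# to uname, df, etc. separately in every script.
import os
import platform
import shutil

def gather_facts():
    du = shutil.disk_usage("/")
    return {
        "kernel": platform.system(),           # like uname -s
        "kernel_release": platform.release(),  # like uname -r
        "hostname": platform.node(),
        "cpu_count": os.cpu_count(),
        "root_fs": {"total": du.total, "free": du.free},  # like df /
    }
```

Scripts then read `facts["kernel_release"]` instead of parsing tool output.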

A library for performing common tasks: package installation/upgrade, directories, files and templates, services, and shell execution. A full list: http://docs.opscode.com/chef/resources.html#resources.

Finally, a framework for organizing your configuration in a standardized way, with most of it in one place. With chef, for example, the idea is to put all the configuration for a single application in one "cookbook" and then, if you have two different servers that both use that application in different ways, you define two different roles that pass different parameters to the basic recipes. Obviously the extent to which this is a benefit depends somewhat on your needs and personal preferences.
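The cookbook/role split boils down to one parameterized recipe fed by per-role attribute sets. A sketch (the role names, attributes, and recipe are all invented for illustration):

```python
# One "recipe" renders an application's configuration from attributes;
# "roles" are just different attribute sets passed to the same recipe.
def nginx_recipe(attrs):
    return {
        "package": "nginx",
        "worker_processes": attrs.get("workers", 2),
        "sites": attrs.get("sites", []),
    }

ROLES = {
    "static-frontend": {"workers": 4, "sites": ["www.example.com"]},
    "api-proxy":       {"workers": 8, "sites": ["api.example.com"]},
}

def configure(role):
    return nginx_recipe(ROLES[role])
```

Two servers running the same application end up with configs that differ only where their roles differ.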

But I do agree that both chef and puppet are over-engineered. Puppet's DSL and chef's server are mostly overkill that I don't have much use for. But the DSL can be dealt with and the chef server is optional. There doesn't have to be a single point of failure.


We use Chef for a distributed team. They download a very basic VM, run Chef, and are off to the races in minutes. When something updates or the config changes, they pull from VCS and everything is configured and good to go. This is especially awesome for more UI-focused devs who aren't as comfortable with Unixy tools.



