Hacker News

Re: '...Rather than do all this work with Puppet, Chef, or even Ansible, I can just declare what the systems ought to be and cluster them within code branches...'

It sounds like the poster is doing something similar to a 'gold system image', though with container technologies.

Configuration management is great, but I'm of the view that you should start with gold images/system builds and then layer config management on top of that.

The theory is that you then have a known base and identical builds; otherwise you get subtle drift in your configurations, especially in things that are not explicitly under the control of config management (shared library versions, packages).

Of course, this is not always feasible but if I were to start from scratch, I would probably try to do it this way.



It should be a combination, really.

You want to have your "gold system image" available, and it is most certainly what you should be deploying from; however, your configuration management, whether it's Chef, Puppet, or whatever, should be able to take a base operating system and set it up completely from scratch.

This solves the problem of ensuring that you can completely reproduce your application from scratch, while also avoiding the potentially horrendous scale-up time, since the "gold image" has already done that work.

My current process is: Jenkins runs Puppet/Chef, verifies that the application is healthy after the run and everything is happy, calls AWS to image the machine, then iterates over all instances in my load balancer 33% at a time and replaces them with the new image, resulting in zero downtime. Of course, another solution is to pull those instances out, apply the update in place, and put them back in.
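The rolling-replacement step above can be sketched roughly as follows. This is an illustrative outline only, assuming hypothetical `replace` and `healthy` hooks standing in for the real AWS calls (deregister from the load balancer, launch from the new image, re-register, health check):

```python
import math

def rolling_replace(instances, batch_fraction=0.33, replace=None, healthy=None):
    """Replace load-balancer instances in batches so most stay in service.

    `replace` and `healthy` are placeholders for the real AWS interactions.
    """
    batch_size = max(1, math.ceil(len(instances) * batch_fraction))
    replaced = []
    for i in range(0, len(instances), batch_size):
        for inst in instances[i:i + batch_size]:
            new_inst = replace(inst)  # swap in a node built from the new image
            if not healthy(new_inst):
                raise RuntimeError(f"{new_inst} failed health check, aborting rollout")
            replaced.append(new_inst)
    return replaced

# Example with stubbed-out AWS interactions:
old = ["i-aaa", "i-bbb", "i-ccc", "i-ddd", "i-eee", "i-fff"]
new = rolling_replace(old,
                      replace=lambda i: i + "-v2",
                      healthy=lambda i: True)
print(new)
```

The point of the batch fraction is that two thirds of the fleet stays in the load balancer at any moment, which is what makes the swap zero-downtime.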

And I'm sure someone else will have their $0.02 on their own process, which actually I'd love to hear :-)


Here's mine, with the preface that we're still iterating our deployment procedure as things are quite early.

I have an Ansible repo with an init task, which configures new boxen (makes sure proper files, dependencies etc. exist). Then to deploy, I have another task that ships a Dockerfile to the target boxen group (dev, staging, or prod) and has them build the new image, then restart. This happens more-or-less in lockstep across the whole group, and scaling up is relatively easy - just provision more boxen from AWS and add the IPs to the Ansible inventory file. Config is loaded from a secrets server, each deploy uses a unique lease token that's good for 5 minutes and exactly one use.
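The single-use, 5-minute lease token can be sketched like this. It's a minimal in-memory illustration, not the actual secrets server; a real one would back this with persistent, audited storage. The `LeaseTokenStore` name and its methods are made up for the example:

```python
import secrets
import time

class LeaseTokenStore:
    """Single-use, time-limited deploy tokens."""

    def __init__(self, ttl_seconds=300):  # 5-minute lease
        self.ttl = ttl_seconds
        self._tokens = {}  # token -> expiry timestamp

    def issue(self):
        token = secrets.token_hex(16)
        self._tokens[token] = time.time() + self.ttl
        return token

    def redeem(self, token):
        """Valid before expiry, and exactly once."""
        expiry = self._tokens.pop(token, None)  # pop enforces single use
        return expiry is not None and time.time() < expiry

store = LeaseTokenStore()
t = store.issue()
print(store.redeem(t))   # True: first use, within the lease window
print(store.redeem(t))   # False: the token is gone after one use
```

Popping the token on redemption is what gives the "exactly one use" property; the timestamp check gives the 5-minute window.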

I'd love to hear how to improve this process, since I'm dev before ops. My next TODO is to move Docker image building locally and deploy the resulting tarball instead (though that complicates the interaction with the secrets server).




