Immutable Environments

Overview

The concept of immutable environments is that once a box is built it is never modified. Any change that needs to happen requires a new box to be created and deployed.

For this to happen the entire system needs to be under configuration management. This ensures that every component is managed and versioned.

Prior to cloud provisioning and configuration management, servers were at best built from scripts and at worst configured manually.

Configuration management tools such as Puppet, Chef & Ansible have automated the creation and ongoing management of servers to ensure that they are in a known state.

Docker and Amazon AMIs have helped bridge the gap between the old approach to managing systems and the new immutable approach.

This extends the concept of build once, deploy anywhere that is applied to the continuous integration pipeline and artefact creation, and takes it to its logical extreme.

Docker containers are created and, in theory, should move through the test phases into production without changing. Similarly, Amazon AMI images can be created locally based on pre-defined AMIs, composed in a similar way to Docker's layered filesystem approach, and deployed to Amazon.

  • Immutable deployments: a simple overview of the concepts
  • Immutable Servers
  • Amazon EC2 AMI creation with Aminator
  • Hootsuite example: build, test and automate server image creation
  • Packer: a tool for creating identical machine images for multiple platforms from a single source configuration
  • Immutable Systems with Ansible
  • Ansible + Packer
  • Ansible + Packer EBS AMI


Issues

The idea of immutable environments is a powerful one, though there are complications that need to be considered, especially when dealing with vast monolithic systems and different deployment approaches across the deployment pipeline.

I plan to take a pragmatic approach to introducing immutable environments. It will be a while before I see them deployed to production, primarily because of the different ways we host our systems. Until servers are provisioned into a cloud (be it a private cloud or something like EC2) this mechanism doesn't work.

Similarly, there is the problem of the databases. These are many terabytes in size and hosted on large database servers. We have ongoing projects to start breaking up the monolith, and new applications are being built so that they maintain their entire stack, allowing for easier deployment and separation of concerns.

I don't see this as an all-or-nothing problem. If we adopt a mixed approach of mutable and immutable environments then it is apparent that they need to be built the same way. This has been my worry with Docker: unless containers are provisioned in the same way as production systems, you run the risk of introducing errors.

Until the monolith is broken up, the complexity and scale are too great to transition the entire system through to production, even if we leave the database out of consideration at this stage.

Current continuous deployment approach

We currently treat all hosts in the same way, whether they are bare-metal boxes, VMware virtualised environments or Amazon EC2 hosts. This ensures the same process is used for all environments.

Hosts are provisioned using Ansible, with applications packaged as RPMs and managed via Yum.
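
As a rough sketch of what that provisioning looks like, a play along the following lines installs an RPM from the internal Yum repository and keeps its service running. The package and service name (myapp) is a placeholder, not one of our real applications.

    # Sketch only: "myapp" is a placeholder package and service name
    - hosts: app_servers
      become: yes
      tasks:
        - name: Install the application RPM from the internal Yum repository
          yum:
            name: myapp
            state: present

        - name: Ensure the application service is running and enabled
          service:
            name: myapp
            state: started
            enabled: yes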

One of the downsides of this current approach when dealing with Amazon is that excessive bandwidth and CPU are consumed in order to get a base AMI set up.

Using the concepts of immutable environments, I aim to build the AMI locally and upload a new image, thus reducing the amount of work that is carried out in California.

Update 15/01/15: From doing more research, both Packer.io and Aminator work on an instance in the cloud instead of allowing local operation. This is a real shame, as it means that all the software needs to be available in the cloud via Yum. I will continue investigating this. Options at the moment are:

  • Look at reducing cost by building AMIs in layers as opposed to building the stack in one go. This is sensible anyway, and the playbooks are split to facilitate this (a sketch of how split playbooks could map to layers follows this list).
  • Provision Docker containers locally and publish these into either EBS-backed EC2 instances or possibly Elastic Beanstalk. The latter seems to have quite a few disadvantages according to this post.
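
To illustrate the first option, the split might look roughly like the following. The playbook and role names are invented for illustration; the idea is that the base playbook is baked infrequently into a shared AMI, and the much smaller application playbook is run on top of that AMI for each release.

    # base.yml - baked occasionally into a shared base AMI
    # (playbook and role names are illustrative only)
    - hosts: all
      become: yes
      roles:
        - common       # users, hardening, monitoring agents
        - yum_repos    # internal Yum repository configuration

    # app.yml - run against an instance launched from the base AMI,
    # which is then captured as the release AMI
    - hosts: all
      become: yes
      roles:
        - webserver    # Apache configuration
        - myapp        # application RPM and service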

We have developed our Ansible playbooks and roles so that applications can be moved and grouped easily whilst still ensuring they are correctly glued together (see the sketch after this list):

  • Apache configuration on the web tier
  • Site management configuration
  • Services on the hosts
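
A rough sketch of how that gluing works is below; the group, role and variable names are illustrative rather than taken from our actual playbooks. The point is that the web tier's Apache configuration is generated from the inventory groups behind it, so moving an application to a different host only means changing the inventory.

    # Illustrative only: group, role and variable names are made up
    - hosts: web_tier
      become: yes
      roles:
        - role: apache
          vars:
            # vhosts and proxy rules are templated from this list, so the
            # web tier follows the services wherever they are hosted
            backend_hosts: "{{ groups['app_hosts'] }}"

    - hosts: app_hosts
      become: yes
      roles:
        - services   # the application services themselves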

Development Ideas

  • Leverage the standard build process with Ansible.
  • Build Docker containers and AMI images with Packer.io using the existing Ansible playbooks.
  • Build base Docker and AMI images via CI to reduce the time to compose.
  • Deploying to test could involve uploading an AMI and allowing the customer to provision it themselves.
  • Fig could provide a nice way of simplifying development on Docker and building on pre-made containers (a sketch follows this list).
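
For the last idea, a fig.yml along these lines would let a developer run the application locally against pre-made containers built by CI. The image and service names here are hypothetical.

    # fig.yml - hypothetical image and service names
    web:
      image: internal-registry/myapp-web:latest   # pre-made container from CI
      ports:
        - "8080:80"
      links:
        - db
    db:
      image: postgres:9.3
      environment:
        POSTGRES_PASSWORD: example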

TBC

