Automated deployment is an essential ingredient for any application, not just those wanting to practice Continuous Delivery. There are many configuration and deployment tools available, offering different architectures and options for describing your config.

Options, Options, Options

Recently, when I needed to spin up a Rackspace cloud server quickly for a side project, I developed a standalone Puppet manifest to configure my server with firewalling, my user, my favourite utilities (vim etc.) and monitoring. Puppet's standalone mode of operation doesn't require a puppet server and, provided you can take care of distributing versions of your manifests to your nodes, is a more lightweight way to go. Puppet is excellent for configuring a server and can certainly install your application for you, providing configuration as well as declarative package management.

Alternatives to Puppet in this role are tools like Opscode Chef and CFEngine. However, these tools take care of much more than application deployment and are suited to automating the build and maintenance of environments from the base install upwards.

Whilst these tools are excellent for building my environment, I wanted a more direct execution approach for deploying my application itself. I had been reading about tools like Fabric and Pallet in the ThoughtWorks Tech Radar over the past few editions. Fabric appealed to me because of the sheer simplicity of the tool.

Getting started with Fabric

Fabric, according to its website:

Fabric is a Python (2.5 or higher) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks.

Using Fabric doesn’t require much Python, but it is also a library, so if you have Python skills you can make it into pretty much whatever you like. Getting started on the node you wish to deploy from is, on an Ubuntu 12.04 system, as simple as:

sudo apt-get install fabric

Fabric only needs to be installed on the node running the deployment, as it then operates over SSH to the target nodes. You can now run the fab command, which will initially give you an error like this:

$ fab

Fatal error: Couldn't find any fabfiles!

Remember that -f can be used to specify fabfile path, and use -h for help.


Fabric can locate fabfiles in one of two ways: it looks for a file named fabfile.py, or for a Python module (a directory with an __init__.py file) named fabfile/. Fabfiles can be stored in version control alongside your application code relatively easily. Writing your first Fabric task is very easy:

# fabfile for running hostname

# import the fabric api, we should be more selective than this.
from fabric.api import *

# Declare our target host details. This config requires keys to be pre-configured.
env.hosts = ['my.remotehost.com']
env.user  = 'deployagent'

# The task itself, a decorated python function
def hostname():
  run('hostname')

You can then invoke this task directly with fab hostname, which will give something like the following output:
$ fab hostname
[my.remotehost.com] Executing task 'hostname'
[my.remotehost.com] run: hostname
[my.remotehost.com] out: my.remotehost.com

Disconnecting from my.remotehost.com... done.

The code sample above imports the Fabric library, declares some details for the target host and then defines a simple task. The config for the target node is extremely simple and relies upon SSH keys being preconfigured for the running user. It can very simply be extended to encompass multiple nodes and multiple roles. The API is very straightforward and allows you to very easily move files to the remote hosts via SFTP with a single command. For example, here is a rather crude example of how I might build, package and deploy my latest side-project Clojure application:

# fabfile for building, packaging and deploying my app

from fabric.api import *

env.hosts = ['example.com']
env.user  = 'deployagent'

def build():
  with lcd('/home/user/myapp/'):
    # Clean and uberjar. Note: lcd (not cd) changes directory for local()
    # commands; cd only affects remote paths.
    local('lein clean')
    local('lein uberjar')

def package():
  with lcd('/home/user/myapp/'):
    # Zip up our 4 deployable files
    local('zip myapp.zip my-app-jar.jar db.properties log4j.properties my-app')

def deploy_release():
  # Create our dir structure.
  sudo('mkdir -p /opt/myapp')
  # Copy the file across
  run('mkdir -p /tmp/myapp')
  put('/home/user/myapp/myapp.zip', '/tmp/myapp')
  # Unzip to the target dir
  sudo('unzip /tmp/myapp/myapp.zip -d /opt/myapp')
  # Setup the init.d script
  sudo('mv /opt/myapp/my-app /etc/init.d/')
  sudo('chmod a+x /etc/init.d/my-app')
  # Clean-up
  run('rm /tmp/myapp/myapp.zip')

def restart():
  # Assumes that you have already deployed your init.d script
  sudo('service my-app restart')

def build_deploy_latest():
  build()
  package()
  deploy_release()
  restart()

This builds my app with lein, zips up the deployable files, deploys them along with the init.d script, and restarts the service. Any of these steps can be executed independently of each other (e.g. fab deploy_release) or bundled into one (e.g. fab build_deploy_latest). You can also add parameters, for example to tell a task which specific version of your application to deploy.
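To sketch the parameterised case: Fabric passes command-line arguments straight through to the task, so fab deploy_release:version=1.2.0 calls deploy_release('1.2.0'). The names and paths below are hypothetical, and the Fabric calls are shown as comments so the naming logic stands on its own:

```python
# Sketch of a parameterised deploy task (hypothetical names and paths).
# Invoked as: fab deploy_release:version=1.2.0

def artifact_name(version):
  # The versioned artefact we expect the build to have produced.
  return 'myapp-%s.zip' % version

def deploy_release(version):
  # In the real fabfile these would be fabric.api calls, e.g.:
  #   put('dist/%s' % artifact_name(version), '/tmp/myapp')
  #   sudo('unzip /tmp/myapp/%s -d /opt/myapp' % artifact_name(version))
  return artifact_name(version)
```

Deploying by explicit version number, rather than "whatever was built last", is what makes repeatable rollbacks possible.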

It’s important to note that this is a very simple example which is most definitely not good practice. I’m a firm believer in one of the principles of continuous delivery: an artefact should be built once and deployed the same way every time. With this in mind, it is good practice to always deploy a specific version from an immutable artefact store. It’s also worth noting that as your fabfiles are pure Python, you have the full flexibility of Python in your hands to register nodes with load balancers, add nodes to service registries such as the Rackspace Service Registry or Netflix’s Eureka, or do other post-deployment work.
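One lightweight way to honour the build-once principle is to record the artefact's checksum at build time and verify it on the target before unpacking. A minimal sketch, assuming hypothetical paths; the hashing helper is plain Python, and the remote side of the check could be done with Fabric's run():

```python
import hashlib

def sha256_of(path):
  # Hex SHA-256 of a file, read in chunks so large artefacts are fine.
  h = hashlib.sha256()
  with open(path, 'rb') as f:
    for chunk in iter(lambda: f.read(8192), b''):
      h.update(chunk)
  return h.hexdigest()

# In a deploy task you might then compare against the copy on the target, e.g.:
#   remote = run('sha256sum /tmp/myapp/myapp.zip').split()[0]
#   if remote != sha256_of('myapp.zip'):
#       abort('artefact changed in transit!')
```

If the checksums differ, you know you are not deploying the artefact the build produced.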

With great flexibility comes the ability to work yourself into knots, so a standard set of task names across your projects, relied upon by convention, might be useful in the early days. Application deployments should also be kept as simple as possible, to avoid building too much logic and scope for error into the deployment scripts.

