I’ve recently been getting into a software delivery methodology which, for me, wraps up the most potent benefits of Agile, TDD and Continuous Integration, and requires Development and Operations to work very closely together.

Holy cow, all those flashy words in a single description: that must mean this is some enterprisey, buzzwordy, new-fangled thing, right? Nope. This is an extension of all those ideas that we know we “should” and “could” have always been doing, and in fact some of you may already be doing it.

Continuous Delivery, in a nutshell, is about delivering your software in small, very frequent, incremental updates, backed by a huge quantity (and quality) of automation in build, testing and deployment. Traditionally, the release process for many teams culminates in a lockdown period, followed by many hours or days of manual release steps, followed by triage and observation. This is a broken and problematic approach, and we all know it.

Instead, Continuous Delivery is the way forward for delivering software, and in future blog posts I will be aiming to cover more of the implementation process, potential problems and cultural impact.

My interest was piqued when I attended a presentation by one of the authors of Continuous Delivery, which, as the name would suggest, is currently the definitive book on the subject. Its authors, Jez Humble and David Farley, developed the principles for Continuous Delivery at ThoughtWorks, where Jez also works as the product manager for Go (yes, neither this Go, nor this Go).

Continuous Delivery is about automating everything from build to deployment into production. This also means that methodologies such as Continuous Integration and Agile testing can be viewed as specific subsets of Continuous Delivery. I also recently attended ‘Evolving Continuous Delivery’ with the London Continuous Integration Meetup, where Chris Read gave us the quote of the night:

Until your code is in production making money or doing what it is meant to do, you have simply wasted your time

The value proposition for Continuous Delivery is very simple: getting business value into production as quickly as possible. By repeatedly deploying to production in a controlled manner, we also:

  • Validate the business decisions we’ve made more quickly, by taking small steps towards our goal and gaining feedback before we’ve invested the full cost.
  • Deliver our business value at lower risk, because we make smaller changes at each step. Smaller changes are less risky than large packages of change.
  • Encourage a culture in which nothing is truly “done” until it is delivered to your users and, more importantly, proven in production.

Every build artefact should be considered a potentially releasable artefact; your comprehensive suite of tests is there to prove that it is not suitable. With each test passed, the build should be promoted through the pipeline to the next stage. In order to be confident that your release is not risky, you need to be able to test everything in a release. Obviously this means testing code changes, but also deployment processes and configuration. Being certain of your configuration means testing your system configuration as well as your application configuration. We’ve all seen how “minor” OS patches can cause massive knock-on effects to the performance or stability of an application, so it’s vital that you are able to monitor and roll back. When you consider the conditions under which you may want to roll back (or draw attention to) a release, you may need to add specific monitoring and instrumentation to your application to detect:

  • Increased latency
  • Increased load
  • Lost dependencies after a configuration change
  • Poorer business performance
  • Failed basic functionality of the application
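The “promote only while tests pass” model above can be sketched in a few lines of Python. Everything here (the stage names, the checks, the artefact label) is illustrative, not from any real pipeline:

```python
# Minimal sketch of a deployment pipeline: a build artefact is promoted
# through successive stages, and is only releasable if every stage passes.

def commit_stage(artefact):
    # compilation and unit tests would run here
    return True

def acceptance_stage(artefact):
    # automated acceptance tests against a deployed instance
    return True

def capacity_stage(artefact):
    # load and latency checks; a regression here should block promotion
    return True

PIPELINE = [commit_stage, acceptance_stage, capacity_stage]

def promote(artefact):
    """Run each stage in order; the first failure stops promotion."""
    for stage in PIPELINE:
        if not stage(artefact):
            return f"rejected at {stage.__name__}"
    return "releasable"

print(promote("build-42"))  # → releasable
```

The important property is the early exit: a build that fails any stage never reaches the stages after it, so “releasable” always means “passed everything”.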

Finally, deploying continuously may require cultural and process changes to the way you develop your products. In particular, your SCM and branching strategy may need to reflect the need to release partially implemented features as disabled (also known as feature toggles). A presentation worth watching on this topic is Chuck Rossi on ‘How Facebook Releases Software’. Your application architecture may also need to be updated to allow continuous deployment in a safe manner.
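A feature toggle is simple in essence: the incomplete code ships to production but stays dark until a switch is flipped. A minimal sketch, with hypothetical feature names and an in-memory toggle store standing in for whatever configuration mechanism you actually use:

```python
# Partially implemented features ship disabled and are switched on
# per environment, without a redeploy of the code itself.

TOGGLES = {
    "new_checkout": False,  # in production, but dark
    "search_v2": True,
}

def is_enabled(feature, toggles=TOGGLES):
    # Default to off, so an unknown or misconfigured feature stays dark.
    return toggles.get(feature, False)

def checkout(cart):
    if is_enabled("new_checkout"):
        return "new checkout flow"
    return "old checkout flow"

print(checkout(["book"]))  # → old checkout flow
```

Defaulting unknown features to “off” is the key safety choice: a typo in a toggle name degrades to the old behaviour rather than exposing unfinished work.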

Continuous Delivery isn’t just for web applications (although it clearly has massive benefits for them); as the Google Chrome team have demonstrated, it can provide great benefits for desktop applications (even Delphi ones!) too, although it requires different tools and approaches. Imagine being able to push a series of very efficient diffs to all the users of your application every time it successfully passes a full range of exhaustive tests.

Many companies currently use some form of Continuous Deployment to manage their operations, including Flickr, Etsy, Netflix and others I’ve forgotten. Continuous Delivery is a topic that impacts all areas of operations, development, marketing and possibly any regulatory concerns. I intend to dive into detailed posts on various areas of the overall topic.


I would like to thank my good friend Kingsley Davies for encouraging me to pick up this vein of blogging.



  1. Colin Johnsun on the 24th June 2011 remarked #

    In the past few months, I’ve been taking a much greater interest in the whole DevOps approach with Continuous Integration and Continuous Delivery. For many years, almost everything I did in development pretty much resided in the confines of my IDE.

    Coding was fun but the actual delivery of the end product was always tedious and laborious. Over time, this got worse when delivering upgrades, patches and bug-fixes.

    Continuous delivery is all about automating that process, which is the reason I got into programming in the first place!

My experience with CI and CD is limited, and your post is a great stepping stone. Thanks!

  2. LDS on the 24th June 2011 remarked #

One size doesn’t fit all. And if a useless site like Facebook is taken as a demonstrator of how you should push software in areas where the impact of changes can be enormous, require a big effort and even threaten lives, well, IT has a problem…

  3. Colin Johnsun on the 24th June 2011 remarked #

    @LDS, so what is your point? By labelling facebook as being a useless site, are you implying that continuous deployment is also a waste of time since they practice it?

And threatening lives??? Gosh, how can I argue against that! If continuous deployment is putting people’s lives in danger, then obviously it must be a bad thing, and the people who are pushing this idea on the uninformed must be reported to the appropriate authorities. This must be stopped! I’m sorry, but I hadn’t realised IT had this problem…

  4. jamiei on the 24th June 2011 remarked #

@LDS: Facebook are definitely not the only company that pushes software regularly, nor are they a company that practises a full Continuous Delivery process.

    I’ve seen Continuous Delivery working in a wide range of organisations where the consequences of failure are severe. One of the massive advantages of CD is lowering the risk per release. Your confidence in a release will only ever be as good as your confidence in your tests, which is why I stressed the need for comprehensive testing at every step.

  5. LDS on the 24th June 2011 remarked #

Deploying new features may require highly controlled software/data updates (including downtime…), user training and much more. It may not be as easy as it is on some sites, where you publish new features and let users find out what’s new and how to use it, and if it doesn’t work, roll back and try again later. I may be confident in *my tests*, but I may be far less confident in what users will do if an application changes frequently, especially if it is a complex one and not “Internet for dummies” à la Facebook, where the “users” are actually the “product”. I’m a big fan of CI and automated tests, but automated tests imply knowledge of the tested feature. Can you simulate the behaviour of users who have to deal with a constantly changing application? Highly skilled users may cope; less skilled ones may not. Unless you add “continuous training” to the scheme, which may not be so easy, since users usually have to perform their duties, not only learn to use your application. New features can go unnoticed, be misused, and the like. IMHO “continuous delivery” may work in some situations and not in others; it’s not a panacea. Sometimes planned delivery and training is exactly what you need.

  6. EMB on the 24th June 2011 remarked #

If delivery in CD means delivery to the final client, then either you are developing a free application, or a big bespoke project, or you are adding a continuous bill for your client.

Everything outside of this means beta software, IMHO. YMMV.

  7. jamiei on the 24th June 2011 remarked #

    @LDS: There are no panaceas but I object to your implication that it is “unplanned” to push constantly. By automating everything, I would argue that it is more controlled than manually pushing out releases, config and data changes.

If a particular feature requires pre-training, then add it as disabled and provide some way of enabling it once the training has been undertaken; this is why I mentioned changes to the architecture of the application and feature toggles.

I’m not arguing that there is no scenario where CD is inappropriate, but none of the typical concerns you’ve mentioned so far is too complex to overcome.

  8. LDS on the 24th June 2011 remarked #

Who said that without CD you *manually* push out releases? You may have CD up to internal test servers, but not beyond. When it’s time to update the production server, the process is automatic as well, just not “continuous”.
External test servers and production servers may not allow easy CD, especially if you’re in a highly regulated environment (and some certification requires it) and you have to approve (or obtain approval for) each change. Adding disabled features is nonsense to me: it just adds complexity and may lead to unwanted risks if enabled by mistake.

  9. jamiei on the 25th June 2011 remarked #

@LDS: Regulation is certainly one constraint that may mean you are unable to truly push the last mile every time. But at least you’d have the systems, practice and testing to ensure that when you do, it is less risky.

  10. Gary on the 29th July 2011 remarked #


True, some organisations require strict regulatory compliance. In which case, you guys are stuck in the dark ages because of red tape.

    I’ve been in the software business professionally for over 20 years, and none of what’s been mentioned in favour of CD is bad, wrong, or nonsense as you put it.

If there are tests, more tests, and even more tests to prove something is right, then your confidence in the feature being right will naturally be higher than simply pushing a load of code live and hoping that the UAT people covered every angle.

    Feature toggles are good.

How many big releases in your career have needed to be backed out? What if you could switch the toggle (either on the fly, or through a restart), and everything was back as it was? (OK, there are exceptions to this case) but I think this is true up to 80% of the time.

Once confidence in a feature is 100%, remove the old code and move on. If you get it wrong, the tests will tell you.

    What if you could automate the whole release process from one environment to the next, and more importantly, make it reliable and repeatable for each environment, including Production?

    Okay, regulatory requirements may get in the way, but hey, look what you gain in other aspects regardless.

    This can only happen if you do what hurts often.

    It’s not easy to get to this state, but when you do, there’ll be no room for people scratching their backsides.

    I hope you’re not one of those people.
