Our understanding of continuous integration has evolved over the years to mean more than just making sure various software components play nicely together. As the practice matures, a much more system-oriented view is emerging. Continuous deployment, the idea that every check-in could automatically end up in production after passing through a rigorous CI process, is part of this systemic view.
Application code is one part of the system. So are the database, the configuration, the network, the documentation, and the host machine, to name a few others. One by one, these systemic concerns are being pulled into the CI process.
As the technical roadblocks to automation are removed, we’re also solving the political challenges: Operational silos are being knocked down in favor of collaboration. Product managers are thinking about how to structure features such that they can add value incrementally. Rapid delivery of high-quality, valuable software is what agile methods are all about, and bringing automated deployment into the CI process is a natural extension of any agile process.
Just as Scrum introduced an “expanded definition of done”, the definition of “integrated” is expanding. Deployment is traditionally considered something that happens after the CI build: “build, test, deploy”. I think we’re all coming around to the realization that we really need a “build, deploy, test” CI process in which we deploy the application to a staging environment before we run our functional tests, for example. Since this deployment is part of the CI build, it has to be automated. And if we’ve solved deployment to a staging environment, why not deploy to production?
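The “build, deploy, test” ordering can be sketched as a simple pipeline script. The step functions and the staging host below are hypothetical stand-ins for real build and deploy scripts, not taken from any particular CI tool:

```shell
#!/bin/sh
# Sketch of a "build, deploy, test" CI pipeline; the step functions and
# staging host name are hypothetical placeholders.
set -e  # any failing step stops the pipeline

build()            { echo "compiling and running unit tests"; }
deploy()           { echo "deploying artifact to $1"; }
functional_tests() { echo "running functional tests against $1"; }

STAGING="staging.example.com"

build
deploy "$STAGING"             # deployment happens BEFORE the functional tests
functional_tests "$STAGING"

echo "build is a candidate for production"
```

The key point is the ordering: because `deploy` runs inside the CI build, a deployment failure breaks the build just like a failing test would, which is exactly what makes the same automation trustworthy for production.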
SmartFrog’s Patterns of Deployment includes the CI server right in the definition of continuous deployment. This theme resonated at CITCON NA 2009, where the discussions frequently turned to configuration and deployment. I promised to post the outcome of the discussion on continuous deployment that I facilitated there. Here’s the first part: a summary of the top ten indications that continuous deployment makes sense for your team:
* To add value more quickly: get the code into the hands of users and remove inventory, because inventory is waste.
* To elicit faster user feedback about defects, quality of the application, and value. Bugs should be fixed. Quality issues should be addressed immediately. Features that don't add value should be dropped.
* For rehearsal: practice shipping to production using the same automation used to ship to a staging environment, and remove bottlenecks by identifying them early.
* The organization values happiness. How do people want to spend their time? Fighting fires and fixing production bugs isn't fun. Why not spend the time up front to automate, rehearse, and avoid the “Last Mile Problem” that occurs when undeployed code piles up before shipping? The organization must be willing and empowered to acknowledge and remove roadblocks.
* The cost of a failed deployment isn't catastrophic. Manual review gates between automated steps might be deemed necessary as a safety valve to catch errors that the test scripts can't detect automatically. Automation reduces risk, though, so even applications with a high cost of failure can benefit from it as long as there is an element of control and auditing.
* Support costs of a new release are low. Companies don't want to support hundreds of versions of a COTS product on store shelves, whereas a hosted web application effectively only has one live version at any given time.
* The licensing / revenue model is compatible with frequent releases: subscription models, free upgrades within a minor revision, and so on.
* The users' cost of consumption is low, or users can opt not to upgrade. The previous point was about the cost of a deployment to the developing company. This point is about the cost, whether cognitive or financial, for consumers to make use of the new features as they roll out. An application that requires extensive training is obviously harder to deploy continuously than a chat avatar, for example.
* There aren’t complex legal issues with deployment. A new feature might have regulatory implications. Perhaps SOX compliance must be proven before a feature can be released. Situations like this aren’t the norm, but they do happen.
* The team is capable of excellence. One unreliable release will cost trust, user loyalty, and team morale. Too many unreliable releases will cost the product line and possibly the company. This is true regardless of whether or not automation is used, obviously, but when we’re talking about continuous deployment with releases that can happen many times a day, we’re hopefully talking about feature releases, not just bug fixes. A solid development process that includes CI is a prerequisite; I wouldn’t recommend continuous deployment to amateurs who haven’t yet written their first unit test.
With the justifications for making deployment a core part of continuous integration covered, I’m ready to move on to how it’s done. Now that distributed enterprise CI servers are becoming the norm, we can, for example, have a deployment agent on the machines in our QA and production environments that grabs our tested artifacts and runs the same deployment scripts used in testing. This is only one approach, though, and I’ll save a more in-depth discussion for another post.
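To make the agent idea concrete, here is a minimal sketch of what such an agent might do on a target machine: fetch the tested artifact and hand it to the same deploy script exercised against staging. All names, artifact identifiers, and behaviors here are invented for illustration; a real agent would also verify checksums and report status back to the CI server.

```shell
#!/bin/sh
# Hypothetical deployment-agent sketch: pull a tested artifact and run the
# same deploy script used in staging. All names here are invented.
set -e

ARTIFACT="myapp-build-142.tar.gz"   # artifact produced by the CI build

fetch_artifact() {
    # a real agent would download this from the CI server's artifact
    # repository and verify its checksum; here we only report the step
    echo "fetching $1 from the CI artifact repository"
}

run_deploy_script() {
    # the same deploy script exercised against staging earlier in the build
    echo "running deploy script for $1 on this host"
}

fetch_artifact "$ARTIFACT"
run_deploy_script "$ARTIFACT"
```

Reusing one deploy script in both staging and production is what turns every staging deployment into a rehearsal for the production one.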