In order for you to keep up with customer demand, you need to create a deployment pipeline. You need to get everything in version control. You need to automate the entire environment creation process. You need a deployment pipeline where you can create test and production environments, and then deploy code into them, entirely on demand.

Erik to Grasshopper, The Phoenix Project [2]

 

DevOps

Abstract

Real, tangible software development value occurs only when the end-users are successfully operating the software in their environment. This demands that the complex routine of deploying to operations receives early and meaningful attention during development. To ensure a faster flow of value to the end user, this article suggests mechanisms for tighter integration of development and deployment operations (typically referred to as “DevOps”). This is accomplished, in part, by integrating personnel from the operations team with the Agile teams in the Agile Release Train. We also provide specific suggestions for continuously maintaining deployment readiness throughout the feature development timeline. In turn, this gives the enterprise the ability to deploy to production more frequently, and thereby lead its industry with the sustainably shortest lead time.

Details

Deployment Operations is Integral to the Value Stream

The goal of software engineering is to deliver usable and reliable software products to end users. Lean and Agile both emphasize the ability to do so more frequently and reliably. “Leaning” the software development process helps development shops gradually establish faster development flow by systematically reducing development cycle time and introducing “built-in quality” approaches.

However, in most cases, development teams still deliver software to deployment in large batches. There, the actual deployment and release of the new software is likely to be manual, error-prone, and unpredictable, which adversely affects release date commitments and delivered quality. To enable organizations to deliver value to the business effectively, a leaner approach must be applied, one that incorporates smaller batch sizes and includes deployment readiness from feature definition all the way through to the point where users actually benefit. In this way, the organization moves closer to continuous delivery, which is an end goal for many enterprises.

Note: For more on this topic, see the SAFe guidance article Continuous Delivery, as well as the video by Scott Prugh from DevOps Enterprise Summit 2014 [3].

Deployment Operations Must be “On the Train”

In SAFe, Value Streams cover all steps of software value creation and delivery. The Agile Release Train, a self-organized team of Agile teams, is designed to enable the effective flow of value in a particular value stream via a steady stream of Releases. In order to establish an effective deployment pipeline and process, it is crucial that Deployment Operations team members are active on the train and fully engaged in the process. This team, consisting of system administrators, DBAs, operations engineers, network and storage engineers, and others who are traditionally responsible for deploying the solution and keeping it running, must operate in the shared, real-time development and delivery context of the Agile Release Train.

Being part of the Agile Release Train means participating in program-level events, interacting with other teams, System Architects, and Business Owners, and keeping their backlog of activities aligned with Program PI Objectives. This implies some level of participation by the deployment operations team in PI Planning, backlog refinement, Scrum of Scrums, the System Demo, and Inspect and Adapt.

It is also important that the pipeline of deployable work is visualized, so that development teams and deployment operations can work collaboratively to ensure the flow of value from the time the code is conceived until it is actually deployed and used.

Six Recommended Practices for Building your Deployment Pipeline

In order to establish an effective deployment pipeline – a continuous flow of new value from development to and through deployment – we recommend six specific practices in the following sections. Before we do so, however, we must have an understanding of the broader, and more automated, environment necessary to achieve these results. Figure 1 illustrates such an environment.

Figure 1. Deployment pipeline environment

We can see from Figure 1 that there are three main processes that must be supported (a minimal sketch of these stages appears after the list):

(a) Automatically fetching all necessary artifacts from version control, including the code, scripts, tests, metadata, and the like

(b) Integrating, deploying, and validating the code in a staging environment, automated as far as possible

(c) Deploying the validated build from staging to production, and validating the new system
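
The sketch below illustrates the shape of these three stages as a minimal pipeline driver in Python. The repository URL, script names, and environment names are illustrative assumptions, not part of SAFe guidance; in practice these stages would typically be orchestrated by a dedicated CI/CD tool.

    import subprocess

    def run(command):
        """Run a shell command and stop the pipeline if it fails."""
        print(f"$ {command}")
        subprocess.run(command, shell=True, check=True)

    def fetch_artifacts(revision):
        # (a) Fetch code, scripts, tests, and metadata from version control.
        run(f"git clone --branch {revision} https://example.com/repo.git workspace")

    def deploy_and_validate_staging():
        # (b) Integrate, deploy, and validate the build in the staging environment.
        run("./build.sh")                           # hypothetical build script
        run("./deploy.sh --env staging")            # hypothetical deployment script
        run("./acceptance_tests.sh --env staging")  # hypothetical automated validation

    def deploy_to_production():
        # (c) Promote the validated build from staging to production and validate it.
        run("./deploy.sh --env production")
        run("./smoke_tests.sh --env production")

    if __name__ == "__main__":
        fetch_artifacts("release-candidate")
        deploy_and_validate_staging()
        deploy_to_production()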

#1 – Build and Maintain a Production-equivalent Staging Environment

Figure 1 illustrates a critical asset, the staging environment. The need for this environment is driven by the fact that most development environments are typically quite different from production. In production, for example, the application server sits behind a firewall and a load balancer, the much larger production database is clustered, media content lives on separate servers, and more. During the handoff from development to production, Murphy’s Law will take effect: the deployment will fail, and debugging and resolution will take an unpredictable amount of time. Instead, the company should build a staging environment with the same or similar hardware and supporting software systems as production. There, the program teams can continuously validate their software in preparation for deployment.

While achieving true production equivalency is not economically viable (it doesn’t make sense to replicate the hundreds or thousands of servers found there), there are a variety of ways to achieve largely functional equivalence without such investment. For example, it may be sufficient to have only two instances of the application server instead of twenty, along with a cheaper load balancer from the same vendor; that may be all that is necessary to ensure that new functionality is validated in a load-balanced environment.
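
One way to keep functional equivalence visible is to compare the topology characteristics that actually affect behavior, rather than raw scale. The following is a minimal sketch of such a parity check; the parameter names and values are illustrative assumptions.

    # Hypothetical topology descriptions; in practice these would be read from
    # version-controlled environment configuration rather than hard-coded.
    production = {"app_server_instances": 20, "load_balanced": True,
                  "db_clustered": True, "separate_media_servers": True}
    staging = {"app_server_instances": 2, "load_balanced": True,
               "db_clustered": True, "separate_media_servers": True}

    # Functional equivalence: the kind of topology matters more than its scale.
    functional_keys = ["load_balanced", "db_clustered", "separate_media_servers"]

    gaps = [key for key in functional_keys if production[key] != staging[key]]
    if gaps:
        print("Staging is not functionally equivalent to production:", gaps)
    else:
        print("Staging matches production topology; only the scale differs.")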

#2 – Maintain Development and Test Environments to Better Match Production

The staging environment above addresses a symptom of a root cause that can itself be mitigated: the various development and test environments required for development (especially distributed development, which is routine at scale) do not closely match production. Part of the reason for this is cost, driven by practical economics and often compounded by a lack of understanding of the true cost of delay when deployments slip. So, for example, while it may not be practical to have a separate load balancer for every developer, software configurations can typically be affordably replicated across all environments. This can be accomplished by:

a) Propagating all changes made in production, such as component or supporting-system upgrades, configuration changes, changes in system metadata, etc., back to the other environments.

b) Faithfully propagating and persisting new, development-initiated configuration and environment changes to production during each deployment.

In both cases, configuration changes need to be captured in version control, and all new actions required to enable the deployment flow should be documented in scripts and automated wherever possible.
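
A minimal sketch of this idea follows: environment configuration lives in version control and is applied by a script rather than edited by hand, so the same committed change reaches every environment. The file layout and the configuration script are assumptions for illustration only.

    import json
    import subprocess

    def load_config(environment):
        """Read the version-controlled configuration for one environment."""
        with open(f"config/{environment}.json") as f:  # hypothetical repository layout
            return json.load(f)

    def apply_config(environment):
        """Apply stored configuration instead of changing the environment by hand."""
        config = load_config(environment)
        for key, value in config.items():
            # Hypothetical configuration script; any scripted, repeatable step works.
            subprocess.run(["./configure.sh", environment, key, str(value)], check=True)

    # Propagate the same change everywhere: commit once, apply to every environment.
    for env in ["development", "test", "staging", "production"]:
        apply_config(env)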

#3 – Deploy to Staging every Sprint; Deploy to Production Frequently

3a: Deploy to Staging Every Sprint

It is not possible to objectively understand the true state of any increment unless it is truly deployment-ready. To better assure this, one suggestion is a simple rule: do all fortnightly System Demos from the staging environment. In that way, deployability becomes part of the Definition of Done for every user story, resulting in potentially deployable software every sprint.

3b: Deploy to Production Frequently

While continuous deployment readiness is critical to establishing a reliable software delivery process, the real benefits in shortening lead time come from actually deploying to production more frequently. This also helps eliminate long-lived production support branches and the extra effort otherwise required to merge and synchronize changes across all the instances.

However, while Figure 1 shows a fully automated deployment model, we must also consider release governance before we allow an “automatic push” to production. This is typically under the authority of Release Management, where considerations include the following (see the sketch after this list):

- Conceptual integrity of the feature set, such that it meets a holistic set of user needs (see Release)

- Evaluation of any quality, regulatory, security, or other such requirements

- Potential impacts to customers, channels, production, and any external market timing considerations
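
The sketch below illustrates one way to express such a governance gate in code: the pipeline stops a validated build before the production push until Release Management has signed off. The checklist items simply mirror the considerations above; the function and build names are assumptions.

    def governance_checks_passed():
        """Hypothetical release-governance checklist; in practice each item is
        answered by Release Management rather than computed automatically."""
        checklist = {
            "feature_set_is_conceptually_whole": True,
            "quality_regulatory_security_reviewed": True,
            "customer_channel_and_market_timing_assessed": True,
        }
        return all(checklist.values())

    def push_to_production(build_id):
        print(f"Deploying validated build {build_id} to production")

    build_id = "2014.10-rc1"  # illustrative build identifier
    if governance_checks_passed():
        push_to_production(build_id)
    else:
        print("Build is deployment-ready but held pending release governance approval.")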

#4 – Put Everything Under Version Control

In order to reliably deploy to production, all deployable artifacts, metadata, and other supporting configuration items must be maintained under version control. This includes the new code, all required data (dictionaries, scripts, lookups, mappings, etc.), all libraries and external assemblies, configuration files and databases, and application and database server configurations; everything that may realistically be updated or modified needs to be under version control.

This approach also applies to the test data, which has to be manageable enough for the teams to update every time they introduce a new test scenario or update an existing one. Keeping the test data under version control provides quick and reliable feedback on the latest build, because the tests can be run repeatedly in a deterministic test environment.
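
As a minimal sketch, the test below loads its data from a fixture file committed to the same repository as the code and tests, so every run starts from the same state on every machine. The fixture path, record format, and customer ID are purely illustrative assumptions.

    import json
    import unittest

    def load_fixture(name):
        """Load version-controlled test data so every run starts from the same state."""
        with open(f"test_data/{name}.json") as f:  # hypothetical fixture location
            return json.load(f)

    class CustomerLookupTest(unittest.TestCase):
        def setUp(self):
            # The same committed fixture is used on every machine and in every build.
            self.customers = load_fixture("customers")

        def test_known_customer_is_present(self):
            ids = {customer["id"] for customer in self.customers}
            self.assertIn("C-1001", ids)  # illustrative record from the fixture

    if __name__ == "__main__":
        unittest.main()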

#5 – Start Creating the Ability to Automatically Build Environments

Many deployment problems arise from the error-prone, manually intensive routines required to build the actual run-time environments. These include preparing the operating environment, applications, and data; configuring the artifacts; and initiating the required jobs in the system and its supporting systems. In order to establish a reliable deployment process, the environment setup process itself needs to be fully automated. This automation can be substantially facilitated by using virtualization and Infrastructure as a Service (IaaS), and by applying frameworks that automate configuration management jobs.
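
The sketch below shows the shape of such an environment-building script. The provisioning and configuration steps are stand-ins and clearly hypothetical; a real script would call the virtualization or IaaS provider's API and a configuration-management framework, with all inputs taken from version control.

    def provision_vm(name, image):
        """Stand-in for an IaaS or virtualization call; a real script would invoke
        the provider's API or CLI here."""
        print(f"provision {name} from image {image}")

    def configure(name, role):
        """Stand-in for a configuration-management run against the new machine."""
        print(f"apply role '{role}' to {name}")

    def build_environment(env_name):
        """Build an entire environment from scratch, with no manual steps."""
        provision_vm(f"{env_name}-app-1", image="app-server-base")
        provision_vm(f"{env_name}-app-2", image="app-server-base")
        provision_vm(f"{env_name}-db-1", image="database-base")
        configure(f"{env_name}-app-1", role="application")
        configure(f"{env_name}-app-2", role="application")
        configure(f"{env_name}-db-1", role="database")

    build_environment("staging")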

#6 – Start Automating the Actual Deployment Process

Lastly, it should be obvious that the actual deployment process itself also requires automation. This includes all the steps in the flow: building the code, creating the test environments, executing the automated tests, and deploying and validating the verified code and its associated systems and utilities in the target (development, staging, or production) environment. This final, critical automation step is achievable only via an incremental process, one that requires the full commitment and support of the organization, as well as creativity and pragmatism as the teams prioritize target areas for automation. Kaizen.
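
One pragmatic way to drive that incremental effort is to keep an explicit inventory of deployment steps, marking which are automated and which are still manual, so the next automation target is always visible. The sketch below illustrates the idea; the step names and scripts are illustrative assumptions.

    # Each deployment step is either automated (a script) or still manual
    # (a documented procedure). The summary shows where automation stands.
    steps = [
        {"name": "build code",              "automated": True,  "how": "./build.sh"},
        {"name": "create test environment", "automated": True,  "how": "./create_env.sh test"},
        {"name": "run automated tests",     "automated": True,  "how": "./run_tests.sh"},
        {"name": "deploy to staging",       "automated": False, "how": "docs/deploy_staging.md"},
        {"name": "validate in staging",     "automated": False, "how": "manual smoke-test checklist"},
    ]

    automated = sum(1 for step in steps if step["automated"])
    print(f"{automated}/{len(steps)} deployment steps automated")
    for step in steps:
        status = "auto  " if step["automated"] else "MANUAL"
        print(f"  [{status}] {step['name']}: {step['how']}")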

Deployability and Solution Architecture

A Lean, systems approach to continuous deployment readiness requires understanding that configurations, integration scripts, automated tests, deployment scripts, metadata, and supporting systems are just as important to the entire solution as the code itself. Faster achievement of user and business goals can occur only when the whole solution context is considered.

As with other Nonfunctional Requirements, designing for deployability requires intentional design effort, achieved through collaboration among System Architects, Agile teams, and deployment operations, toward the goal of a fast and flawless delivery process that requires minimal manual effort. In addition, in order to support the adoption of more effective deployment techniques, the organization may need to undertake certain enterprise-level Architectural Epic initiatives (common technology stacks, tools, data preparation and normalization, third-party interfaces and data exchange, logging and reporting tools, etc.) that gradually enhance architecture, infrastructure, and other nonfunctional considerations in support of deployment readiness. Everything that jeopardizes or complicates the process of getting valuable software out the door must eventually be improved.

 


Learn More

[1] Humble, Jez, and David Farley. Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation. Addison-Wesley, 2010.

[2] Kim, Gene, et al. The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win. IT Revolution Press, 2013.

[3] Prugh, Scott. Continuous Delivery. SAFe guidance article, /continuous-delivery/. See also his video on the same topic from DevOps Enterprise Summit 2014.

 

 
