The most important problem that we face as software professionals is this: If somebody thinks of a good idea, how do we deliver it to users as quickly as possible?[1]

—Continuous Delivery


CALMR is the second article in the SAFe DevOps series. It describes the shared mindset and values that support successful DevOps adoption. The other two articles in the series are the DevOps home page and SAFe’s DevOps Practice Domains.

CALMR is a DevOps mindset that guides the ART toward achieving continuous value delivery by enhancing culture, automation, lean flow, measurement, and recovery.

Successful DevOps hinges on an approach that unites everyone in the value stream toward achieving extraordinary business outcomes. In SAFe, CALMR provides such an approach. When everyone in the value stream thinks and acts with continuous delivery in mind, the result is:

  • Increased frequency, quality, and security of product innovation
  • Decreased deployment risk with accelerated learning cycles
  • Accelerated time-to-market
  • Improved solution quality and shortened lead time for fixes
  • Reduced severity and frequency of defects and failures
  • Improved Mean Time to Recover (MTTR) from production incidents

A critical component of the CALMR mindset is the realization that DevOps often forces significant change within established enterprises. Enterprises are complex systems with diverse people, values, processes, policies, and technology. Careful attention must be given to effectively cultivating and maturing DevOps in these environments.

After more than a decade of experimentation and learning, the DevOps community has discovered that effective DevOps entails a deep appreciation for culture, automation, lean flow, measurement, and sharing (CALMS). DevOps requires directing energy toward each area—not necessarily equally, but in balance—to achieve desired outcomes.

SAFe echoes this belief with one modification: sharing is treated as a natural component of culture, which makes room for ‘recovery’ as a new element. Hence, SAFe’s ‘CALMR’ approach to DevOps (Figure 1).

Figure 1. SAFe’s CALMR approach to DevOps

CALMR includes five elements that define DevOps excellence. These elements guide the decisions and actions of everyone involved in enabling continuous value delivery.



Culture

In SAFe, DevOps leverages the culture created by adopting the entire Framework’s Lean-Agile values, principles, and practices. Every tenet of SAFe, from Principle #1 – Take an economic view to Principle #10 – Organize around value, applies to DevOps. It enables shifting some operational responsibilities upstream while following development work downstream into deployment, operation, and monitoring of the solution in production. Such a culture requires:

  • Customer-centricity – Value is determined by an enterprise’s ability to sense and respond to customer needs; therefore, everyone in the value stream must be guided by a shared understanding of their customers.    
  • Collaboration – DevOps relies on the ability of development, operations, security, and other teams to partner effectively on an ongoing basis, ensuring that solutions are developed, delivered, and maintained in lockstep with ever-changing business needs.
  • Risk tolerance – DevOps requires widespread acknowledgment that every release is an experiment until validated by Customers and that many experiments fail. DevOps cultures reward risk-taking, continuous learning, and relentless improvement.
  • Knowledge sharing – Sharing ideas, discoveries, practices, tools, and learning across teams, ARTs, and the broader organization unifies the enterprise and enables skills to shift left.


Automation

DevOps recognizes that manual processes are the enemy of fast value delivery, high productivity, and safety. Manual processes increase the probability of errors in the delivery pipeline, particularly at scale. These errors, in turn, cause rework, which delays desired outcomes.

Automating the Continuous Delivery Pipeline (CDP) via an integrated ‘toolchain’ (Figure 2) accelerates processing time and shrinks feedback cycles. This feedback—from customers, stakeholders, solutions, infrastructure, and the pipeline—provides objective evidence that solutions are (or are not) delivering the expected value.

Figure 2. A Conceptual CDP toolchain

Building and operating a CDP toolchain typically involves the following categories of tools:

  • Value Stream Management (VSM) – VSM tools ‘wrap’ the CDP from end to end, providing real-time visibility into the health and efficiency of the value stream itself.
  • Version Control – These tools store and manage changes to source code and configuration files that define the behavior of solutions, systems, and infrastructure.
  • Infrastructure as code (IaC) – As a discipline, infrastructure-as-code treats all systems as highly configurable, expendable commodities. Tools in this category enable all computing infrastructure to be built, deployed, changed, and destroyed on demand.
  • Test Automation – Test automation can be a significant source of delivery acceleration. It applies to almost all testing types, including unit, component, integration, regression, performance, acceptance, and usability testing. Exploratory testing, by its nature, still requires manual input from those testing the solution.
  • Vulnerability Detection – These tools span much of the CDP and are specifically designed to detect security vulnerabilities in code, networks, and infrastructure.
  • CI/CD – Continuous Integration (CI) and Continuous Deployment (CD) tools are typically invoked automatically upon code commit and orchestrate build, integration, testing, compliance, and deployment activities.
  • Monitoring and Analytics – These tools collect usage and performance data from all levels of the solution stack and provide critical insights into pipeline flow, solution quality, and delivered value.
  • Additional tools – The tools above tend to be used universally; however, many others support DevOps but are implementation specific. These include IDE plugins, microservices, artifact repositories, cloud management, and chaos engineering.
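As a concrete illustration of how CI/CD tools orchestrate these automated stages, here is a minimal Python sketch. The stage names and functions are hypothetical, not the API of any real pipeline tool; an actual orchestrator would invoke build servers, test runners, and vulnerability scanners rather than local functions.

```python
# Hypothetical sketch of a CI/CD orchestrator chaining automated stages.
# Every function and name here is illustrative, not any real tool's API.

def build(artifact):
    # Compile/package step; here it just marks the artifact as built.
    artifact["built"] = True
    return True

def run_tests(artifact):
    # Automated test stage; fails fast if the artifact was never built.
    return artifact.get("built", False)

def security_scan(artifact):
    # Stand-in vulnerability check; a real scanner inspects code and deps.
    return "eval(" not in artifact.get("source", "")

def deploy(artifact):
    artifact["deployed"] = True
    return True

def run_pipeline(artifact, stages):
    """Run stages in order; stop the line at the first failure."""
    for name, stage in stages:
        if not stage(artifact):
            return f"failed at {name}"
    return "released"

STAGES = [("build", build), ("test", run_tests),
          ("scan", security_scan), ("deploy", deploy)]

result = run_pipeline({"source": "print('hello')"}, STAGES)
```

Because every stage runs automatically on each change, feedback arrives in minutes rather than at the end of a release cycle, which is the point of the toolchain above.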

Lean Flow

Agile Teams and ARTs strive to achieve a state of continuous flow, enabling new features to move quickly from concept to cash. The key to accelerating flow is reflected in Principle #6 – Make value flow without interruption. Faster flow can best be achieved by adopting all eight ‘flow accelerators’ described in that principle.

These powerful accelerators of value apply to all Framework levels, but the challenges differ for each. An individual SAFe article discusses how to use these accelerators for each flow domain: Team Flow, ART Flow, Solution Train Flow, and Portfolio Flow. Three of these accelerators, however, are particularly relevant to the ongoing optimization of the CDP. Each is described below in the context of DevOps.

Figure 3. The ART Kanban helps visualize and limit WIP
  • Visualize and limit Work in Process (WIP) – Figure 3 illustrates an example of an ART Kanban board, which makes WIP visible to all stakeholders. Kanban boards help teams quickly identify bottlenecks and balance the amount of WIP against the available development and operations capacity.
  • Work in smaller batches – Small batches go through the system faster and with less variability than large batches. This accelerator supports more frequent deployments and speeds up the learning process. Reducing batch sizes typically involves focusing more attention on and increasing investment in infrastructure and automation that reduces the transaction cost of each batch.
  • Reduce queue lengths – The size of the queue of work to be done is a predictor of the amount of time it will take to complete the job, no matter how efficiently it is processed. Fast flow is achieved by closely managing and generally reducing queue lengths—the shorter the queue, the quicker the delivery.
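The relationship between queue length and delivery time in the last accelerator follows Little’s Law: average lead time equals average WIP divided by average throughput. The figures in this Python sketch are hypothetical, purely to illustrate the effect of shortening the queue.

```python
# Little's Law sketch: average lead time grows with queue length (WIP).
# The numbers below are hypothetical, chosen only to show the relationship.

def average_lead_time(avg_wip, throughput_per_week):
    """Little's Law: lead time = WIP / throughput (in the same time units)."""
    return avg_wip / throughput_per_week

# An ART completing 10 features/week with 40 features in process:
long_queue = average_lead_time(avg_wip=40, throughput_per_week=10)   # 4.0 weeks
# Halving WIP halves expected lead time at the same throughput:
short_queue = average_lead_time(avg_wip=20, throughput_per_week=10)  # 2.0 weeks
```

Note that the law holds regardless of how efficiently individual items are processed, which is why limiting WIP and trimming queues accelerate flow even when no one works faster.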


Measurement

Achieving extraordinary business outcomes with DevOps requires the CDP to be highly optimized. Solutions, the systems on which they run, and the processes by which they are delivered and supported have to be frequently fine-tuned for maximum performance and value.

The decisions of what to optimize, how to optimize, and how frequently can be guided by Principle #1 – Take an economic view and Principle #5 – Base milestones on an objective evaluation of working systems, not merely intuition. The ability to accurately measure delivery effectiveness and feed that information into relentless improvement efforts is critical to the success of DevOps.

The next question is, what metrics should be tracked, and from what sources? While every enterprise and delivery pipeline is somewhat different, the following guidelines apply universally.

  • Measure pipeline flow – The health of the delivery pipeline itself can make or break a solution. The Development Value Stream needs to evolve into a CDP to achieve business agility.

Flow measurements, described in Measure & Grow, focus on throughput and lead time from concept (Customer request) to cash (delivery to the Customer) and are derived from the people and tools that perform design, development, testing, deployment, and release activities. For example:

    • The Flow Framework defines four flow metrics (flow velocity, flow efficiency, flow time, and flow load) and tracks the “Flow Distribution” of features, defects, risks, and debts in the pipeline.[2]
    • Google considers end-to-end lead time and deployment frequency to be the most crucial pipeline performance indicators.[3]
  • Measure solution quality – DevOps cultures stress the importance of shifting technical practices left (earlier). This practice ensures that quality is built into solutions during development rather than ‘inspected in’ as defects are discovered later.

Quality metrics gauge adherence to functional, nonfunctional, security, and compliance requirements, which are best obtained via automated testing tools before release. The Flow Framework categorizes these as quality metrics,[2] while Google focuses specifically on change failure rates.[3]

  • Measure solution value – A streamlined pipeline is worthless if it simply accelerates the delivery of products nobody wants. Therefore, measuring the business value of the work exiting the pipeline is essential. These metrics gauge economic outcomes and Customer (or end-user) satisfaction, which are evaluated against forecasted results defined as part of the original business hypothesis.

Value metrics are sourced from full-stack telemetry, analytics engines, financial systems, and feedback from users and stakeholders. The Flow Framework presents these as ‘business results’ with specific metrics to track value, cost, and happiness.[2] Google adds time to restore—equivalent to the well-known Mean Time to Restore (MTTR)—since production failures can rapidly diminish the value of deployed solutions.[3]
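As an illustration of how such pipeline metrics might be derived, the sketch below computes deployment frequency, change failure rate, and mean time to restore from a handful of deployment records. The record format and numbers are hypothetical; in practice, this data would come from CI/CD and monitoring tools.

```python
# Hedged sketch: deriving pipeline metrics from deployment records.
# The record schema and figures are hypothetical, for illustration only.

deployments = [
    {"day": "2023-03-01", "failed": False, "restore_minutes": 0},
    {"day": "2023-03-03", "failed": True,  "restore_minutes": 45},
    {"day": "2023-03-06", "failed": False, "restore_minutes": 0},
    {"day": "2023-03-08", "failed": True,  "restore_minutes": 15},
]

def change_failure_rate(deps):
    """Fraction of deployments that caused a failure in production."""
    return sum(d["failed"] for d in deps) / len(deps)

def mean_time_to_restore(deps):
    """Average minutes to restore service across failed deployments."""
    failures = [d for d in deps if d["failed"]]
    return sum(d["restore_minutes"] for d in failures) / len(failures)

def deployment_frequency(deps, days):
    """Deployments per day over the observation window."""
    return len(deps) / days

cfr = change_failure_rate(deployments)        # 0.5
mttr = mean_time_to_restore(deployments)      # 30.0 minutes
freq = deployment_frequency(deployments, 7)   # ~0.57 deployments/day
```

Tracked over time, these figures provide the objective evidence of working systems that Principle #5 calls for, rather than relying on intuition.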


Recovery

It’s imperative to design the CDP for low-risk releases and fast recovery from operational failure to support frequent and sustained value delivery. The Release on Demand article describes techniques for a more flexible release process. In addition, the following practices support fast recovery:

  • Stop-the-line mentality – With a ‘stop-the-line’ mentality, any issue compromising solution value causes team members to stop what they are doing and swarm on the problem resolution. Learnings are then turned into permanent fixes to prevent the issue from recurring.
  • Plan for and rehearse failures – Even with DevOps, failed deployments can still occur. To minimize their impact and maximize the resiliency of solutions, teams should develop recovery plans and practice them often in production or production-like environments. (See ‘Chaos Monkey’ [4].)
  • Fast fix forward and roll back – Since production failures are inevitable, teams need to develop the capability to quickly ‘fix forward’ and, where necessary, move back to a known stable state. Fixes must flow through the same process as any feature or enhancement; therefore, the CDP should accommodate any type of change at any level of severity.
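The roll-back path in the last practice can be sketched as follows. This is a simplified illustration with hypothetical function names, not a real deployment tool’s API: an unhealthy release is replaced by redeploying the last known stable version.

```python
# Simplified sketch of recovery: keep a healthy release in place,
# otherwise roll back to the last known stable version.
# All names here are hypothetical, not any real deployment tool's API.

def deploy(version, healthy=True):
    """Stand-in for a deployment; returns the resulting release state."""
    return {"version": version, "healthy": healthy}

def recover(current, last_stable):
    """Roll back an unhealthy release to a known stable state.

    In practice, a forward fix would flow through the same CDP as any
    feature or enhancement rather than bypassing the pipeline.
    """
    if current["healthy"]:
        return current
    return deploy(last_stable["version"])  # roll back

stable = deploy("1.4.2")
broken = deploy("1.5.0", healthy=False)   # rehearsed (injected) failure
recovered = recover(broken, stable)
```

Rehearsing this path in production-like environments, as the practices above suggest, is what makes recovery fast when a real failure occurs.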

Architecture, infrastructure, and skills challenges typically need significant improvement to enable fast, elegant recovery. Organizations often undertake special enterprise-level initiatives to evolve these capabilities.

More in the DevOps Series

Article 1: DevOps Main Page

Article 2: A CALMR Approach to DevOps (this page)

Article 3: SAFe’s DevOps Practice Domains


Learn More

[1] Humble, Jez, and David Farley. Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation. Addison-Wesley, 2010.

[2] Kersten, Mik. Project to Product: How to Survive and Thrive in the Age of Digital Disruption with the Flow Framework. IT Revolution Press, 2018.

[3] Accelerate: State of DevOps 2019. DORA and Google Cloud, 2019.

[4] The Netflix Simian Army.


Last update: 14 March 2023