Inspection does not improve the quality, nor guarantee quality. Inspection is too late. The quality, good or bad, is already in the product. Quality cannot be inspected into a product or service; it must be built into it.
—W. Edwards Deming
Built-In Quality is a set of practices to help ensure that the outputs of Agile teams in business and technology domains meet appropriate quality standards throughout the process of creating customer value.
To support Business Agility, enterprises must continually respond to market changes. The quality of the work products that drive business value directly determines how quickly the teams can deliver their solutions. Although work products vary by domain, they are likely to involve software, hardware designs, scripts, configurations, images, marketing materials, contracts, and other elements. Products built on stable foundations that follow standards are easier to change and adapt. Built-in quality is even more critical for large solutions, as the cumulative effect of even minor defects and wrong assumptions may create unacceptable consequences.
Building quality in requires ongoing training and commitment. But the benefits warrant the investment and include:
Higher customer satisfaction
Improved velocity and delivery predictability
Better system performance
Improved ability to innovate, scale, and meet compliance requirements
Built-in quality is linked to the fast flow of value described in SAFe Principle #6: Make value flow without interruptions. Shifting learning left on the timeline accelerates problem discovery and enables corrective action sooner. Improved collaboration, workflow automation, more frequent delivery, and faster customer feedback support a quicker learning process.
SAFe applies Built-in Quality across five key domains. Each domain has a set of quality practices that vary from universally applicable generic practices to those specific to one or a few domains.
Figure 1 provides a consolidated view of Built-in Quality in SAFe.
The rest of this article describes the components of Figure 1 in deeper detail.
Built-in Quality Domains
Built-in Quality practices vary based on the domains in which they are applied. Although the intent of creating customer value is the same everywhere, the actual practices reflect the intricacies of each domain's environment and context. The following are the Built-in Quality domains in SAFe:
Business functions include marketing, sales, HR, finance, supply chain management, and other non-IT disciplines. Along with routine operations, each function also includes complex efforts requiring specific quality outputs for success. For example, creating a new marketing campaign or establishing new HR policies involves certain quality expectations.
Software is an essential contributor to business agility, enabling the enterprise to scale and compete more effectively in the digital age. But seizing such opportunities requires maintaining predictable quality when delivering solutions.
IT infrastructure powers vast ecosystems of today’s enterprise solutions landscape. The more complex the solutions, the more sophisticated the IT systems must be to sustain them. To support the reliable operation of the enterprise, IT systems require substantial quality standards and, therefore, proper quality practices.
When used in computer technology, hardware typically refers to cables, monitors, integrated circuits, and other tangible elements of a computer system. But more generally, hardware refers to devices with concrete physical properties: mass, size, and matter. Examples include motors, gears, tools, chassis, cases, and simple or complex mechanisms. Due to their significantly higher cost of change, hardware systems require a unique approach to quality.
Cyber-physical systems are complex systems wherein multiple physical elements are controlled by software algorithms. Examples include robots, aircraft, and automobiles. These are some of the world’s most complex systems and often include intricate electrical, mechanical, optical, fluidic, sensory, and other subsystems. Their complexity and the high impact of failure emphasize the critical importance of quality in such systems.
Basic Agile Quality Practices
Basic Agile quality practices can be applied to work products in any domain. They have proven their worth and provide a common starting point for knowledge workers to understand and improve the quality attributes of the artifacts, work products, systems, and services that benefit themselves and their customers. A set of five SAFe Basic Agile Quality Practices are described in the sections below.
Shift Learning Left
Every development effort involves numerous unknowns that surface as development progresses and teams learn new facts. If the learning happens late in the process, underlying issues have a greater impact on the solution, resulting in substantial rework and delays. However, if learning takes place much earlier—or is shifted left—problems reveal themselves sooner, enabling corrective action with minimum impact (Figure 2).
Shifting learning left does not simply mean that some actions take place earlier on the timeline but also that the structure of some of the basic processes is changed. For example, a test-first approach requires shifting away from conventional testing. Instead, tests are created whenever possible before the desired solution functions are implemented.
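The test-first approach described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not a SAFe-prescribed implementation: the `slugify` function and its behavior are invented for the example. The test is written before the function exists, capturing the intended behavior up front.

```python
# A minimal test-first sketch: the test is written before the implementation.
# The function name `slugify` and its behavior are hypothetical examples.

def test_slugify():
    # Written first, this test states the desired behavior before any code exists.
    assert slugify("Built-In Quality") == "built-in-quality"
    assert slugify("  Shift  Left ") == "shift-left"

# Only after the test exists is the function implemented to make it pass.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

test_slugify()
print("all tests pass")
```

Running the test before the implementation exists fails fast ("red"), and the implementation is then written only to make it pass ("green"), which is the structural change the paragraph describes.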
Pairing and Peer Review
Pair work describes a practice wherein two knowledge workers collaborate over the same asset in real time. Often, one serves as the driver, directly advancing the work product, while the other acts as the navigator, providing real-time evaluation and feedback. Team members switch roles frequently. Because the work product will contain each member’s shared knowledge, perspectives, and best practices, pairing creates and maintains higher quality. As teammates learn from each other, the skillsets of the entire team rise and broaden. Additionally, peer review helps spot quality issues as one team member examines the work products of the other. Many governance processes around software, for example, mandate peer review as a compliance activity.
Collective Ownership and T-shaped Skills
Collective Ownership is a quality practice where individual team members have the requisite skills and authority to update any relevant asset. This approach reduces dependencies between teams and ensures that any individual team member or team will not block the fast flow of value delivery. Any individual can add functionality, fix errors, improve designs, or refactor because the work product is not owned by one team or individual. Collective ownership is supported by quality standards that encourage consistency, enabling everyone to understand and maintain the quality of each component. Collective ownership is further enabled by ‘T-shaped skills.’ T-shaped skills characterize individuals who possess deep experience in one area but also have broad skills in other areas. T-shaped skills also represent the ability to work well with others.
Artifact Standards and Definition of Done
Assets created and maintained by the organization must adhere to standards that help ensure their value to the business. These standards may reflect how the artifacts are being built or what properties they must manifest. Standards are often unique to the specific organization and solution context, emerging gradually, validated frequently, and corrected by multiple feedback cycles. To productively maintain artifact standards, the teams must understand the motivations for their existence. Artifact design practices and the effective use of automation help facilitate standards. Enacting productive artifact standards involves applying a definition of done (DoD) – an essential way of ensuring that a work product is complete and correct. Each team, train, and enterprise should build a DoD that suits their needs.
Workflow Automation
Workflows tend to have many manual steps: handoffs from one worker to another, searching for an asset of interest, and manually inspecting an asset against a standard, to name a few. These manual steps are error-prone and cause delays in the process. Many of them can be automated if teams invest in a pipeline that supports their activities. Automation provides substantial gains through reduced execution costs and intrinsic adherence to standards. This can be done incrementally, often starting by putting a Kanban system in place and then noting which steps can be automated. Sometimes the first step is simply setting up automated notifications when an item changes state. Simpler still, many such systems are designed as true pull systems, where workers check the system to see what work is available to them based on its state. In this case, the handoff is automatic and doesn't require separate communication overhead just to know the state of a work product.
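The pull-system idea can be sketched as follows. All names here (`WorkItem`, `Board`, the states) are hypothetical illustrations, not part of any specific Kanban tool: workers query the board for items in the state relevant to them, so the handoff itself carries no communication overhead.

```python
# Illustrative sketch of a simple pull system (all names are hypothetical).
from dataclasses import dataclass

@dataclass
class WorkItem:
    name: str
    state: str  # e.g., "ready", "in-review", "done"

class Board:
    def __init__(self):
        self.items = []

    def add(self, item: WorkItem):
        self.items.append(item)

    def pull(self, state: str):
        # A worker pulls the next item in the state relevant to them,
        # rather than waiting to be told that work has been handed off.
        for item in self.items:
            if item.state == state:
                return item
        return None

board = Board()
board.add(WorkItem("Draft campaign brief", "ready"))
board.add(WorkItem("Review HR policy", "in-review"))

next_item = board.pull("in-review")
print(next_item.name)  # a reviewer finds work by its state
```

State changes on the board replace manual notifications; automating the next step (alerts, assignment) can then be layered on incrementally.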
Business Quality Standards
The above sections describe a set of five basic Agile quality practices that can be applied to every business domain. Virtually every aspect of business operations—accounting and finance, legal, sales, development, HR, marketing, operations, production, and more—is subject to internally or externally imposed quality standards, which are often linked to compliance requirements. Each business function produces specific outputs, which must satisfy quality standards relevant to that context.
No matter your business function, the steps to achieve quality with Agility include the following:
Organize into Agile teams, get trained, and iterate.
Define the standards and compliance policies for your function.
Agree on the definition of done (DoD) for artifacts and activities for your workflow.
Implement the basic Agile quality practices.
Measure and learn. Specialize Agile quality practices further to your specific function.
Agile Software Development Quality Practices
Software may well be the richest and best-defined area for applying Built-in Quality. This was driven by necessity, as software is exceedingly complex and intangible. You can’t touch it or see it, so traditional approaches to inspecting, measuring, and testing are inadequate. If quality isn’t built in endemically, then it’s unlikely to exist at all. To address this new challenge, many new quality practices like those above were inspired by Extreme Programming (XP), which has a zest for going fast with quality. They have proven their worth and have now started influencing quality practices in other domains. The practices below apply well to software development, and we will describe them in that context, but they can be applied to other domains as well.
Continuous Integration
Building large-scale value requires knowledge workers to build the system in increments, resulting in frequent small changes. Each must be continually checked for conflicts and errors and integrated with the rest of the system to assure compatibility and forward progress. Continuous Integration (CI) provides developers with fast feedback (Figure 3). Each change is quickly built, integrated, and then tested at multiple levels. CI automates the process of testing and migrating changes through different environments, notifying developers when tests fail.
Continuous integration is vital within and across teams, allowing them to quickly identify and resolve issues in all parts of the codebase.
Test-First
Agile teams operate in a fast, flow-based system to develop and release high-quality business capabilities quickly. Instead of performing most of the testing at the end, Agile teams define and execute many tests early and often as a part of their integration process. Tests are defined for small units of code using Test-Driven Development (TDD), for Story, Feature, and Capability acceptance criteria using Behavior-Driven Development (BDD), and for the feature or capability benefit hypothesis using Lean UX (Figure 4). Building quality in ensures that Agile development's frequent changes do not introduce new errors while enabling fast, reliable execution.
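A BDD-style acceptance test can be expressed as Given/When/Then steps over plain assertions. The `ShoppingCart` class below is a hypothetical example, not from SAFe or any specific BDD framework; real teams typically use tools such as Cucumber or behave, but the structure is the same.

```python
# A sketch of a BDD-style acceptance test: Given/When/Then comments over
# plain assertions. The ShoppingCart class is a hypothetical example.

class ShoppingCart:
    def __init__(self):
        self.items = {}

    def add(self, sku: str, qty: int = 1):
        self.items[sku] = self.items.get(sku, 0) + qty

    def total_quantity(self) -> int:
        return sum(self.items.values())

def test_adding_an_item_increases_quantity():
    # Given an empty cart
    cart = ShoppingCart()
    # When the customer adds two units of one item
    cart.add("SKU-1", 2)
    # Then the cart reflects that quantity
    assert cart.total_quantity() == 2

test_adding_an_item_increases_quantity()
print("acceptance criterion satisfied")
```

Because the acceptance criterion is executable, it runs with every integration instead of only at the end, which is the shift-left the section describes.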
Refactoring
Constantly changing technology and evolving business objectives make it difficult to maintain and continually increase business value. However, two paths to the future exist:
Keep adding new functionality to an existing code base toward an eventually unmaintainable ‘throw-away’ state
Continuously refactor the system to build a foundation for efficiently delivering the current business value as well as future business value
Refactoring, which improves the internal structure or operation of an area of code without changing its external behavior, is better. With continuous refactoring, the useful life of an enterprise’s investment in software assets can be extended substantially, allowing users to benefit from a flow of value for years to come. But refactoring takes time, and the return on investment is not immediate, so an allowance for time and effort must be part of capacity planning considerations. For more, see the extended guidance article on Refactoring.
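A minimal sketch of a behavior-preserving refactoring follows. The invoice functions and the discount rule are invented for illustration; the point is that the same test passes before and after, confirming the external behavior is unchanged while the internal structure improves.

```python
# A minimal refactoring sketch: internal structure improves while external
# behavior stays identical, guarded by the same test. All names are
# hypothetical examples.

def invoice_total_before(prices):
    # Original: inline discount logic, harder to extend or reuse.
    total = 0
    for p in prices:
        if p > 100:
            total += p * 0.9
        else:
            total += p
    return total

def _discounted(price):
    # Refactored: the discount rule is extracted into a named helper.
    return price * 0.9 if price > 100 else price

def invoice_total_after(prices):
    return sum(_discounted(p) for p in prices)

# The same check passes before and after, confirming behavior is unchanged.
sample = [50, 200]
assert invoice_total_before(sample) == invoice_total_after(sample) == 230.0
print("refactoring preserved behavior")
```

The guarding test is what makes continuous refactoring safe: without it, "improving structure" and "changing behavior" are indistinguishable.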
Continuous Delivery
Continuous delivery provides the ability to release value to customers whenever they need it. This is accomplished by the Continuous Delivery Pipeline (CDP), which contains four aspects: continuous exploration, continuous integration, continuous deployment, and release on demand. The CDP enables organizations to map their current pipeline into a new structure and use relentless improvement to deliver value to customers. Feedback loops internally within and between the steps and externally between the customers and the enterprise fuel improvements. Internal feedback loops often center on process improvements; external loops often center on solution improvements. The improvements collectively create synergy, ensuring the enterprise is ‘building the right thing, the right way’ and frequently delivering value to the market. Additionally, SAFe DevOps features crucial practice domains for establishing fast and reliable value delivery mechanisms.
Continuous delivery helps SAFe teams release on demand. Releasing with quality, however, requires a specific, scalable definition of done that helps ensure that the requisite quality is built in. Figure 5 shows an example:
To support security practices, teams generate a Software Bill of Materials (SBOM) for each release, describing the commercial and open-source components and dependencies so that known vulnerabilities can be identified and addressed.
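The SBOM check can be sketched as follows. The component data and vulnerability list below are made-up illustrations; real SBOMs use standardized formats such as SPDX or CycloneDX with far more detail, and vulnerability data would come from a feed such as a CVE database.

```python
# A simplified sketch of checking an SBOM's components against a list of
# known-vulnerable versions. All data here is illustrative.

sbom = [
    {"name": "libexample", "version": "1.2.0"},
    {"name": "webframework", "version": "4.1.3"},
]

# In practice this set would be populated from a vulnerability feed.
known_vulnerable = {("libexample", "1.2.0")}

flagged = [
    c for c in sbom
    if (c["name"], c["version"]) in known_vulnerable
]

for component in flagged:
    print(f"vulnerable component: {component['name']} {component['version']}")
```

Because the SBOM is regenerated per release, this check can run inside the pipeline and block a release that ships a component with a known vulnerability.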
Agile Architecture
Agile Architecture is a set of values, practices, and collaborations that support a system’s active, evolutionary design and architecture. It embraces the DevOps mindset, allowing a system’s architecture to evolve continuously while simultaneously supporting the needs of current users.
Agile architecture supports Agile development practices through collaboration, emergent design, intentional architecture, and design simplicity. It also enables designing for testability, deployability, and changeability. Rapid prototyping, set-based design, domain modeling, and decentralized innovation, in turn, support Agile architecture.
The essential concept of Architectural Runway allows Agile teams and trains to provide effective enablement for future business capabilities and features while progressively validating underlying architectural assumptions.
IT Systems Quality Practices
Every modern enterprise depends on properly functioning IT systems for its business success. With more and more business workflows being powered by IT, ensuring the reliability, scalability, safety, and security of IT systems becomes increasingly important. It requires a robust approach to building quality into these systems. A sample of IT-specific quality practices is described below.
Infrastructure as Code
One of the critical challenges in ensuring the quality of IT ecosystems comes from defining and sustaining configurations consistently. Often representing hundreds or even thousands of environment parameters, configurations grow out of sync and cause problems in different parts of the enterprise’s solution landscape. ‘Infrastructure as Code’ is an approach to control those configurations programmatically and thus benefit fully from automation in defining, procuring, and maintaining configurations consistently and integrally. Containerization is an excellent enabler of Infrastructure as Code, as it permits applying programming interfaces to various aspects of the execution environment. Additionally, using ‘immutable infrastructure’—an approach where IT components are rebuilt whenever needed, rather than modified in production—forces the organization to explicitly control all changes to the environment by formally redefining them and redeploying the component that changed.
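Conceptually, Infrastructure as Code declares the desired environment as data and reconciles the actual state toward it. The sketch below is a toy model with hypothetical resource names; real tooling such as Terraform or Ansible performs this reconciliation against actual infrastructure APIs.

```python
# A conceptual sketch of Infrastructure as Code: the desired environment is
# declared as data, and an idempotent `apply` reconciles actual state toward
# it. All resource names and attributes are hypothetical.

desired = {
    "web-server": {"cpu": 2, "memory_gb": 4},
    "database": {"cpu": 4, "memory_gb": 16},
}

actual = {
    "web-server": {"cpu": 2, "memory_gb": 2},  # drifted from the declaration
}

def apply(desired, actual):
    changes = []
    for name, config in desired.items():
        if actual.get(name) != config:
            actual[name] = dict(config)  # create or rebuild to match
            changes.append(name)
    return changes

print(apply(desired, actual))  # reconciles the drifted and missing resources
print(apply(desired, actual))  # second run makes no changes: idempotent
```

The second run returning no changes is the property that keeps configurations from growing out of sync: any drift is detected and corrected on the next apply rather than accumulating.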
NFRs and SLAs
IT infrastructure must provide certain qualities to the execution environment to support the systems essential to business operation. These quality attributes include security, reliability, performance, maintainability, and scalability (Nonfunctional Requirements, or NFRs). Additionally, relevant Service-Level Agreements (SLAs), such as Mean Time Between Failures (MTBF) and Mean Time to Repair (MTTR), must be ensured. In SAFe, NFRs and SLAs are achieved incrementally by early and continuous testing and timely corrective action. Ensuring that systems meet their NFRs and SLAs requires instrumentation and the proactive build and use of the architectural runway.
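MTBF and MTTR relate directly to an availability target via the standard steady-state approximation, availability = MTBF / (MTBF + MTTR). The figures below are illustrative, not from any real SLA.

```python
# Steady-state availability from MTBF and MTTR, using the standard
# approximation availability = MTBF / (MTBF + MTTR).
# The numbers are illustrative examples only.

mtbf_hours = 1000.0   # mean time between failures
mttr_hours = 2.0      # mean time to repair

availability = mtbf_hours / (mtbf_hours + mttr_hours)
print(f"availability: {availability:.4%}")
```

The formula makes the lever visible: with a fixed MTBF, halving MTTR (e.g., through automation and telemetry) directly raises availability, which is why SLAs often track both metrics.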
Telemetry and Monitoring
Responding to unanticipated loads, security attacks, and hardware, software, and network failures requires a range of options, from downgrading or removing services to adding service capacity. Telemetry and logging capabilities allow organizations to understand and fine-tune their architecture and operating systems to meet intended loads and usage patterns. Effective monitoring requires that full-stack telemetry is active for all features deployed through the CDP. Monitoring ensures that issues with system performance can be anticipated or addressed rapidly in production.
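The idea of instrumenting features as they ship can be sketched with a few counters emitted as structured log lines. The event names are hypothetical; production systems would use dedicated libraries such as OpenTelemetry rather than hand-rolled counters.

```python
# A minimal telemetry sketch: counters emitted as structured log lines that
# a monitoring system could ingest. Event names are hypothetical examples.

import json
import time
from collections import Counter

metrics = Counter()

def record(event: str):
    # Increment a named counter each time the instrumented code path runs.
    metrics[event] += 1

def emit():
    # Emit a timestamped structured snapshot for a monitoring backend.
    print(json.dumps({"ts": time.time(), "metrics": dict(metrics)}))

record("checkout.started")
record("checkout.started")
record("checkout.failed")
emit()
```

Even this minimal form supports the practices above: a rising `checkout.failed` count surfaces a production issue before users report it, enabling the rapid response the section describes.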
Cybersecurity
IT environments must meet increasingly stringent quality standards to protect against unauthorized access, use, disclosure, or destruction. The spectrum of activities to achieve comprehensive cybersecurity includes:
Frequent testing and validation (audits, penetration testing, etc.)
Training and proper habits for the workforce
Testing of all new assets for various vulnerabilities
Frequent review of new vulnerability alerts against the existing solution’s SBOM, providing patches or hotfixes for affected components
Automated Governance
Recent advances in DevOps and related methods, practices, and tooling provide new opportunities for IT teams to automate governance. Automated governance replaces tedious, manual, and error-prone activities and specifically addresses security, compliance, and audit needs. For more on this topic, see the reference: Investments Unlimited: A Novel About DevOps, Security, Audit Compliance, and Thriving in the Digital Age.
Automation of configuration management, audit, security testing (during both build and deployment), and immutable infrastructure help reduce human error that can lead to system vulnerabilities.
Agile Hardware Engineering Quality Practices
Ensuring quality in hardware systems and components is complicated because the cost of change increases with time, and the impact of quality issues with hardware is high. This can include catastrophic field failure, recalls of volumes of manufactured products, and expensive field replacement or repair. This risk pressures organizations to effectively apply Built-in Quality practices while developing engineered hardware systems and subsystems. There are several techniques organizations use to ensure Built-in Quality in hardware systems, which are described below.
Modeling and Simulation
In Agile, the goal is to build and learn as quickly as possible. Modeling and simulation in the virtual environment—and rapid modeling in the prototype environment—help shift learning left, as shown in Figure 6.
Analysis and simulation in digital models used in electrical and mechanical Computer-Aided Design (CAD) and MBSE (see below) can test changes quickly and economically. Digital twins combine multiple virtual models with data harvested from telemetry in the operational systems to improve the models and better predict how systems will behave in the future. The feedback loops in Figure 6 show how data from other environments validate and improve the digital environment. Some aerospace and automotive products even use model simulations for certification, substantially reducing the time and cost of changes.
The virtual environment cannot reveal all issues. Physical prototypes are a lower-cost substitute for real, “bent metal” hardware. They provide higher-fidelity feedback, available only in a physical environment. Example prototype practices include:
Wood and other low-fidelity mockups
Breadboarding electrical components
3D-printed mechanical and electrical parts (PCBs, wiring harnesses)
Increasingly, additive manufacturing is used to lower the costs of rapid experimentation and prototyping. “Additive manufacturing uses data from computer-aided-design (CAD) software or 3D object scanners to direct hardware to deposit material, layer upon layer, in precise geometric shapes. As its name implies, additive manufacturing adds material to create an object. By contrast, when you create an object by traditional means, it is often necessary to remove material through milling, machining, carving, shaping, or other means.”
Many organizations with the equipment and knowledge to ‘print’ mechanical and electrical parts can produce and ship them in a single day. And parts made with additive manufacturing are now making their way into production.
Cyber-physical Systems Quality Practices
Cyber-physical systems require an organization to deal effectively with hardware components and the software that governs their behavior. Additionally, because such systems operate directly in the real world, the impact of quality issues can be significant, and such systems are often subject to regulatory compliance.
Model-Based Systems Engineering
Model-Based Systems Engineering (MBSE) is the practice of developing a set of related digital models that help define, design, and document a system under development. These models provide an efficient way to explore, update, and communicate system aspects to stakeholders while significantly reducing or eliminating dependence on traditional documents. By testing and validating system characteristics early with the model, they facilitate timely learning of properties and behaviors, enabling fast feedback on requirements and design decisions.
Frequent End-to-end Integration
In the software domain, continuous integration is the heartbeat of continuous delivery: It’s the forcing function that verifies changes and validates assumptions across the entire system. Agile teams invest in automation and infrastructure that builds, integrates, and tests every developer change, providing immediate feedback on errors.
Large, cyber-physical systems are far more challenging to integrate continuously because:
Long lead-time items may not be available
Integration spans organizational boundaries
Automation is rarely end-to-end
The laws of physics dictate certain limitations
Instead, frequent end-to-end integration addresses the economic tradeoffs of the transaction cost of integrating versus delayed knowledge and feedback (Figure 7).
The goal is frequent partial integration with at least one complete solution integration for each PI.
Beal, Helen, Bill Bensing, Jason Cox, Michael Edenzon, and John Willis. Investments Unlimited: A Novel About DevOps, Security, Audit Compliance, and Thriving in the Digital Age. IT Revolution Press, 2022.