By multiplying milestones, we transform a long, amorphous race into one with many intermediate ‘finish lines’. As we push through each one, we experience a burst of pride as well as a jolt of energy to charge towards the next one.
—Chip and Dan Heath, The Power of Moments
Facilitating SAFe Assessments
Note: This article is part of Extended SAFe Guidance and represents official SAFe content that cannot be accessed directly from the Big Picture.
The Lean-Agile transformation and the journey to Business Agility is a significant undertaking for every Enterprise. Many executives have commented that this transformation was one of the most difficult but most rewarding changes they have personally experienced in their careers.
The business benefits of business agility are clear: faster time to market for more innovative solutions; higher quality and productivity; higher levels of employee engagement; an opportunity for a new and enhanced culture; and ultimately, the ability to thrive in the digital age. But for those new to the endeavor, the question arises about where and how to begin. The SAFe Implementation Roadmap provides a proven pattern of activities to adopt SAFe successfully. And yet, even when moving down the roadmap, the question for the enterprise becomes: How do we know how we are doing? Are we growing in the right areas? What do we do about the deficiencies we know we have? Where should we target our next effort?
To reinforce and accelerate the SAFe transformation, leaders need to ‘measure and grow’ the implementation at various points along the journey. This will help maintain the energy and enthusiasm they are devoting to the short cycles of Iterations and PIs while setting their sights on the larger goals of true business agility. The SAFe assessments are key in identifying these improvement opportunities, and this article guides facilitating them successfully.
‘Measure and grow’ is the term we use to describe how SAFe portfolios evaluate their progress toward business agility and determine the next improvement steps. It describes how to measure the current state of a portfolio and grow to improve overall business outcomes. The Measure and Grow article describes how this is achieved via three measurement domains: Flow, Outcomes, and Competency. This article focuses on the competency domain, measured via two separate assessment mechanisms designed for significantly different audiences and purposes.
- The SAFe Business Agility Assessment (Figure 1) is designed for LPM and portfolio stakeholders to assess their overall progress on the ultimate goal of true business agility.
- The SAFe Core Competency Assessments (Figure 2) help teams and trains improve on the technical and business practices they need to help the portfolio achieve that larger goal.
Each assessment follows a standard process pattern of running the assessment, analyzing the results, taking action, and celebrating the victories.
SAFe Business Agility Assessment
The SAFe Business Agility Assessment is a high-level assessment that summarizes how Agile the business is at any point in time. The assessment report provides a visualization that shows progress measurements across the seven SAFe core competencies. An example report is shown below in Figure 1.
You can download the spreadsheet version of the assessment here.
Note for SAFe Studio Members: All the SAFe assessments are available for SAFe Studio Members online through our partner Comparative Agility. This provides additional data collection, analysis, comparison, and trending capabilities that can be used to improve performance. Access these from the Measure and Grow SAFe Studio page.
Running the Business Agility Assessment
Assessing business agility status is not a trivial feat. Opinions abound, the data is lumpy, and the ways of working are evolving simultaneously as the assessment is taking place. Therefore, simply sending the assessment out to various participants and asking them to fill in the data will probably not provide the right experience or accurate results. Instead, we recommend a facilitated session with someone trained in the nuances of SAFe and the assessment process. An experienced SAFe Practice Consultant (SPC) is probably a good choice.
Two assessment patterns can be used:
- Each participant fills out the assessment independently, and then the group discusses and analyzes the results together
- All participants discuss each statement together and reach a consensus on the score for each statement
Both patterns have their benefits and disadvantages. Trust the facilitator to pick the correct pattern based on group dynamics, distribution, and time frame.
Analyzing Business Agility Assessment Results
With the data from the assessment in hand, the next step is to analyze the results. During the analysis, it is essential to identify significant variances in opinion. The facilitator should review each area of disagreement and explore the differing views. These might stem from a different understanding of the statement itself or from disagreement about where the group is in the specific dimension. The goal is to explore the differences to get a better alignment of where improvement is needed. This is a significant part of the collaborative learning experience.
Core competencies that the group has assessed as problematic can then be explored to understand the reasons that drove people to score themselves low. In addition to pointing out areas needing improvement, the assessment allows portfolios to see visible improvements in performance or ‘wins.’ The wins are small multiplying milestones that encourage teams to consolidate those gains and produce more change, as Kotter’s model suggests.
The facilitator should also be aware of the Dunning-Kruger effect, in which people tend to assess their ability as greater than it really is. This means that core competencies that seem unnaturally high might also require an examination to ensure the group understands the meaning of the statements in question.
Taking Action on the Business Agility Assessment
Although high level, taking the business agility assessment is, in itself, a learning experience. Many questions directly set expectations of behaviors, activities, or outcomes that can be reasoned about and discussed. For example, a question about continuous learning, such as “the organization provides time and space for innovation,” is relatively straightforward, and the implied corrective action is obvious.
Figure 1 shows the enterprise scored low in Lean Portfolio Management (LPM). That could be because they are ineffective at it, but it’s more probable that the enterprise hasn’t started that part of the journey yet. In most cases, a quick look at the implementation roadmap will identify some fairly obvious next steps, with the goal of steadily improving proficiency across all seven core competencies.
LPM or the LACE should routinely re-evaluate their progress toward business agility, perhaps every other PI, and plan the next steps. The measurement frequency depends on the opportunities pursued and how fast the portfolio can reasonably achieve progress. Creating a baseline early on in the transformation, followed by periodic assessments, will illustrate improvement trends and allow everyone to communicate successes.
SAFe Core Competency Assessments
In most cases, assessing progress toward business agility spurs the enterprise to greater and more profound efforts. That leads the business to explore, measure, and take more specific action on some or all of the seven core competencies. Structured similarly to the business agility assessment, each core competency assessment has a set of statements, organized by dimension, rated on the same scale as above. The questions go one step deeper, into specific aspects and areas of opportunity and concern along each of the three dimensions of that particular competency. An example report is illustrated in Figure 2.
Running a Core Competency Assessment
As with the business agility assessment, the scope, audience, and process for an individual competency assessment must be purpose-built. Low results in the Agile product delivery competency might require that each Agile Release Train (ART) in the portfolio assess its progress in that dimension. Or perhaps an LPM assessment needs to bring in the right stakeholders. In any case, all the guidance and caveats above apply, and attention to culture and careful facilitation is necessary to get the right experience and results.
(Note: These more detailed Core Competency assessments can be downloaded from the bottom of this article.)
Analyzing Results of a Core Competency Assessment
The results of a competency assessment are summarized along three dimensions. But again, there is far more detail in the assessment, and far more learning than the figure alone implies. For example, here’s a sample of questions from the Built-in Quality dimension, which by themselves inform stakeholders and indicate improvement activities:
- Our team adheres to well-defined quality standards
- Our team practices both pairing and peer review
- Our team applies collective ownership to our work
- Our team’s testing practices catch defects early
Taking the assessment, whether business agility or core competency, is not a mechanical effort. It’s a fostered collaboration filled with learning, and it sets expectations and communicates intent. Therefore, even the simple act of taking the assessment will be a significant step towards improvement.
Additionally, it can be helpful to analyze the data in the following three ways:
- Highest and lowest average scores: Highest average scores represent those areas where there is the greatest success. Identifying these can help to highlight the results of previous improvement efforts, and these strengths can be amplified further as required. The lowest average scores likely represent candidates for the next areas of improvement.
- Most and least standard deviation: Often, the assessments will highlight differences in opinion. Comparing the standard deviation across the responses will illustrate where there is broad agreement on the progress being made and areas where there is disagreement. The latter warrants further investigation as it may point to siloed improvement efforts or challenges with communication or consistency of practice.
- Comparison to a benchmark: One of the significant benefits of the assessments is that they can be used to show improvement trends over time. Comparing against a previous data set will immediately demonstrate whether our improvement efforts have successfully delivered the expected benefits.
To help manage WIP and focus improvement efforts, it’s recommended to use this analysis to identify no more than five assessment statements that represent strengths and five statements that represent opportunities. The strengths will be amplified and celebrated, and the opportunities will be acted upon, as described in the next step.
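The three analysis techniques above are straightforward to automate. The following sketch uses hypothetical statements, scores, and a made-up benchmark (none of it is official SAFe tooling, and a 1–5 scale is assumed) to show how averages, standard deviations, and a benchmark comparison might be computed from raw responses:

```python
import statistics

# Hypothetical assessment responses: statement -> participant scores (1-5 scale assumed)
responses = {
    "Well-defined quality standards": [4, 5, 4, 4],
    "Pairing and peer review":        [2, 5, 1, 4],   # wide spread -> disagreement
    "Collective ownership":           [3, 3, 4, 3],
    "Testing catches defects early":  [2, 2, 1, 2],   # low average -> improvement candidate
}

# Assumed baseline averages from a previous assessment cycle
benchmark = {
    "Well-defined quality standards": 3.5,
    "Pairing and peer review":        3.0,
    "Collective ownership":           3.2,
    "Testing catches defects early":  2.5,
}

analysis = []
for statement, scores in responses.items():
    mean = statistics.mean(scores)
    stdev = statistics.stdev(scores)
    delta = mean - benchmark[statement]   # positive delta = improvement since last cycle
    analysis.append((statement, mean, stdev, delta))

# Highest averages suggest strengths; lowest suggest improvement opportunities.
# Cap how many are reviewed, to manage WIP as recommended above.
by_mean = sorted(analysis, key=lambda row: row[1], reverse=True)
strengths = by_mean[:2]
opportunities = by_mean[-2:]

# A large standard deviation flags statements that warrant facilitated discussion
disputed = [row for row in analysis if row[2] > 1.0]

for statement, mean, stdev, delta in analysis:
    print(f"{statement}: mean={mean:.2f} stdev={stdev:.2f} vs benchmark {delta:+.2f}")
```

With this toy data, the sketch would surface “Testing catches defects early” as an opportunity and “Pairing and peer review” as a statement needing discussion before it can be scored meaningfully.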
Taking Action from a Core Competency Assessment
The next step is to identify the activities that will increase proficiency. We refer to these as growth recommendations. Collectively brainstorm growth recommendations for each statement that represents an opportunity, then affinity group and dot vote to get to 1-3 growth recommendations per statement (Figure 3). Rarely is it the case that a single growth recommendation can address all the challenges highlighted in an assessment statement, and therefore multiple activities will ensure that all facets are covered.
Finally, across all the assessment statements being addressed, these growth recommendations become potential backlog items that can be prioritized as described in the next step.
Prioritizing Improvement Opportunities
To limit Work in Process (WIP) (see SAFe Principle #6) and ensure that something does indeed get done, it is helpful to prioritize the opportunities and choose one or two that will provide the most value immediately. Just as features in the ART Backlog are prioritized with Weighted Shortest Job First (WSJF), the same approach can be used to identify the best growth opportunity to pursue next.
A simple table to compare opportunities via WSJF is shown in Figure 4 below.
This approach will help the group select the improvement opportunities that yield the biggest impact with the least effort.
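As an illustration, WSJF divides the relative Cost of Delay (the sum of user-business value, time criticality, and risk reduction/opportunity enablement) by job size. The sketch below uses hypothetical opportunities and invented relative scores, not the values from Figure 4:

```python
# WSJF = Cost of Delay / Job Size, where Cost of Delay is the sum of
# relative user-business value, time criticality, and risk reduction /
# opportunity enablement (all on a relative scale, e.g. 1-10).

def wsjf(business_value, time_criticality, risk_reduction, job_size):
    """Weighted Shortest Job First: cost of delay divided by job size."""
    return (business_value + time_criticality + risk_reduction) / job_size

# Hypothetical improvement opportunities with made-up relative scores:
# (name, business value, time criticality, risk reduction, job size)
opportunities = [
    ("Train all Scrum Masters/Team Coaches", 8, 5, 3, 3),
    ("Run Value Stream / ART Identification Workshop", 9, 4, 8, 8),
    ("Adopt test automation for legacy components", 6, 3, 7, 5),
]

# Highest WSJF first: biggest impact for the least effort
ranked = sorted(
    opportunities,
    key=lambda o: wsjf(o[1], o[2], o[3], o[4]),
    reverse=True,
)

for name, bv, tc, rr, size in ranked:
    print(f"{name}: WSJF = {wsjf(bv, tc, rr, size):.2f}")
```

Because job size sits in the denominator, a modest opportunity that is cheap to execute (such as the training item above) can outrank a larger one, which is exactly the WIP-limiting behavior the prioritization step is after.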
The prioritized opportunities go into the LACE backlog, the Portfolio Backlog, or the ART backlog to be worked on as soon as possible. The backlog of choice depends on the opportunity. For example, an opportunity to restructure ARTs by running a Value Stream and ART Identification Workshop will likely be on the LACE backlog, while a recommendation to train all Scrum Masters/Team Coaches might belong in the ART or the portfolio backlog.
Core Competency Assessment Downloads
The following core competency assessments are available for download:
- Lean Portfolio Management
- Enterprise Solution Delivery
- Agile Product Delivery
- Team and Technical Agility
- Continuous Learning Culture
Lastly, change is hard. Continuous change is more challenging. Intelligent enterprises use small wins to celebrate progress and inspire people to the next achievement milestone. There are many opportunities to celebrate, such as when a portfolio, ART, or team moves from one level to the next in an assessment dimension, or perhaps even manages to change a single assessment statement from ‘mostly false’ to ‘mostly true.’ Celebrating successes creates the fuel needed for more improvement and advancement on the journey toward business agility.
These milestones can also provide an opportunity for organizations to gamify the business agility journey. This, in turn, can motivate individuals and teams to intensify their focus on the activities that will help them achieve their goals.
In addition, tying the improvement to changes in the Value Stream KPIs and LPM metrics connects the effort to the portfolio’s measures of overall success. In this way, the entire portfolio can focus on measurement and celebrate growth and positive outcomes.
Learn More
Heath, Chip, and Dan Heath. The Power of Moments: Why Certain Experiences Have Extraordinary Impact. Simon & Schuster, 2017.
Kotter, John P. Leading Change. Harvard Business Review Press, 2012.
en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
Last update: 24 October 2022