All models are wrong, but some are useful.

—George E. P. Box, “Robustness in the Strategy of Scientific Model Building” [1]

Model-Based Systems Engineering

Definition: Model-Based Systems Engineering (MBSE) is the practice of developing a set of related models that help define, design, simulate, and document a system under development.

MBSE applies system modeling as a cost-effective way to explore and document system characteristics. By testing and validating system characteristics early, models facilitate timely learning of properties and behaviors, enabling fast feedback on requirements and design decisions. Models provide an efficient way to explore, update, and communicate system aspects to stakeholders while significantly reducing or eliminating dependence on traditional documents.

MBSE historically focused on expressing and recording requirements, design, analysis, and verification information [2]. As modeling technology matures, it provides even more value by accelerating learning (e.g., simulation) and providing better insights into the physical world (e.g., digital twins) [3]. Both are important for evolving live systems and enabling Enterprise Solution Delivery.

Although models are not a perfect representation of a system, they provide knowledge and feedback sooner and more cost-effectively than implementation alone. They also allow the simulation of complex system and system-of-systems interactions with appropriate fidelity to accelerate learning. In practice, engineers use models to gain knowledge and to guide system implementation. In some cases, models are used to build the actual implementation directly (e.g., electrical CAD, mechanical CAD).

Details

Lean practices support fast learning through a continuous flow of development work to gain fast feedback on decisions. MBSE is a discipline and a Lean tool that allows engineers to quickly and incrementally learn about the system under development before the cost of change gets too high.

Models are used to explore the structure, behavior, and operational characteristics of system elements, evaluate design alternatives, and validate assumptions faster and earlier in the system life cycle. This is particularly useful for large, complex systems such as satellites, aircraft, and medical systems, where the solution must be proven before, for example, launching into space or connecting to the first patient. Models also record and communicate decisions that will be useful to others. This information serves as documentation for Compliance, impact analysis, and other needs. In SAFe, model information is recorded as part of the Solution Intent, most often created through the work of Enablers.

The following sections provide guidance on adopting MBSE.

Figure 1. MBSE accelerates learning

Explore Alternatives and Learn Faster

Modeling supports fast learning cycles (see SAFe Principle #4 – Build incrementally with fast, integrated learning cycles) and helps mitigate risks early in the product life cycle. Models facilitate early learning by testing and validating specific system characteristics, properties, or behaviors, enabling fast feedback on design decisions.

Models come in many forms: dynamic models, solid models, graphs, equations, simulations, and prototypes. As Figure 2 illustrates, each provides a different perspective on one or more system characteristics that enable the creation of future Capabilities and Features.

Figure 2. Models and learning cycles

Models may predict performance (response time, reliability) or physical properties (heat, radiation, strength). Or they may explore design alternatives for user experience or response to an external stimulus. But models aren’t the only approach to exploring design alternatives. Design Thinking and user-centered design are synergistic with MBSE and also help validate assumptions sooner.
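To make this concrete, a simple analytical model can already answer performance questions before any hardware exists. The sketch below is illustrative only: it assumes the system can be approximated as a single M/M/1 queue, and the arrival and service rates are hypothetical placeholders, not values from any real system.

```python
# Illustrative sketch: a minimal analytical performance model, assuming the
# system can be approximated as a single M/M/1 queue. Parameter values are
# hypothetical placeholders.

def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time (waiting + service) for an M/M/1 queue, in seconds."""
    if arrival_rate >= service_rate:
        raise ValueError("Unstable system: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Explore a design alternative: what does doubling processing capacity buy us?
baseline = mm1_response_time(arrival_rate=80.0, service_rate=100.0)  # requests/second
upgraded = mm1_response_time(arrival_rate=80.0, service_rate=200.0)
print(f"Baseline: {baseline * 1000:.1f} ms, upgraded: {upgraded * 1000:.1f} ms")
```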

Models Bridge Physical and Virtual Worlds

Digital twin technology supports MBSE. A digital twin is a virtual instance of a physical system synchronized with operational data, such as performance, functionality, maintenance, NFRs, and system health. Integrating the physical and virtual worlds validates virtual models and helps engineers improve system analysis, predict failures or downtime, and provide more accurate maintenance schedules.
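As a rough illustration of this synchronization, the sketch below models a digital twin of a single pump that ingests vibration telemetry and flags when maintenance looks advisable. The class, fields, and threshold are hypothetical and do not refer to any particular digital twin platform.

```python
# A minimal sketch of a digital twin kept in sync with operational data.
# All names and the vibration threshold are hypothetical illustrations.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class PumpTwin:
    """Virtual instance of a physical pump, updated from field telemetry."""
    vibration_history: list = field(default_factory=list)
    vibration_limit: float = 4.5  # hypothetical alert threshold (mm/s RMS)

    def ingest_telemetry(self, vibration_mm_s: float) -> None:
        # Keep the virtual instance synchronized with the physical asset.
        self.vibration_history.append(vibration_mm_s)

    def maintenance_recommended(self) -> bool:
        # Simple health rule: sustained vibration above the limit suggests
        # scheduling maintenance before an unplanned failure.
        recent = self.vibration_history[-10:]
        return len(recent) == 10 and mean(recent) > self.vibration_limit
```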

Digital twins support business agility by better predicting when future enhancements and product upgrades will be necessary, making Solution Roadmaps more accurate. They can also uncover new business opportunities by making learning faster, cheaper, and more reliable. See [4] for an overview of digital twins.

Support Compliance and Impact Analysis

Historically, system decisions about requirements, designs, tests, interfaces, allocations, and more have been maintained in disparate sources, including documents, spreadsheets, and domain-specific design tools. This makes system documentation difficult to maintain and understand. MBSE takes a holistic approach to managing system information and data relationships, treating all of this information as a model. Figure 3 shows a generic structure linking information from multiple types of models.

Figure 3. Linking cross-domain models

Traceability helps engineers quickly and reliably understand how a change to the system, or a change within a single domain, affects other parts of the system and its requirements. For example, Agile Teams and System Architects use model information to support the Epic review process. Traceability also provides objective evidence to address many regulatory and contractual compliance concerns. In addition, traceability can integrate digital twins into a digital thread, providing connectivity across the system lifecycle.
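As a simplified illustration of impact analysis over linked model elements, the sketch below stores traceability links as a plain dictionary and walks them transitively. The element identifiers and link structure are hypothetical, not any specific tool's schema.

```python
# Illustrative sketch: cross-domain traceability links and a simple
# impact-analysis query. Element identifiers are hypothetical.
trace_links = {
    # source element -> downstream elements that depend on it
    "REQ-042 Max braking distance": ["DES-007 Brake controller", "TEST-113 Wet-surface braking"],
    "DES-007 Brake controller": ["ECAD-021 Controller board", "SW-310 Brake control loop"],
}

def impacted_elements(changed: str, links: dict, seen: set | None = None) -> set:
    """Return every downstream element affected by a change, following links transitively."""
    if seen is None:
        seen = set()
    for target in links.get(changed, []):
        if target not in seen:
            seen.add(target)
            impacted_elements(target, links, seen)
    return seen

# Which elements must be reviewed if the braking-distance requirement changes?
print(impacted_elements("REQ-042 Max braking distance", trace_links))
```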

A Lean, continuous-change environment amplifies the need for related models. While manual solutions to manage related information for coverage and compliance may suffice in a phase-gate process, they will be quickly overwhelmed in an Agile environment that encourages frequent and constant change.

Generate Documentation

Many product domains require documents for regulatory compliance (for example, FAA, FDA) or contractual obligations (e.g., CDRLs in government contracting). With an MBSE approach to system development, models contain most, if not all, of the information required for compliance and can be used to generate the necessary objective evidence. Models act as a single source of truth, ensuring consistency across the many generated documents. Models can also produce documents targeted at different stakeholders, who may have their own system perspectives or who should only see subsets of the information (e.g., Suppliers).
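A minimal sketch of this idea appears below: model data held in one place is rendered into stakeholder-specific views. The data structure, field names, and audiences are hypothetical, and a real implementation would typically pull this data from the modeling tool and a document template.

```python
# Illustrative sketch: generate stakeholder-specific documents from model
# data held in a single source of truth. All identifiers are hypothetical.
requirements = [
    {"id": "REQ-001", "text": "Infusion rate shall be accurate to within 5%.",
     "verified_by": "TEST-021", "audience": ["regulator", "supplier"]},
    {"id": "REQ-002", "text": "The UI shall require confirmation of dose changes.",
     "verified_by": "TEST-034", "audience": ["regulator"]},
]

def generate_document(audience: str) -> str:
    """Render only the requirements the given stakeholder should see."""
    lines = [f"Requirement compliance report for: {audience}"]
    for req in requirements:
        if audience in req["audience"]:
            lines.append(f"{req['id']}: {req['text']} (verified by {req['verified_by']})")
    return "\n".join(lines)

print(generate_document("supplier"))  # Suppliers see only their subset
```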

While all products and programs will likely require formal documents, System Engineers are encouraged to work directly with Customers and/or regulatory agencies on the minimum set sufficient to meet their obligations. The source of most, if not all, of the information resides in engineering models that can and should be used, where possible, for inspections and formal reviews.

Build Model Quality In

Because of the diversity and number of people contributing information, models face a challenge: without proper oversight, continuous changes made by many contributors can erode quality. The System Architect works with teams to define quality practices, such as model standards and model testing, and to ensure they are followed.

The quality practices discussed below facilitate early learning cycles. As SAFe notes, “You can’t scale crappy code,” and the same is true for system models. Quality practices and strong version management allow engineers to confidently and frequently make model changes and contribute to the system’s intent.

Model Standards

Model standards help control quality and guide teams on how best to model. They may include:

  • What information should be captured (including information necessary for compliance)
  • Modeling notations (e.g., SysML) and parts of those notations (e.g., use case) to use or exclude
  • Where modeling information for solution and subsystem elements should be placed
  • Meta-information that should be stored with different types of model elements
  • Links within the model or with other cross-discipline models
  • Common types and dimensions used across the system
  • Modeling tool properties and configuration
  • Collaboration practices and proper use of any version control system(s)

If documents are generated from the models, the document templates should be defined early, as they will influence many of these decisions. System designers need to know where to store the model elements and any metadata or links that may be used for queries, document generation, or compliance. As a best practice, create a high-level, full-system skeleton model early to validate these usage scenarios. Teams then populate the model as the system evolves.
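As an illustration only, the sketch below represents a tiny skeleton model as a plain data structure and checks that each package carries the meta-information the standard requires. The package names, metadata fields, and rule are hypothetical; in practice this structure would live in the modeling tool and be checked there.

```python
# Illustrative sketch: a high-level skeleton model with required
# meta-information, checked before teams populate the details.
REQUIRED_METADATA = {"owner", "compliance_level"}

skeleton = {
    "System/Capabilities": {"owner": "solution-mgmt", "compliance_level": "A"},
    "System/Subsystem-Avionics": {"owner": "avionics-team", "compliance_level": "B"},
    "System/Subsystem-Propulsion": {"owner": "propulsion-team"},  # metadata gap, reported below
}

def check_metadata(model: dict) -> list:
    """Report packages that violate the agreed model standard."""
    return [
        f"{package} is missing {REQUIRED_METADATA - set(meta)}"
        for package, meta in model.items()
        if not REQUIRED_METADATA <= set(meta)
    ]

print(check_metadata(skeleton))
```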

Create Testable and Executable Models

Test-First practices help teams build quality into their products early, facilitating the continuous small changes we find in Agile software development. Test-first creates a rich suite of cases that allow developers to make changes more reliably without causing errors elsewhere in the system. Rich, automated tests are critical to building a Continuous Delivery Pipeline.

Lean practices encourage testable, executable models (when feasible) to reduce the waste associated with downstream errors. Models should be testable against whatever assessment criteria exist for the domain or discipline:

  • Mechanical models test for physical and environmental issues
  • Electrical models test for logic
  • Software models test for boundary conditions and anomalies
  • Executable system models test for system behavior

Most tools provide the ability to check models or to create scripts that can iterate across the models and identify anomalies.

Testing requirements models. Textual requirements are used in almost every system and, in current practice, are typically reviewed manually. The Agile method of Behavior-Driven Development (BDD) defines automated acceptance tests for Features and Stories. These tests persist and continually validate the solution as it emerges. While helpful for testing Agile backlog items, BDD has seen limited use at large scale. The recommendation, however, is to automate where possible and to treat the requirement and its tests as the same artifact.
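As a sketch of treating the requirement and its test as one artifact, the example below expresses a braking-distance requirement as a plain pytest test. The braking_distance function is a hypothetical stand-in for an executable system model or simulation, and the numbers are illustrative.

```python
# Illustrative sketch: a requirement written as an automated acceptance test.
# braking_distance stands in for an executable system model and is hypothetical.

def braking_distance(speed_kmh: float, surface: str) -> float:
    """Placeholder for the system model under test (simple friction model)."""
    friction = 0.4 if surface == "wet" else 0.7
    speed_ms = speed_kmh / 3.6
    return speed_ms ** 2 / (2 * 9.81 * friction)

def test_braking_distance_on_wet_surface_meets_requirement():
    # Given the vehicle travels at 100 km/h on a wet surface
    # When the driver applies full braking
    # Then the braking distance shall not exceed 110 m
    assert braking_distance(100, "wet") <= 110
```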

Testing analysis and design models. Designs represented in models can be tested using tools with static analyzers, or 'checkers,' that identify deviations from what is standard, regular, or expected. Teams may add their own rules for model organization, modeling conventions and standards, required meta-information, and so on. If analyzers don't exist, scripts can iterate over the models to look for problems in the static model.
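The sketch below illustrates one such team-added rule: iterate over a (hypothetical) export of the static model and flag elements that break agreed naming and documentation conventions. The element structure and rules are assumptions for illustration, not any specific tool's API.

```python
# Illustrative sketch: a custom check over a static model export.
# The element structure and conventions are hypothetical.
import re

model_elements = [
    {"type": "block", "name": "BrakeController", "description": "Closed-loop brake control"},
    {"type": "block", "name": "brake_actuator", "description": ""},  # breaks both rules below
    {"type": "use_case", "name": "Apply Emergency Braking", "description": "Driver-initiated stop"},
]

def check_conventions(elements: list) -> list:
    """Team rules: blocks use UpperCamelCase names and carry a description."""
    findings = []
    for element in elements:
        if element["type"] == "block":
            if not re.fullmatch(r"[A-Z][A-Za-z0-9]*", element["name"]):
                findings.append(f"{element['name']}: block name is not UpperCamelCase")
            if not element["description"]:
                findings.append(f"{element['name']}: block has no description")
    return findings

for finding in check_conventions(model_elements):
    print(finding)
```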

Models can also be tested dynamically. Models from the various engineering disciplines have their own solutions for assessing quality, which should be leveraged as part of the testing practice.

Testing traceability. To support reliable queries, document generation, and compliance, models must conform to the agreed linking structure.
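A minimal sketch of such a check appears below: every requirement must carry at least one verifying test link. The link structure is hypothetical; a real check would query the modeling or lifecycle-management tool.

```python
# Illustrative sketch: verify the linking structure by reporting requirements
# that lack a verifying test. Identifiers and structure are hypothetical.
requirement_links = {
    "REQ-001": {"verified_by": ["TEST-021"]},
    "REQ-002": {"verified_by": []},  # coverage gap, reported below
}

def untested_requirements(links: dict) -> list:
    return [req for req, rel in links.items() if not rel["verified_by"]]

print(untested_requirements(requirement_links))  # -> ['REQ-002']
```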

Document generation. Although possibly redundant with the traceability scripts above, document generation may include scripts that verify the model is appropriately structured and that all the data needed by the document templates exists. With extensive models, debugging a script is often easier than debugging or updating the document template.


Learn More

[1] Box, George E. P. “Robustness in the Strategy of Scientific Model Building”, in Launer, R. L.; Wilkinson, G. N. (eds.). Robustness in Statistics. Academic Press, 1979.

[2] INCOSE Systems Engineering Handbook. SEBoK, 2023. Retrieved September 15, 2023, from https://www.sebokwiki.org/wiki/INCOSE_Systems_Engineering_Handbook

[3] Madni, Azad M., Carla C. Madni, and Scott D. Lucero. Leveraging Digital Twin Technology in Model-Based Systems Engineering. Systems, 2019. https://www.mdpi.com/2079-8954/7/1/7/pdf

[4] Hofbauer, Günter, Anita Sangl, and Sonja Engelhardt. The Digital Transformation of the Product Management Process: Conception of Digital Twin Impacts for the Different Stages. International Journal of Innovation and Economic Development, 2019. Retrieved September 15, 2023, from https://doi.org/10.18775/ijied.1849-7551-7020.2015.52.2006

Last update: 9 October 2023