Applied Intuition’s V&V Handbook: Analytics and Reporting (Part 3)

August 2, 2022

Applied Intuition recently published its verification and validation (V&V) handbook. Our three-part blog post series highlights different aspects of the handbook. Part 1 explained what V&V efforts typically look like at different stages of advanced driver-assistance systems (ADAS) and automated driving systems (ADS) development. Part 2 covered industry practices around scenario creation and test execution. This third and final part of our series discusses how autonomy programs typically measure coverage and analyze their system’s performance depending on their development stage. Keep reading to learn more about this topic, or download our full V&V handbook below.

Read Applied Intuition’s V&V handbook

Analytics and reporting are part of every autonomy program’s V&V efforts. All programs should define and measure coverage (i.e., what the autonomous system has been tested on so far). They must also analyze their system’s performance to inform future feature development and build a safety case.

Defining and Measuring Coverage

Coverage from early to late-stage development

Autonomy programs should formally measure coverage to demonstrate their comprehensive V&V efforts. Teams can track two dimensions of coverage: 1) known versus unknown information (i.e., scenarios that teams know, or do not know, should be tested) and 2) covered versus uncovered information (i.e., scenarios that have, or have not, already been tested). Typically, early-stage autonomy programs focus more on known information, while later-stage programs also focus on unknown information (Figure 1).

Figure 1: The information (known/unknown and covered/uncovered) autonomy programs should measure by V&V stage
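The quadrants in Figure 1 can be expressed as set operations over scenario labels. The sketch below is a minimal illustration with hypothetical scenario names (not from the handbook); note that the fourth quadrant, unknown and uncovered scenarios, cannot be enumerated directly, which is why later-stage programs invest in surfacing it.

```python
# Hypothetical scenario sets (names are illustrative, not from the handbook).
known = {"cut_in", "merge", "pedestrian_crossing"}   # scenarios we know should be tested
tested = {"cut_in", "construction_zone"}             # scenarios that have been tested

known_covered = known & tested      # known and already tested
known_uncovered = known - tested    # known but not yet tested
unknown_covered = tested - known    # surfaced by testing but not yet catalogued
# The unknown-and-uncovered quadrant cannot be listed directly; it is what
# exploratory testing and ODD analysis try to shrink over time.

print(known_covered)    # {'cut_in'}
print(known_uncovered)  # {'merge', 'pedestrian_crossing'}
print(unknown_covered)  # {'construction_zone'}
```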

Measuring coverage

To measure coverage, autonomy programs first need to define their operational design domain (ODD). They can then calculate coverage as the ratio of known and covered information to the total space of possible situations the system might encounter.

Autonomy programs can measure coverage at different levels of granularity—from the scenario and requirements levels up to the entire ODD. Early-stage autonomy programs might measure coverage as a simple count of the number of tests for each scenario category. Later-stage programs usually move on to calculating a comprehensive, statistical measure of coverage (Figure 2).
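The early-stage approach above (a count of tests per scenario category) can be sketched in a few lines. This is a minimal illustration under assumed inputs; the category names and the function `coverage_report` are hypothetical, and real programs would draw test records from their test-management tooling.

```python
from collections import Counter

# Hypothetical executed tests, each tagged with a scenario category
# (category names are illustrative, not from the handbook).
executed_tests = [
    "cut_in", "cut_in", "pedestrian_crossing",
    "unprotected_left_turn", "cut_in", "pedestrian_crossing",
]

# Known scenario categories defined for the ODD (also illustrative).
known_categories = {
    "cut_in", "pedestrian_crossing", "unprotected_left_turn",
    "merge", "roundabout",
}

def coverage_report(tests, categories):
    """Count tests per known category and compute the share of known
    categories with at least one test (a simple early-stage metric)."""
    hits = Counter(t for t in tests if t in categories)
    counts = {c: hits.get(c, 0) for c in sorted(categories)}
    ratio = sum(1 for n in counts.values() if n > 0) / len(categories)
    return counts, ratio

counts, ratio = coverage_report(executed_tests, known_categories)
print(counts)  # tests per known scenario category
print(ratio)   # 0.6: three of five known categories have at least one test
```

A later-stage program would replace the simple "at least one test" criterion with a statistical measure over parameterized scenario spaces, but the bookkeeping structure is similar.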

Figure 2: Recommended ways of measuring coverage by V&V stage

Importance of measuring coverage

For early-stage autonomy programs, the primary goal is feature development. The role of measuring coverage is to help identify and fill potential feature gaps. Thus, any coverage metric must help answer the following questions:

  • What are the most important features to develop?
  • Which features need the most work?
  • Which features are not being tested enough?

As an autonomy program reaches mid-stage development, its focus shifts toward creating a safety case. In this stage, the role of coverage shifts away from driving feature development and moves toward proving maturity and safety. Thus, the primary questions for coverage metrics become:

  • What additional work is required for a feature to be considered mature?
  • Are there any situations where specific feature behavior is unknown?

Late-stage autonomy programs focus on the uncovered unknowns. Here, the primary questions for coverage metrics are: 

  • How safe is the system given the known ODD information space?
  • How safe is the system in situations it has not yet encountered?

The “Defining and measuring coverage” section in our V&V handbook lays out specific steps autonomy programs can take to define coverage metrics and set up coverage analysis workflows. The section also outlines six benefits autonomy programs can achieve by defining and increasing coverage. These benefits include easier prioritization of features, better performance on rare subsets of the ODD, optimization of data collection and scenario creation, and easier identification of missing scenarios that still need to be tested. 

Analyzing Performance

The goal of performance analysis is to understand the conditions that an autonomous system can and cannot handle safely. Performance analysis also helps teams measure progressions and regressions relative to the previous software release. The following table shows common performance analysis processes for early-, mid-, and late-stage autonomy programs (Figure 3). All teams track key performance indicators (KPIs) and safety performance indicators (SPIs). Formal A/B testing becomes a point of emphasis for mid- and late-stage programs.

Figure 3: Performance analysis processes by V&V stage
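Release-over-release progression and regression tracking can be sketched as a per-KPI comparison. The KPI names, values, and the helper `compare_releases` below are all hypothetical; the point is only that each KPI needs an explicit improvement direction before deltas can be labeled.

```python
# Hypothetical KPI values for two software releases (names and numbers
# are illustrative, not from the handbook).
previous = {"min_time_to_collision_s": 2.1, "hard_brake_rate_per_km": 0.08}
current = {"min_time_to_collision_s": 2.4, "hard_brake_rate_per_km": 0.11}

# For each KPI, record whether a higher value is better.
higher_is_better = {
    "min_time_to_collision_s": True,
    "hard_brake_rate_per_km": False,
}

def compare_releases(prev, curr, direction):
    """Label each KPI as a progression or regression vs. the last release."""
    report = {}
    for kpi, better_high in direction.items():
        delta = curr[kpi] - prev[kpi]
        improved = delta > 0 if better_high else delta < 0
        report[kpi] = "progression" if improved else "regression"
    return report

report = compare_releases(previous, current, higher_is_better)
print(report)  # time-to-collision improved; hard-brake rate regressed
```

In practice, teams would also attach statistical significance tests and scenario-level breakdowns before declaring a regression, especially when feeding results into formal A/B testing.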

The “Analyzing performance” section in our handbook lays out specific steps that autonomy programs can take to conduct performance analysis throughout their development efforts. It also states three key benefits programs can achieve by measuring and assessing their system’s performance. These benefits include objective feature prioritization, faster development velocity, and contributions to key parts of the autonomy program’s safety case.


Coverage and performance analysis are important in every autonomy program’s V&V efforts. Autonomy programs can achieve several benefits by defining coverage according to their development stage and gradually moving to a comprehensive, statistical measurement of coverage. These benefits include prioritizing features more easily, achieving better performance on rare subsets of the ODD, optimizing data collection and scenario creation, and identifying missing scenarios. By evolving their performance analysis processes over time, programs can improve feature prioritization, increase development velocity, and build their safety case.

These are only some of the topics our V&V handbook covers. Download your free copy today to learn about safety framework best practices, the V&V lifecycle, requirements management and traceability, scenario creation, and test execution.

Download the full V&V handbook

Contact our engineering team if you have questions about this handbook or would like to learn more about Applied’s V&V platform Basis.