Verification and Validation of Autonomous Systems: Learnings From Tech.AD Europe 2022

May 12, 2022
1 min read

Applied Intuition hosted four World Café sessions at Tech.AD Europe this year. Our team led the discussions to share challenges and jointly gain new perspectives on verification and validation (V&V) and its importance for autonomy development (Figure 1).

Figure 1: Applied Intuition hosted several World Café sessions at Tech.AD Europe.

Autonomy programs face several challenges in ensuring the safety of autonomous systems. Our World Café sessions focused on the following key topics and takeaways:

Tech.AD Takeaways

Practice V&V continuously and iteratively throughout testing and development

Autonomy programs need to validate their safety claims continually, as catching bugs in the late stages of development will lead to costly redesigns and cause delays in public launches. Furthermore, the V-model approach from traditional automotive development assumes a prior understanding of the operational design domain’s (ODD’s) full complexity, which is unrealistic.

Since autonomy programs learn critical information about the ODD and its complexity throughout the development, testing, and operation of the autonomous system, teams should combine the traditional V-model with an agile approach. Rather than one large iteration, organizations can conduct several smaller iterations of the V-model. This way, teams can continuously use new information learned from virtual and real-world testing to refine original system requirements and ODD definitions.
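
The iterative cycle described above can be sketched as a simple loop. This is a toy illustration only; the requirement strings, ODD tags, and the `run_tests` stand-in are hypothetical:

```python
# Hypothetical sketch: several small V-model cycles instead of one big pass.
# Each cycle folds test findings back into the requirements and ODD definition.
requirements = {"keep lane within 0.3 m of center"}
odd = {"highway", "daytime"}

def run_tests(reqs, current_odd):
    """Stand-in for a round of virtual and real-world testing.

    Returns newly learned ODD conditions and requirements (hardcoded here)."""
    return {
        "new_odd": {"light rain"},
        "new_reqs": {"handle faded lane markings"},
    }

for cycle in range(3):  # three small iterations of the V-model
    findings = run_tests(requirements, odd)
    odd |= findings["new_odd"]            # redefine the ODD with what was learned
    requirements |= findings["new_reqs"]  # refine the original requirements
```

The point of the sketch is structural: testing output feeds back into the definition phase of the next cycle, rather than being a one-time gate at the end.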

Author a safety case that is more rigorous than existing regulatory guidelines 

Automated driving and advanced driver-assistance systems are regulated globally by frameworks still under development. As these frameworks are relatively lightweight, simply following existing regulatory guidelines will be insufficient to achieve rigorous safety. For example, meeting European New Car Assessment Programme (Euro NCAP) guidelines should be the baseline for testing requirements rather than being considered exhaustive.

Each autonomy program needs to publish its safety case—an evidence-backed, structured argument used to justify that the autonomous system in question can operate safely in its ODD. The safety case can leverage a mix of local and regional laws, such as United Nations (UN) Regulation No. 157 on automated lane-keeping systems, safety assessment programs, such as Euro NCAP, and standards, such as ISO 26262, ISO 21448, and UL 4600. Still, it is the responsibility of the autonomy program to provide substantial evidence for each claim in its safety argument. Authoring requirements and demonstrating that they are sufficient, non-contradictory, and fully covered is a key step in building a safety case.
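
One way to picture a safety case is as a tree of claims, where every leaf claim must be backed by evidence. The minimal sketch below checks for unsupported claims; the claim texts and evidence names are hypothetical, and real safety cases use richer argument notations:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A node in a safety argument: a claim supported by sub-claims or evidence."""
    text: str
    evidence: list[str] = field(default_factory=list)   # e.g., test reports, analyses
    subclaims: list["Claim"] = field(default_factory=list)

def unsupported(claim: Claim) -> list[str]:
    """Return the leaf claims that lack any backing evidence."""
    if not claim.subclaims:
        return [] if claim.evidence else [claim.text]
    gaps: list[str] = []
    for sub in claim.subclaims:
        gaps.extend(unsupported(sub))
    return gaps

# Hypothetical top-level claim for a lane-keeping system
case = Claim(
    "The system operates safely within its ODD",
    subclaims=[
        Claim("Lane keeping meets UN Regulation No. 157 requirements",
              evidence=["simulation report #12", "track test log #7"]),
        Claim("Fallback behavior is safe on sensor failure"),  # no evidence yet
    ],
)
print(unsupported(case))  # -> ['Fallback behavior is safe on sensor failure']
```

Any non-empty result flags a gap in the argument that still needs substantiating evidence before the case is complete.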

Define requirements and build out a sufficient number of scenarios to test and define success

The task of defining a comprehensive list of non-contradictory requirements is one of the most challenging aspects of autonomy validation. During our World Café sessions, many stakeholders expressed that it is difficult to know when an autonomy program has done enough testing to verify its requirements and cover its specific operational design domain. This is why requirements traceability is critical to measuring requirements coverage. Programs should define requirements with a pre-defined syntax rather than free-form text, which allows teams to better analyze requirements for contradictions and repetition. By defining a classification taxonomy for a specific ODD and labeling scenarios with that taxonomy, autonomy programs can better track how well their current scenario library covers an ODD and identify missing scenarios.
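
The taxonomy-based coverage tracking described above can be sketched in a few lines. The taxonomy tags and scenario names here are hypothetical:

```python
# Hypothetical ODD taxonomy: the conditions an in-scope scenario can exercise
odd_taxonomy = {"rain", "night", "cut-in", "pedestrian-crossing", "roundabout"}

# Each scenario in the library is labeled with tags from the taxonomy
scenario_library = {
    "highway_cut_in_dry": {"cut-in"},
    "urban_ped_night": {"night", "pedestrian-crossing"},
    "rain_merge": {"rain", "cut-in"},
}

# Coverage = taxonomy tags exercised by at least one scenario
covered = set().union(*scenario_library.values())
missing = odd_taxonomy - covered
coverage = len(covered & odd_taxonomy) / len(odd_taxonomy)

print(f"Coverage: {coverage:.0%}, missing: {sorted(missing)}")
# -> Coverage: 80%, missing: ['roundabout']
```

The same labeling makes gaps actionable: every tag in `missing` points directly at the scenarios that still need to be authored.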

Autonomy programs often find it difficult to determine which scenarios they should create. Once identified, creating those scenarios requires domain expertise and significant engineering resources. Teams can leverage external or communal scenario databases to facilitate the scenario creation process. These tools help ease the burden of building out an entire scenario library from scratch. Teams can then adjust the scenarios to fit an autonomy program’s unique ODD and use cases. By building out a sufficient volume of scenarios to fully cover their program requirements, teams can better understand their autonomy system’s safety performance.

Applied’s Approach

Autonomy programs worldwide trust Validation Toolset* to build their safety case and accelerate and manage each step of their V&V lifecycle. They can also accelerate autonomy development with Applied Test Suites—pre-defined suites of scenarios and evaluation criteria (Figure 2). Applied Test Suites include thousands of scenarios across different ODDs (highway, urban) and regulatory standards (UNECE ALKS, Euro NCAP, NHTSA) to ensure test validity and proper evaluation.

Figure 2: Validation Toolset provides AV programs with continuous validation and analysis workflows (left). Applied Test Suites help verify and validate ODDs such as dense urban situations (right).

Contact our team to learn more about Applied’s end-to-end verification and validation platform and how our products can help demonstrate the safety of autonomous systems.

*Note: Validation Toolset was previously known as Basis.