How AV and ADAS Systems Engineers Can Use Public Road Drive Data to Verify Requirements Automatically

September 13, 2021

As autonomous driving systems become increasingly complex, systems engineering teams need to define, verify, and validate a multitude of requirements. Because it has historically been difficult to use public road drive data for requirement verification, simulation has been the more common approach. This blog post discusses two existing approaches to verifying requirements with public road drive data and explains why engineering teams have found them challenging to use. It then introduces a new method called “scenario search” and explains how it automates the error-prone and time-consuming work involved in the existing approaches.

Why Use Public Road Drive Data to Verify Requirements?

Although most companies in the autonomous vehicle (AV) and advanced driver assistance systems (ADAS) industry use public road driving as one of their testing methods, they have been reluctant to use public road drive data for requirement verification. Simulation has been a more popular method to verify requirements because simulation tools make it easy to create specific scenarios, run pass/fail criteria against them, and create many variations of the same scenario to ensure coverage of long-tail events.

In addition to using simulation, systems engineering teams would benefit from an easy way to use public road drive data for requirement verification. Most teams have already recorded and stored a wealth of drive data that they rarely use except to analyze critical events such as disengagements. Let’s explore two existing ways to use public road drive data for requirement verification and why these methods haven’t caught on.

Using Public Road Drive Data to Verify Requirements: Two Approaches

Human operator tagging

In human operator tagging, the operator is told which situations to look for during real-world public road driving and tags events live as they encounter them. The systems engineering team then compares the tagged events against the initial requirement to see which ones pass or fail.

Figure 1: Public road drive data showing a vehicle cut-in (Credit: nuScenes)

For example, suppose a requirement states that the vehicle must maintain a certain distance after a slower vehicle cuts in front of it. The operator would then tag any situation in which another vehicle cuts into the vehicle’s lane (Figure 1).

Unfortunately, human operator tagging is only effective for requirement verification if systems engineers have defined their requirements ahead of public road driving. If human operators tag situations of interest to them without pre-defined requirements, they might miss some relevant events. As requirements for automated driving systems constantly evolve and change, human operator tagging is an unreliable approach to requirement verification. It is also prone to human error. If an operator misses a situation or tags it by accident, they aren’t able to reverse their decision or tag the event in hindsight.

Manual analysis

In addition to human operator tagging, manual analysis is another common approach to using public road drive data to verify requirements. With manual analysis, AV and ADAS engineering teams record and collect drive data as real-world public road driving occurs. To verify a requirement, systems engineers then download one drive at a time locally and develop a programmatic script that searches that drive for specific events. Once they find an instance of the desired event, engineers must manually review the result and verify whether it passes the given requirement. Going back to our example of a vehicle cut-in, systems engineers need to develop a script that identifies events in which the vehicle is driving and a slower vehicle cuts in front of it (see the sketch below). They then need to watch the identified events to verify whether the vehicle keeps the required distance each time.
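To make the manual workflow concrete, here is a minimal sketch of the kind of one-off script an engineer might write against a single downloaded drive log. The log structure, field names, thresholds, and the load_drive helper are hypothetical; real drive logs are usually vendor-specific formats that need their own parsing.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Tick:
    t: float            # timestamp (s)
    ego_lane: str       # lane ID occupied by the ego vehicle
    ego_speed: float    # ego speed (m/s)
    actor_lane: str     # lane ID occupied by the nearest lead actor
    actor_speed: float  # actor speed (m/s)
    gap: float          # longitudinal gap to that actor (m)

def find_cut_ins(ticks: List[Tick], max_gap_m: float = 15.0) -> List[float]:
    """Return timestamps where a slower actor enters the ego vehicle's lane."""
    hits = []
    for prev, cur in zip(ticks, ticks[1:]):
        entered_lane = prev.actor_lane != prev.ego_lane and cur.actor_lane == cur.ego_lane
        slower = cur.actor_speed < cur.ego_speed
        if entered_lane and slower and cur.gap <= max_gap_m:
            hits.append(cur.t)
    return hits

# One drive at a time: download the log, run the script, then watch each hit manually.
# ticks = load_drive("drive_2021_09_13.log")  # hypothetical loader for a local log file
# for t in find_cut_ins(ticks):
#     print(f"Review cut-in candidate at t={t:.1f} s")
```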

The problem with manual analysis is that engineers often need to download drive logs, develop scripts, and check results across thousands of hours of driving. Manual analysis is therefore a time- and resource-intensive approach.

A New Approach: Scenario Search to Use Public Road Drive Data for Automatic Requirement Verification

A new approach to using public road drive data for requirement verification is scenario search, a query functionality in Applied Intuition’s drive data exploration tool Data Explorer*. It allows systems engineers to curate relevant datasets for development and find safety-critical events for triage. Scenario search also lets engineers automatically find and analyze specific events in their public road drive data to verify given requirements. When teams use Data Explorer’s scenario search in tandem with Applied Intuition’s verification and validation tool Validation Toolset*, they can define and manage requirements more easily as well as track overall stack performance and progress towards validation.

How scenario search works

Figure 2: The five steps of Applied Intuition’s scenario search workflow in Data Explorer

1. Ingest drive logs

Every time a public road drive ends, Data Explorer ingests the new drive log and converts it into a stable format that includes semantic meaning for the vehicle’s and actors’ poses on a map. It then processes the available sensor data to categorize each event the vehicle experiences over the course of the drive. Data Explorer examines the data channels and applies a set of rules to generate scenario tags and extract scenario-specific metrics that describe different vehicle and actor behaviors. It then attaches these tags and metrics to log ticks as annotations, which makes it easy for engineers to query for a subset of events based on a given requirement.
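To picture the output of this step, the sketch below applies a simple rule to one ingested tick and returns the resulting tags and metrics as an annotation. The field names, tags, and rule are illustrative assumptions, not Data Explorer’s actual schema or rule set.

```python
# Illustrative rule-based tagging of one ingested log tick.
# Field names, tags, and thresholds are assumptions, not Data Explorer's schema.

def annotate_tick(tick: dict) -> dict:
    """Apply simple rules to a tick and return scenario tags plus metrics."""
    tags, metrics = [], {}
    for actor in tick["actors"]:
        same_lane = actor["lane_id"] == tick["ego"]["lane_id"]
        closing_speed = tick["ego"]["speed_mps"] - actor["speed_mps"]
        if same_lane and closing_speed > 0:
            tags.append("slower_lead_vehicle")
            metrics["ttc_s"] = actor["gap_m"] / closing_speed
            metrics["gap_m"] = actor["gap_m"]
    if tick["ego"]["speed_mps"] < 0.1:
        tags.append("ego_stopped")
    return {"tags": tags, "metrics": metrics}

tick = {
    "timestamp_s": 12.4,
    "ego": {"lane_id": "lane_142", "speed_mps": 13.2},
    "actors": [{"id": "a_07", "lane_id": "lane_142", "speed_mps": 9.0, "gap_m": 18.0}],
}
print(annotate_tick(tick))  # tags ['slower_lead_vehicle'], TTC ~4.3 s, gap 18.0 m
```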

2. Define or import requirements

The systems engineering team can either define their requirements in Validation Toolset, which directly integrates with Data Explorer, or they can formulate requirements in another tool and import them into Data Explorer. In our example of a vehicle cut-in, a requirement might state that the vehicle needs to keep a certain distance after a slower actor (i.e., another vehicle) performs a lane change into the vehicle’s lane within a certain time-to-collision (TTC) threshold.
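As an illustration, a machine-checkable version of this cut-in requirement might look something like the structure below. The schema, field names, and threshold values are assumptions made for the example, not Validation Toolset’s or Data Explorer’s actual import format.

```python
# Hypothetical declarative form of the cut-in requirement (schema is assumed).
cut_in_requirement = {
    "id": "REQ-CUT-IN-001",
    "description": "Maintain a safe gap after a slower vehicle cuts into the ego lane.",
    "match": {                       # which events scenario search should find
        "scenario_tag": "vehicle_cut_in",
        "max_ttc_s": 3.0,            # only cut-ins within this TTC threshold
        "actor_slower_than_ego": True,
    },
    "pass_criteria": {               # how each matched event is judged
        "ego_must_brake": True,
        "min_gap_m": 8.0,            # buffer distance the ego must maintain
    },
}
```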

3. Choose data sources and identify matching events

Next, Data Explorer automatically identifies matching events in the team’s drive logs. Unlike manual analysis, scenario search processes data in the cloud. This allows engineers to search for events across their entire drive log library as opposed to analyzing each drive one by one. Data Explorer needs to combine different data sources to accurately search for and annotate the right events for each requirement. Some requirements call for events that can be found using a single data source, for example, an offline computer vision algorithm. Other requirements match events that require a combination of data sources such as the vehicle pose, vehicle localization information, HD maps, open-source maps, the vehicle’s perception system, and external offboard perception systems.

Figure 3: Vehicle passing by a stopped bus (Credit: nuScenes)

For example, to find cases where the vehicle passes by a stopped bus (Figure 3), Data Explorer needs to find events where another actor is tagged as a bus. Here, it is best to either use the logged data from the onboard system or to run an offline computer vision algorithm on the vehicle’s forward-facing camera.

In our example with the vehicle cut-in, Data Explorer might look at one or multiple data sources depending on the quality of the stack. If the stack is of high enough quality, the offline perception system will be sufficient to detect events in which the vehicle is driving and a slower vehicle cuts in front of it. In other cases, Data Explorer might need to combine information from the offline perception system with information about the vehicle and actor poses (in order to know the position and speed of both vehicles) and HD map information (in order to know which lanes the vehicles are in). Data Explorer then automatically filters those events down to the cases where the TTC and the actor-relative velocity are below the specified thresholds to match the requirement.
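A rough sketch of this filtering step is shown below: candidate events, built from perception detections combined with ego and actor poses and HD-map lane assignments, are kept only when the actor-relative velocity and the TTC fall below the requirement’s thresholds. All field names, helper logic, and threshold values here are assumptions, not Data Explorer’s internals.

```python
# Sketch of filtering candidate cut-in events against the requirement's thresholds.
# Field names and thresholds are assumptions, not Data Explorer's internals.

def matches_requirement(event: dict, max_ttc_s: float = 3.0,
                        max_rel_vel_mps: float = -1.0) -> bool:
    """Keep events where the actor-relative velocity and the TTC are below thresholds."""
    rel_vel = event["actor_speed_mps"] - event["ego_speed_mps"]  # negative = actor slower
    if rel_vel > max_rel_vel_mps:                                # actor not slow enough
        return False
    ttc_s = event["gap_m"] / -rel_vel                            # time to collision at cut-in
    same_lane = event["actor_lane_id"] == event["ego_lane_id"]   # lane IDs from the HD map
    return same_lane and ttc_s <= max_ttc_s

candidates = [
    {"log": "drive_031", "t_s": 482.1, "ego_speed_mps": 14.0, "actor_speed_mps": 9.5,
     "gap_m": 11.0, "ego_lane_id": "lane_7", "actor_lane_id": "lane_7"},
]
matching_events = [e for e in candidates if matches_requirement(e)]  # TTC ~2.4 s -> kept
```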

Figure 4: Data Explorer’s scenario search feature automatically identifies events that match a given requirement.

4. Process events & verify requirements

Data Explorer now automatically processes the chosen events (Figure 4) and verifies which ones pass or fail the requirement. It is able to do so by looking at the time-series data associated with the event and then applying complex boolean or sequential logic that represents the requirement’s criteria. In our example, the pass/fail criteria might state that the vehicle should brake and maintain a certain buffer distance to the actor. Data Explorer processes the chosen events according to this rule and automatically computes “pass” or “fail” for each event.
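A minimal sketch of such sequential pass/fail logic, using the braking and buffer-distance criteria from the example, might look like the following. The thresholds and field names are illustrative assumptions rather than the product’s actual rule definitions.

```python
# Sequential pass/fail check over an event's time-series data.
# Thresholds and field names are illustrative assumptions.

def verify_event(ticks: list, min_gap_m: float = 8.0,
                 brake_decel_mps2: float = 1.0) -> str:
    """Pass if the ego brakes after the cut-in and then maintains the buffer distance."""
    # Step 1: find the first tick where the ego decelerates hard enough to count as braking.
    brake_idx = next(
        (i for i, t in enumerate(ticks) if t["ego_accel_mps2"] <= -brake_decel_mps2),
        None,
    )
    if brake_idx is None:
        return "fail"                 # the requirement expects a braking response
    # Step 2: from that point on, the gap to the cut-in actor must stay above the buffer.
    gap_held = all(t["gap_m"] >= min_gap_m for t in ticks[brake_idx:])
    return "pass" if gap_held else "fail"

# results = {e["t_s"]: verify_event(load_event_ticks(e)) for e in matching_events}  # hypothetical loader
```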

5. Act on results

After successfully verifying the given requirements using scenario search in Data Explorer, the systems engineering team can: a) triage and debug failure cases in Data Explorer, b) use positive results to build confidence in its safety case, c) look at its coverage across requirements in Validation Toolset to understand its progress towards validation, and d) generate a validation report listing all requirements and matching events in Validation Toolset. While Data Explorer’s scenario search feature helps systems engineering teams find and process individual events, Validation Toolset allows them to get an aggregate view of their stack performance and track their progress over time. Both tools alleviate the challenges we identified in existing approaches.

Conclusion

Public road drive data has historically been difficult to use for requirement verification because the existing approaches are either unreliable or demand time-consuming manual work. Scenario search is a new approach that provides a reliable, more time- and resource-efficient way to verify requirements using public road drive data. It allows systems engineers to quickly surface failing events to specific feature owners even if the operator didn’t tag those events during the drive. Systems engineers can thus take advantage of the wealth of data that they already collect but rarely use today. By making use of existing data to verify requirements, systems engineering teams can more efficiently analyze and validate their performance across all test environments, track progress over time, and build their safety case.

Contact our engineering team for a product demo of Data Explorer or Validation Toolset.

*Note: Data Explorer was previously known as Strada and Validation Toolset was previously known as Basis.