A Primer on ASAM OpenSCENARIO V2.0 and Its Importance for ADAS and AV Development

Abstract, logical, and concrete scenarios play an essential role in testing, validating, and certifying the safety of automated driving systems. OpenSCENARIO V2.0 will make it easier to create and transfer abstract scenarios between tools.
Sep 27, 2021

OpenSCENARIO V2.0 is slated for release in November 2021. The Association for Standardization of Automation and Measuring Systems (ASAM) is developing OpenSCENARIO V2.0 as a standard for describing scenarios with dynamic content in advanced driver-assistance systems (ADAS) and autonomous vehicle (AV) development. The following blog post explains the difference between abstract, logical, and concrete scenarios and the importance of OpenSCENARIO V2.0 for AV and ADAS development. It also describes how Applied Intuition’s verification and validation (V&V) tool Basis will allow simulation operations, testing, and validation teams to edit OpenSCENARIO V2.0 abstract scenarios as well as create logical scenarios.

Introduction

Abstract, logical, and concrete scenarios

A scenario is the dynamic content of a simulation in ADAS and AV testing and development. Scenarios play an essential role in testing, validating, and certifying the safety of automated driving systems. To verify a functional requirement, simulation operations, testing, and validation teams need to test it against a potentially enormous parameter space. By restricting this parameter space to the autonomous system’s operational design domain (ODD), teams can define the full test space of the system. The PEGASUS method recommends using different abstraction levels of scenarios to sufficiently test the autonomous system in the full test space.

For example, a functional requirement might state that an autonomous vehicle (the ego) should successfully perform an unprotected left turn despite oncoming traffic. To verify this requirement, teams start by creating a number of abstract scenarios. An abstract scenario is a human- and machine-readable, high-level description of a scenario. For example, an abstract scenario for an unprotected left turn might state that the ego is at a straight intersection with 1-3 lanes, is located in the left-most lane, and that its velocity could be faster or slower than surrounding traffic (Figure 1).

Figure 1: The PEGASUS method recommends using different abstraction levels of scenarios to verify a functional requirement.

Next, simulation operations, testing, and validation teams define the complete test space of relevant scenario parameters using logical scenarios, which describe parameter distributions. For example, a logical scenario might state that the vehicle’s possible speed for making an unprotected left turn is anywhere between 10-20 m/s, that the traffic speed is also between 10-20 m/s, and that the gap between the ego and the oncoming vehicle is between 20-40 m.

Finally, simulation operations, testing, and validation teams need to derive concrete scenarios from each logical scenario. A concrete scenario is an executable test with defined parameter values that teams can simulate (Figure 2) to determine whether it passes or fails the test criteria. For example, a concrete scenario might state that the ego and traffic speed are both 13 m/s and the gap between the ego and the oncoming vehicle is 30 m. To be useful for requirement verification, a concrete scenario needs to be deterministic, i.e., produce the same result every time the simulation runs.
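To make the relationship between the abstraction levels concrete, the derivation step can be sketched in a few lines of Python. This is a hypothetical illustration, not part of any standard or Applied API; the parameter names and ranges mirror the unprotected-left-turn example above:

```python
import random

# A logical scenario: named parameters with value ranges (illustrative sketch).
LOGICAL_SCENARIO = {
    "ego_speed_mps": (10.0, 20.0),      # possible ego speed for the left turn
    "traffic_speed_mps": (10.0, 20.0),  # oncoming traffic speed
    "gap_m": (20.0, 40.0),              # gap between ego and oncoming vehicle
}

def derive_concrete_scenarios(logical, n, seed=0):
    """Sample n concrete scenarios from a logical scenario.

    Fixing the random seed makes the derivation deterministic: the same
    call always yields the same concrete scenarios, so each derived test
    reproduces the same result on every simulation run.
    """
    rng = random.Random(seed)
    return [
        {name: round(rng.uniform(lo, hi), 2) for name, (lo, hi) in logical.items()}
        for _ in range(n)
    ]

for scenario in derive_concrete_scenarios(LOGICAL_SCENARIO, n=3):
    print(scenario)
```

Each sampled dictionary is one concrete scenario; pinning the seed is one simple way to satisfy the determinism requirement described above.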

Figure 2: Simulation of the ego performing an unprotected left turn

Benefits and drawbacks of creating abstract scenarios vs. logical scenarios

The approach mentioned above comes with benefits and drawbacks. Creating abstract scenarios and deriving logical and concrete scenarios from them is fast and scalable, but the resulting scenarios are often noisy. A different approach is to skip the creation of abstract scenarios. Here, simulation operations, testing, or validation teams directly create logical scenarios and derive concrete scenarios from them. This approach allows for more precision and detail, but it doesn’t scale as easily.

Benefits of creating abstract scenarios:

  • Speed: Testing a requirement in a full test space is time and resource intensive. By creating high-level abstract scenarios and letting verification and validation (V&V) tools automatically generate logical and concrete scenarios from those abstract scenarios, ADAS and AV companies can speed up the requirement verification process.
  • Scalability: The creation of high-level abstract scenarios allows simulation operations, testing, and validation teams to easily cover the entire parameter space for a specific requirement and to repeat the process for a multitude of other requirements.

Benefits of creating logical scenarios directly:

  • Precision: When creating logical scenarios, simulation operations, testing, and validation teams are able to define an exact parameter space and make fine-grained adjustments. This high level of precision helps ensure that the created scenarios are deterministic. It also allows teams to create variations of specific scenarios that they want to test or recreate from the real world.
  • Control: By creating logical scenarios directly, teams have greater control over excluding illegal or unrealistic scenarios, i.e., scenarios that are impossible or highly unlikely to occur in the real world. They can also avoid including uninteresting scenario combinations (Figure 3). This is beneficial since running the entire parameter space generated by an abstract scenario is computationally inefficient, especially if many of those scenarios are noisy and uninteresting combinations.
Figure 3: The creation of logical scenarios allows teams to exclude illegal, unrealistic, and uninteresting scenario combinations and only test combinations that are relevant for their ODD.
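One way to picture this filtering step is as a predicate applied to the full parameter grid. The sketch below is a hypothetical illustration (not Applied's implementation); the `is_relevant` check and its 1.5-second threshold are made-up assumptions standing in for a team's real ODD constraints:

```python
from itertools import product

# Full parameter grid an abstract scenario might generate (illustrative values).
ego_speeds = [10, 15, 20]      # m/s
traffic_speeds = [10, 15, 20]  # m/s
gaps = [20, 30, 40]            # m

def is_relevant(ego_speed, traffic_speed, gap):
    """Hypothetical plausibility check: reject combinations where the
    oncoming vehicle closes the gap too quickly for any turn to occur."""
    time_to_close = gap / max(traffic_speed, 1e-6)
    return time_to_close > 1.5  # keep only gaps larger than ~1.5 seconds

all_combinations = list(product(ego_speeds, traffic_speeds, gaps))
relevant = [c for c in all_combinations if is_relevant(*c)]

print(f"{len(relevant)} of {len(all_combinations)} combinations kept")
```

Even this toy constraint discards a third of the grid, which hints at the compute savings of pruning unrealistic combinations before simulation.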


Successful ADAS and AV companies use each of these two approaches when appropriate. We will explore Applied Intuition’s take on abstract vs. logical scenario creation later in this blog post (see “Applied’s Approach”).

Even with the right approach at hand, one of the most challenging parts of creating and testing scenarios remains. Many ADAS and AV companies use different tools for different parts of the requirement verification workflow. For example, systems engineering teams might use one tool to define functional requirements, scenario creators might use a separate tool to write abstract or logical scenarios, and test engineers might use a third tool to simulate concrete scenarios. Transferring and translating scenarios between different tools can be very difficult and time-consuming. ASAM’s OpenSCENARIO standard helps alleviate this problem.

Limitations of ASAM OpenSCENARIO V1.0

OpenSCENARIO V1.0 was released in March 2020. It is a standard that allows simulation operations, testing, and validation teams to describe concrete scenarios with dynamic content in a standardized way using an XML format. OpenSCENARIO V1.0 has been useful for those teams because it is vendor-agnostic and facilitates the transfer of concrete scenarios between tools.

However, as OpenSCENARIO V1.0 only allows ADAS and AV companies to describe concrete scenarios but not abstract or logical scenarios, its benefit has been limited to lower-level autonomy use cases. Simulation operations, testing, and validation teams can use OpenSCENARIO V1.0 to create one-off concrete scenarios from scratch and execute them to test whether they pass or fail a functional requirement. But to expand their test coverage and build a safety case for their automated driving systems, they would need to manually create all possible concrete scenarios covering a full test space. This is time-consuming, error-prone, and in practice impossible. In the end, the only feasible solution is to either programmatically derive concrete scenarios from abstract scenarios or to define scenarios at the logical level. As OpenSCENARIO V1.0 doesn’t provide a standard for abstract and logical scenarios, companies then run into the same cross-tool transfer problem that we described above.

Another drawback of OpenSCENARIO V1.0 is that it follows an XML format, which is not optimized for human readability (Figure 4a). As a result, many simulation software providers develop their own custom scenario languages that are easier to read (Figure 4b).

Figure 4a: Description of the initial speed of an ego vehicle using OpenSCENARIO V1.0 (XML format)
Figure 4b: The same description as Figure 4a but using Applied’s Simian scenario language (YAML format)
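To give a sense of the verbosity Figure 4a refers to, an initial-speed definition in OpenSCENARIO V1.0 looks roughly like the following abbreviated excerpt. Element names follow the V1.0 schema as published by ASAM, though the exact attributes and surrounding structure vary with the full scenario file:

```xml
<Init>
  <Actions>
    <Private entityRef="Ego">
      <PrivateAction>
        <LongitudinalAction>
          <SpeedAction>
            <SpeedActionDynamics dynamicsShape="step" value="0" dynamicsDimension="time"/>
            <SpeedActionTarget>
              <AbsoluteTargetSpeed value="13.0"/>
            </SpeedActionTarget>
          </SpeedAction>
        </LongitudinalAction>
      </PrivateAction>
    </Private>
  </Actions>
</Init>
```

Six levels of nesting to express "the ego starts at 13 m/s" illustrates why more concise, purpose-built scenario languages emerged.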

ASAM OpenSCENARIO V2.0: High Levels of Abstraction and Inter-Tool Compatibility

OpenSCENARIO V2.0 is slated for release in November 2021. It will exist in parallel with OpenSCENARIO V1.X, with plans to merge the two at some point in the future. OpenSCENARIO V2.0 aims to address OpenSCENARIO V1.0’s limitations in inter-tool compatibility and readability.

To improve inter-tool compatibility, OpenSCENARIO V2.0 will support abstract, logical, and concrete scenarios. This will make it easier to transfer scenarios of different abstraction levels between tools. It will also allow OpenSCENARIO V2.0 to cover a wider range of AV development use cases. Simulation operations, testing, and validation teams will be able to verify functional requirements by defining abstract scenarios and deriving logical and concrete scenarios from those abstract scenarios. All abstraction levels will use the same standardized language and will be easily transferable between tools.

OpenSCENARIO V2.0 will provide a domain-specific language (DSL) to make scenarios easier to read. A DSL is a programming language that has limited expressiveness and is focused on a particular domain. OpenSCENARIO V2.0 will thus use a language that is specifically designed and optimized for describing scenarios, rather than relying on the XML format of its predecessor.

Applied’s Approach

Once OpenSCENARIO V2.0 is released, Applied’s end-to-end V&V tool Basis will allow simulation operations, testing, and validation teams to create, view, and edit OpenSCENARIO V2.0 abstract scenarios. This will support teams that are planning to create OpenSCENARIO V2.0 abstract scenarios or are looking for a way to transfer abstract scenarios between tools. By supporting OpenSCENARIO V2.0, Basis will be compatible with other tools in the industry, such as simulators, that support the OpenSCENARIO V2.0 standard.

Alongside supporting OpenSCENARIO V2.0 abstract scenarios, Basis will allow simulation operations, testing, and validation teams to edit auto-generated logical and concrete scenarios using Applied’s Simian scenario language. It will also continue to enable the creation of logical and concrete scenarios from scratch without creating abstract scenarios beforehand.

Here is the workflow for creating OpenSCENARIO V2.0 abstract scenarios in Basis (Figure 5):

Figure 5: Basis UI for OpenSCENARIO V2.0 abstract scenarios


  • Systems engineers can define functional requirements in Basis.
  • From a functional requirement, scenario creators can create or generate OpenSCENARIO V2.0 abstract scenarios.
  • From these abstract scenarios, Basis automatically generates logical and concrete scenarios. Basis continues to support the Simian scenario language for logical and concrete scenarios.
  • Test engineers can easily edit auto-generated concrete scenarios using the Simian scenario language.
  • Basis links the auto-generated concrete scenarios to functional requirements for end-to-end traceability.
  • Applied’s verification packs complement the process to easily cover a range of scenarios that are common or relevant to regulatory requirements and to validate the autonomous system in a specific ODD.

Reach out to our engineering team to learn more about Applied’s plans for supporting OpenSCENARIO V2.0.