Large-Scale Simulation and Validation With CARLA

February 8, 2022

The CARLA simulator is a popular open-source simulation tool for advanced driver-assistance systems (ADAS) and autonomous vehicles (AVs). It allows development teams to run virtual tests and evaluate their prediction, planning, and control stacks. Because CARLA is open source, it is flexible and readily available for anyone to use. But a core simulation tool alone does not include all of the features necessary to deploy safe AV systems. In addition to running individual virtual tests, AV teams need to scale their simulations to thousands per day to catch regressions in their stack. They also need to validate the AV stack against a multitude of system requirements and safety protocols. This is where Applied Intuition's products can help.

CARLA users can leverage Applied Intuition's continuous integration (CI) and verification & validation (V&V) tools without using Applied Intuition's core simulator, Object Sim*. Applied Intuition's tools integrate with other simulators like CARLA and complement their functionality, allowing AV teams to scale their simulations and validate their stacks efficiently.

This blog post explores how AV teams can use Applied Intuition's CI and V&V tools, Cloud Engine* and Validation Toolset*, together with CARLA to run large-scale simulations and validate their AV stack. It outlines a workflow that allows teams to manage their entire simulation and validation life cycle (Figure 1).

Figure 1: Workflow for simulation and validation with CARLA, Cloud Engine, and Validation Toolset

Managing Requirements and Scenarios

To verify and validate an AV stack, development teams must ensure that it meets specific safety requirements. Teams may use simulation tools that run hundreds of scenarios to assess and verify system safety, but they also need a solution to analyze performance and trace results back to each safety requirement.

With Applied Intuition's V&V tool Validation Toolset, AV development teams can create and execute scenarios in CARLA, analyze test coverage and performance, and trace results back to safety requirements in a unified workflow. Validation Toolset supports the OpenSCENARIO (OSC) V1.1 and OSC V2.0 open standards for scenario editing and management. This way, teams can create and edit OSC scenarios at scale in Validation Toolset (Figures 2a, b) and then execute those scenarios in CARLA (Figure 2c).

Figure 2a: Validation Toolset graphical user interface (GUI) to create and edit OSC V1.1 scenarios
Figure 2b: Validation Toolset GUI to create and edit OSC V2.0 scenarios
Figure 2c: Validation Toolset GUI to execute scenarios in CARLA
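
For readers who want to try this flow locally, CARLA's open-source ScenarioRunner executes OSC V1.1 (.xosc) files against a running CARLA server. Below is a minimal Python sketch that batch-runs a folder of exported scenarios; the folder path is an illustrative assumption, and it presumes scenario_runner.py and a CARLA server on the default port are already set up.

```python
import subprocess
from pathlib import Path

# Hypothetical folder of OSC V1.1 scenarios exported from a scenario editor.
SCENARIO_DIR = Path("scenarios/exported")

def run_scenario(xosc_file: Path) -> None:
    """Execute one OpenSCENARIO file against a local CARLA server using
    CARLA's open-source ScenarioRunner (the CARLA server must already be
    running on its default port)."""
    result = subprocess.run(
        [
            "python", "scenario_runner.py",
            "--openscenario", str(xosc_file),  # ScenarioRunner's flag for OSC V1.1 files
            "--output",                        # print the evaluation criteria to stdout
        ],
        capture_output=True,
        text=True,
    )
    # ScenarioRunner prints a table of pass/fail criteria; a non-zero return
    # code indicates the run itself failed to execute.
    print(result.stdout)
    if result.returncode != 0:
        print(f"ERROR while running {xosc_file.name}:\n{result.stderr}")

if __name__ == "__main__":
    for xosc in sorted(SCENARIO_DIR.glob("*.xosc")):
        run_scenario(xosc)
```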

Running Large-Scale Simulations in the Cloud

AV teams that experiment with or develop new features might find it beneficial to run individual simulations locally. When entire teams use simulation to validate an AV stack, however, they need to run not just one-off tests but hundreds or thousands of simulations on a single merge request to avoid regressions. Teams also need a way to scale these simulations with high performance and low latency to preserve developer velocity. In practice, this can only be achieved by running simulations in the cloud.

AV teams can use CARLA together with Applied Intuition's CI tool, Cloud Engine, to run large-scale simulations. Cloud Engine provides test automation that links easily to any AV team's CI system. This way, Cloud Engine can kick off simulations automatically whenever a code change occurs, or run simulations at recurring intervals.
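
Cloud Engine's API surface isn't covered in this post, but the shape of such a CI hook is simple. The sketch below shows a hypothetical post-merge step that triggers a simulation batch over REST; the endpoint URL, payload fields, and environment variables are all illustrative assumptions, not Cloud Engine's actual API.

```python
import os
import requests  # third-party; pip install requests

# All endpoint and payload names below are hypothetical -- they illustrate the
# shape of a CI hook, not a real (non-public) Cloud Engine API.
API_URL = "https://simulation.example.com/api/v1/batches"  # placeholder URL
API_TOKEN = os.environ["SIM_API_TOKEN"]                    # injected by the CI system

def trigger_batch(commit_sha: str, scenario_suite: str) -> str:
    """Kick off a simulation batch for one merge request and return its ID."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={
            "commit": commit_sha,     # stack version under test
            "suite": scenario_suite,  # e.g., the regression suite for this subsystem
            "simulator": "carla",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["batch_id"]

if __name__ == "__main__":
    batch_id = trigger_batch(os.environ["CI_COMMIT_SHA"], "regression-nightly")
    print(f"Started simulation batch {batch_id}")
```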

Teams don't need Object Sim to benefit from this: Cloud Engine provides a highly scalable Kubernetes backend for any integrated simulator, and its frontend is optimized to make rich data available to the user immediately. This way, teams can run simulations in CARLA and then play back results, view logs and plots, and analyze observer rules directly in Cloud Engine (Figure 3).
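
As a rough illustration of what each cloud worker runs, the sketch below uses CARLA's Python API to connect to a headless server, switch to deterministic synchronous mode, and record the run with CARLA's built-in recorder so results can be played back later. The host, port, output path, and tick count are assumptions.

```python
import carla  # CARLA's Python API, shipped with the simulator

# Connect to a (headless) CARLA server; host and port are assumptions.
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Fixed-step synchronous mode keeps cloud runs deterministic and reproducible.
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.05  # 20 Hz simulation steps
world.apply_settings(settings)

# CARLA's built-in recorder captures the run for later playback and analysis.
client.start_recorder("/tmp/run_0001.log")
try:
    for _ in range(600):  # 30 simulated seconds at 20 Hz
        world.tick()      # advance the simulation one fixed step
finally:
    client.stop_recorder()
```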

Analyzing Performance and Coverage

When development teams work on new features or improvements to their current AV stack, they need to understand its overall performance (i.e., how well the software performs on simulation tests) and how this performance has progressed or regressed compared to previous versions. To decide which simulations to run next, teams also need to measure their AV stack's coverage (i.e., how much of the possible scenario space has already been tested).
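
As a toy illustration of the coverage idea, a team might discretize a logical scenario into a parameter grid and measure the fraction of grid points with at least one completed run. The parameter names and values below are illustrative assumptions:

```python
from itertools import product

# A toy model of scenario space: a grid of logical-scenario parameters.
# The parameters and values are illustrative assumptions.
PARAMETER_SPACE = {
    "ego_speed_mps": [10, 15, 20, 25],
    "lead_gap_m":    [10, 20, 40],
    "weather":       ["clear", "rain", "fog"],
}

def coverage(tested: set[tuple]) -> float:
    """Fraction of the parameter grid covered by at least one simulation."""
    all_points = set(product(*PARAMETER_SPACE.values()))
    return len(tested & all_points) / len(all_points)

# e.g., concrete runs pulled from a results database (hypothetical values)
tested_points = {(10, 20, "rain"), (20, 40, "clear"), (25, 10, "fog")}
print(f"Scenario coverage: {coverage(tested_points):.1%}")  # -> 8.3%
```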

Figure 3: Cloud Engine shows the CI results of a CARLA simulation, including a playback UI, logs, plots, and red markers to indicate problematic incidents

Based on CARLA simulation results, Validation Toolset allows AV teams to analyze their stack's performance and coverage (Figure 4). Teams can apply the same evaluation rules across scenarios and extract important safety, comfort, and performance metrics for rigorous analysis. They can then combine these performance and coverage analytics with results from real-world drives and track testing to build a comprehensive safety case.

Figure 4: Validation Toolset GUI to analyze AV stack performance based on CARLA simulation results
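
As a concrete illustration of what such an evaluation rule might look like, the sketch below computes the minimum time-to-collision (TTC) over per-frame records, using the simple constant-velocity approximation TTC = gap / closing speed, and turns it into a pass/fail verdict that could be traced back to a safety requirement. The frame schema and threshold are illustrative assumptions, not Validation Toolset's actual rule format.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One per-frame record extracted from a simulation log
    (fields are illustrative, not a real log schema)."""
    time_s: float
    ego_speed_mps: float
    gap_to_lead_m: float       # longitudinal gap to the lead vehicle
    closing_speed_mps: float   # positive when the ego is closing the gap

def min_time_to_collision(frames: list[Frame]) -> float:
    """Worst (smallest) TTC seen in a run, using the constant-velocity
    approximation TTC = gap / closing speed."""
    ttcs = [
        f.gap_to_lead_m / f.closing_speed_mps
        for f in frames
        if f.closing_speed_mps > 0.0
    ]
    return min(ttcs) if ttcs else float("inf")

def evaluate(frames: list[Frame], ttc_threshold_s: float = 2.0) -> bool:
    """Pass/fail verdict traceable to a safety requirement, e.g.,
    'the ego vehicle shall maintain TTC >= 2 s to the lead vehicle'."""
    return min_time_to_collision(frames) >= ttc_threshold_s
```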

Conclusion

The right workflows and tools can help AV teams run large-scale simulations and validate their entire stack. Cloud Engine and Validation Toolset support these workflows while integrating easily with the CARLA simulator. This way, teams can save hundreds of engineering hours otherwise spent catching regressions or waiting for simulations to finish.

Schedule a product demo with our engineering team if you use CARLA or another simulation tool and would like to learn more about integrating your workflow with Cloud Engine or Validation Toolset.

*Note: Object Sim was previously known as Simian, Cloud Engine was previously known as Orbis, and Validation Toolset was previously known as Basis.