Overcoming Rigorous Verification Challenges in Autonomous Systems Development

February 1, 2021

Safe and reliable operation is a key requirement in autonomous systems development. It is thus desirable to employ methods that directly contribute to the reliability and robustness of the hardware and software subsystems designed to safely control vehicle motion. Approaches based on functional safety and formal methods are particularly attractive because of the rigor of the underlying mathematics and modeling techniques.

However, the sheer complexity and variability of the environment, the vehicle systems, and the interplay of components in an autonomous system quickly lead to practical and fundamental limitations. These stem both from the curse of dimensionality and from the rapid growth of the bounds on model behavior for the classes of equations involved. It would be ideal to obtain strong guarantees from methods such as type systems, theorem proving, or reachability analysis. Doing so requires a formalization of operational design domains, system designs, and functional requirements. While this is feasible in principle, it demands significant simplifications and approximations to remain tractable in practice.

Additionally, building autonomous vehicles (AVs) for real-world applications presents a wide range of other considerations: How should the organization effectively define and track the work of large engineering teams that may practice agile methods? How should teams effectively use real-world tests and drive logs for system development? The problem is compounded when it is difficult to collect ground-truth information. To complicate matters further, software versions and hardware configurations are constantly changing due to insights from research and field operational tests.

In this blog post, the Applied Intuition team illustrates how quickly a model of an AV-type system can become intractable from a formal methods point of view and offers insights into what can nonetheless be done to benefit the development and productionization of autonomous systems.

Automatic Emergency Braking

Consider an Automatic Emergency Braking (AEB) system on a vehicle driving down a straight single-lane road. As bicycles swerve into the road or another car cuts in, the system must detect the situation and come to a full stop in time to avoid a collision (Figure 1). Such a system, including sensing, decision making, and control, can be modeled together with the test scenario as a hybrid system. This allows risk and hazard assessment using a framework such as ISO 26262 (read more on ISO 26262 and simulation in this post).

Figure 1: Using simulation to test the AEB system as bicycles cross the street.

A good example is the case study by Duracz et al. (2015). Their rigorous simulation computes validity bounds for all relevant quantities produced by the underlying solvers. The problem is made tractable by (a) using a one-dimensional world model with second-order linear dynamics, (b) keeping the number of states in each automaton small, and (c) having state transitions fire only rarely and never form loops.
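To make the structure of such a model concrete, here is a minimal Python sketch of a one-dimensional AEB hybrid automaton with two modes (CRUISE and BRAKE) and double-integrator-style dynamics. The deceleration, trigger gap, and step size are illustrative assumptions, and the plain Euler simulation does not reproduce the validated numerics used in Duracz et al.'s rigorous simulator:

```python
# Minimal sketch of a 1D AEB hybrid automaton: two modes, second-order
# (double-integrator) dynamics, a single guarded transition. Illustration
# only; constants below are assumed values, not from the cited study.

DT = 0.01           # integration step [s]
DECEL = -6.0        # braking deceleration [m/s^2] (assumed)
TRIGGER_GAP = 20.0  # gap at which the AEB controller fires [m] (assumed)

def simulate_aeb(gap0, ego_v0, obstacle_v=0.0, t_max=10.0):
    """Simulate the ego closing on an obstacle; return (collided, final_gap)."""
    gap, ego_v, mode, t = gap0, ego_v0, "CRUISE", 0.0
    while t < t_max and ego_v > 0.0:
        # Discrete transition: fire the brake once the gap drops below the threshold.
        if mode == "CRUISE" and gap <= TRIGGER_GAP:
            mode = "BRAKE"
        # Continuous dynamics: constant speed in CRUISE, constant decel in BRAKE.
        accel = DECEL if mode == "BRAKE" else 0.0
        ego_v = max(0.0, ego_v + accel * DT)
        gap += (obstacle_v - ego_v) * DT
        if gap <= 0.0:
            return True, gap
        t += DT
    return False, gap

if __name__ == "__main__":
    print(simulate_aeb(gap0=50.0, ego_v0=15.0))   # stops in time
    print(simulate_aeb(gap0=25.0, ego_v0=25.0))   # too fast, collides
```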

The approximations used for rigorous simulation here are reasonable for design-time verification. For more complex and realistic cases (especially once vehicles and bicycles can also steer), it becomes harder to compute bounds. At some point, it becomes impractical to require hard guarantees on system behavior because the "buffer" around the nominal system behavior ends up too large to be informative.
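To see how quickly such buffers can grow, consider a crude interval-arithmetic sketch: propagating modest, assumed uncertainty in initial speed and achievable deceleration through the stopping-distance formula already yields a guaranteed bound wider than the nominal answer itself. This is not a reachability tool, just an illustration of the effect:

```python
# Toy illustration of bound growth: enclose d = v^2 / (2*a) over intervals
# of initial speed v and braking deceleration a. With modest uncertainty the
# width of the guaranteed bound dwarfs the nominal value.

def stopping_distance_interval(v_lo, v_hi, a_lo, a_hi):
    """Interval enclosure of v^2 / (2a) for v in [v_lo, v_hi], a in [a_lo, a_hi]."""
    return (v_lo ** 2) / (2 * a_hi), (v_hi ** 2) / (2 * a_lo)

nominal = stopping_distance_interval(20.0, 20.0, 6.0, 6.0)
uncertain = stopping_distance_interval(18.0, 22.0, 4.0, 8.0)  # +-10% speed, wet vs. dry braking
print(nominal)    # ~ (33.3, 33.3) m
print(uncertain)  # ~ (20.3, 60.5) m: a ~40 m wide "buffer" around a ~33 m nominal answer
```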

Jack-Knife Avoidance

Another example is the avoidance of jack-knifing in trucks. Dunn et al. (2003) analyzed brake-in-turn scenarios for AEB on heavy trucks. They formulated a hybrid systems model of the vehicle dynamics and analyzed jack-knife stability under several combinations of load, speed, and ABS behavior. This type of study illustrates the use of simulation in assessing how to meet proposed NHTSA regulations. While it uses fewer approximations, its main drawback compared to the previous AEB study is the lack of bounds on system performance. This is no coincidence: jack-knifing inherently involves rotations, and capturing the ability of vehicles to change their headings requires incorporating the non-holonomic system equations into the simulation. This makes it significantly harder to determine bounds on the simulated states.
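As an illustration of where the difficulty comes from, the sketch below uses a simplified kinematic tractor-trailer model (not the dynamic model with load transfer and ABS analyzed by Dunn et al.) whose nonlinear, non-holonomic heading equations already show how the articulation angle can diverge in a hard turn. The wheelbase, trailer length, and jack-knife threshold are assumed values:

```python
import math

# Simplified kinematic tractor-trailer model; headings change only through
# motion (non-holonomic), and the articulation angle obeys a nonlinear ODE.
L_TRACTOR = 4.0   # tractor wheelbase [m] (assumed)
L_TRAILER = 10.0  # hitch-to-trailer-axle distance [m] (assumed)
DT = 0.01

def simulate_articulation(v, steer, t_max=5.0, jackknife_deg=45.0):
    """Return True if the articulation angle exceeds the jack-knife threshold."""
    theta_tractor, theta_trailer, t = 0.0, 0.0, 0.0
    while t < t_max:
        theta_tractor += (v / L_TRACTOR) * math.tan(steer) * DT
        theta_trailer += (v / L_TRAILER) * math.sin(theta_tractor - theta_trailer) * DT
        if abs(math.degrees(theta_tractor - theta_trailer)) > jackknife_deg:
            return True
        t += DT
    return False

print(simulate_articulation(v=20.0, steer=math.radians(5)))   # gentle turn: False
print(simulate_articulation(v=20.0, steer=math.radians(25)))  # hard turn at speed: True
```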

One way to alleviate this in practice is to resort to a sampling-based approach that looks at system behavior traces across many variations of a scenario. While it is not practical to achieve formal guarantees, statistical methods can be used to assess risk and severity.
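A minimal sketch of this workflow, with made-up parameter ranges and a toy pass/fail check standing in for a full simulation run, might look as follows:

```python
import math
import random

def scenario_fails(rng):
    """Toy stand-in for one simulation run of a sampled scenario variation."""
    ego_speed = rng.uniform(10.0, 30.0)    # m/s
    trigger_gap = rng.uniform(15.0, 30.0)  # m, gap when braking starts
    decel = rng.uniform(4.0, 8.0)          # m/s^2, wet vs. dry surface
    stopping_distance = ego_speed ** 2 / (2 * decel)
    return stopping_distance > trigger_gap  # fails if it cannot stop in time

def estimate_failure_rate(n=10_000, seed=0):
    """Monte Carlo estimate of the failure probability with a ~95% interval."""
    rng = random.Random(seed)
    failures = sum(scenario_fails(rng) for _ in range(n))
    p = failures / n
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)  # normal-approximation CI
    return p, half_width

p, hw = estimate_failure_rate()
print(f"estimated failure probability: {p:.3f} +/- {hw:.3f}")
```

Instead of a hard guarantee, the output is a statistical statement about risk across the sampled variations, which can then feed severity and exposure assessments.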

Figure 2: Assessing the truck's ability to avoid jack-knifing in simulation.

From ADAS to AV

It seems safe to claim that the current state of the art in formal methods allows the verification of specific functional aspects for a subset of systems that can be classified as ADAS or Level 2. Raising the level of autonomy quickly requires much more complex perception, planning, and actuation systems. Formal methods can already play a role in the design and implementation of these systems. For instance, Pek et al. (2019) propose an online verification framework that can be used for trajectory planning.
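As a rough illustration of the online-verification idea (a one-dimensional longitudinal simplification, not Pek et al.'s actual framework), a planner could accept a trajectory only if every planned state still admits a collision-free emergency stop against a worst-case prediction of the lead vehicle. All constants below are assumptions:

```python
EGO_MAX_DECEL = 6.0   # m/s^2, assumed braking capability
REACTION_TIME = 0.3   # s, assumed delay before the fail-safe engages
SAFETY_MARGIN = 2.0   # m, required residual gap

def fallback_is_safe(ego_pos, ego_speed, lead_pos, lead_speed, lead_max_decel=8.0):
    """Is an emergency stop from this planned state guaranteed collision-free?"""
    ego_stop = ego_pos + ego_speed * REACTION_TIME + ego_speed ** 2 / (2 * EGO_MAX_DECEL)
    lead_stop = lead_pos + lead_speed ** 2 / (2 * lead_max_decel)  # worst case: lead brakes hard
    return ego_stop + SAFETY_MARGIN <= lead_stop

def trajectory_is_verified(planned_states, lead_predictions):
    """Accept the plan only if every planned state still admits a safe fallback."""
    return all(fallback_is_safe(ep, ev, lp, lv)
               for (ep, ev), (lp, lv) in zip(planned_states, lead_predictions))

# Example: ego plan over three steps (position, speed) vs. predicted lead states.
plan = [(0.0, 15.0), (7.5, 15.0), (15.0, 15.0)]
lead = [(40.0, 10.0), (45.0, 10.0), (50.0, 10.0)]
print(trajectory_is_verified(plan, lead))
```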

But to meet verification and validation requirements for bringing autonomous systems to market, more practically tractable methods are required for the overall system. A mix of methods is currently being brought to bear. This is reflected in efforts to extend standardization to cover the safety of the intended functionality (SOTIF) and in the heavy reliance on extensive real-world testing in autonomous vehicle deployment programs (see autonomous vehicle regulations in California and Arizona).

While it is tempting to use formal methods "for everything", they are best viewed as one tool among many in the collection of approaches for assessing safety and functionality. In the AV development and testing process, formal methods don't scale to the realities of self-driving: there are too many axes along which complexity grows and too many places where highly non-linear effects dominate. For individual components or highly abstracted system views (e.g., ODD or requirement planning), rigorous methods are valuable tools. For the overall goal of deploying safe and reliable systems, it is crucial to employ effective workflows that can incorporate various methods rather than making each individual method as general as possible.

Applied Intuition’s Approach

The Applied Intuition team takes practical approaches to solving problems and builds these into tools for autonomous driving system development. We believe trade-offs have to be made between absolute certainty and pragmatic solutions that can be implemented today. As an example, to measure an autonomous driving system's safety readiness, we recommend quantifying the system's performance in relation to an ODD. This helps tackle the combinatorial explosion of scenario variants and requirements that need to be handled (see this article for details). It is of utmost importance that all test results are carefully captured. Engineers worldwide use our tools to quickly identify which scenarios they should test next to improve system safety.
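As a toy illustration of quantifying performance relative to an ODD (the dimensions, results, and coverage logic below are made up and far simpler than any real ODD taxonomy or tooling), one can enumerate the combinations of ODD attributes, mark which ones have test results, and surface untested combinations as candidates for the next round of testing:

```python
from itertools import product

# Hypothetical ODD dimensions and test results, for illustration only.
ODD_DIMENSIONS = {
    "weather":  ["clear", "rain"],
    "lighting": ["day", "night"],
    "actor":    ["vehicle", "cyclist", "pedestrian"],
}
results = {
    ("clear", "day", "vehicle"): "pass",
    ("clear", "day", "cyclist"): "pass",
    ("rain",  "day", "vehicle"): "fail",
}

all_combos = list(product(*ODD_DIMENSIONS.values()))
tested = [c for c in all_combos if c in results]
untested = [c for c in all_combos if c not in results]
passed = [c for c in tested if results[c] == "pass"]

print(f"ODD coverage: {len(tested)}/{len(all_combos)} combinations tested, {len(passed)} passing")
print("next scenarios to test:", untested[:3])
```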

The Applied Intuition team also combines bottom-up, data-driven approaches with top-down, ODD/requirement-driven approaches. For example, we enable development teams to create test scenarios directly from real-world drive logs and evaluate the system's performance against the original requirements, as shown in the publications with May Mobility and Kodiak.
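As a simplified sketch of the bottom-up direction, one might scan a drive log for the tightest-gap event and turn it into scenario parameters for re-simulation. The log format, field names, and threshold below are assumptions for illustration, not the format of any particular logging or scenario tool:

```python
# Synthetic drive log: one record per timestamp with ego speed [m/s] and gap [m].
drive_log = [
    {"t": 0.0, "ego_speed": 14.0, "gap": 45.0},
    {"t": 1.0, "ego_speed": 14.2, "gap": 30.0},
    {"t": 2.0, "ego_speed": 13.5, "gap": 12.0},  # a cut-in closes the gap
    {"t": 3.0, "ego_speed": 9.0,  "gap": 18.0},
]

def extract_cut_in_scenario(log, gap_threshold=15.0):
    """Return scenario parameters for the tightest-gap event, if one exists."""
    event = min(log, key=lambda r: r["gap"])
    if event["gap"] > gap_threshold:
        return None
    return {
        "initial_ego_speed": event["ego_speed"],
        "initial_gap": event["gap"],
        "source_timestamp": event["t"],
    }

print(extract_cut_in_scenario(drive_log))
```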

If you’re interested in learning more about practical tools and solutions that can be implemented today, get in touch with our engineering team.