Technology Preview: Scenario-based Virtual Validation for ADAS/AD

Hans J. Holberg
Board Member
Business Development Autonomous Driving
BTC Embedded Systems AG

Over the past years, the automotive industry has seen tremendous innovation and progress in the area of ADAS/AD, moving towards the goal of Level 4 and Level 5 autonomous driving. But there are still unsolved challenges, in particular in the area of system-level validation:

  1. The millions of scenarios that are needed cannot be created manually
  2. There are no metrics and methods to measure how complete the testing is
  3. There is no clear strategy for answering the challenge of homologation

It turns out that at BTC Embedded Systems we have already solved comparable challenges in the area of "traditional" embedded software development. We pioneered topics like model checking, automatic test generation, formal requirement specification and automatic requirements observation, which today are state-of-the-art methods that allow engineers all around the world to cope with growing system complexity and fulfill the ISO 26262 standard.

We are now applying the same core technologies to the system-level verification of ADAS and AD systems. Based on an intuitive graphical language for defining abstract scenarios, we use a combination of model checking and AI to intelligently generate an optimized set of concrete scenarios that are executed in a virtual verification environment. An automatic coverage and verdict analysis makes it possible to measure the completeness of the set of scenarios, creating the argumentation needed for the homologation process.
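To make this overall flow more tangible, here is a minimal, purely illustrative Python sketch of the loop from abstract scenario to concrete scenarios, simulation and verdict. All names and the simple sampling step are assumptions for this sketch; they do not represent BTC's actual tooling, in which the generation step combines model checking and AI.

```python
import random
from dataclasses import dataclass

@dataclass
class AbstractScenario:
    name: str
    parameter_ranges: dict  # e.g. {"ego_speed_kph": (30.0, 130.0)}

def generate_concrete(scenario: AbstractScenario, count: int) -> list:
    """Stand-in for the intelligent generation step: derive concrete
    parameter sets from the abstract scenario's ranges."""
    return [
        {name: random.uniform(lo, hi)
         for name, (lo, hi) in scenario.parameter_ranges.items()}
        for _ in range(count)
    ]

def simulate(params: dict) -> dict:
    """Placeholder for executing one concrete scenario in the virtual
    verification environment; returns a recorded trace summary."""
    return {"min_gap_m": max(0.0, 60.0 - 0.4 * params["ego_speed_kph"])}

def verdict(trace: dict) -> str:
    """Placeholder verdict analysis: a single safety rule on the trace."""
    return "PASS" if trace["min_gap_m"] > 2.0 else "FAIL"

cut_in = AbstractScenario("cut_in", {"ego_speed_kph": (30.0, 130.0)})
results = [verdict(simulate(p)) for p in generate_concrete(cut_in, 10)]
print(f"{results.count('PASS')}/{len(results)} concrete scenarios passed")
```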

How can I create the needed millions of scenarios?

In the automotive industry, it is widely accepted that the validation of autonomous vehicles requires a huge number of scenarios, which can only be tested in a virtual environment. It is equally obvious that this number of scenarios cannot be created manually.

Recorded real-world scenarios can help, but recording the required number of scenarios is economically infeasible. Therefore, attempts are being made to introduce high-level languages that describe classes of scenarios. The problem is that they are usually text-based and therefore hard to use and hard to understand.

To solve this, BTC introduces an intuitive, graphical high-level language for describing abstract traffic scenarios. The language describes an abstract scenario in multiple phases, each of which is modeled graphically. A so-called pre-simulation visualizes the dynamic behavior of the scenario to ensure that the scenario is feasible. These abstract scenarios are the basis for all subsequent steps in the validation process, including automatic test generation and automatic scenario observation.
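The graphical language itself cannot be reproduced here, but the following minimal Python sketch illustrates the underlying idea of a scenario described as a sequence of phases with parameter ranges, together with a very rough feasibility check in the spirit of the pre-simulation. The data structure, the phase names and the check are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    duration_s: tuple        # allowed duration range (min, max)
    gap_to_ego_m: tuple      # allowed distance range to the ego vehicle

@dataclass
class AbstractScenario:
    name: str
    phases: list

def feasible(scenario: AbstractScenario) -> bool:
    """Very rough stand-in for a pre-simulation feasibility check."""
    # every range must be non-empty
    for p in scenario.phases:
        if p.duration_s[0] > p.duration_s[1] or p.gap_to_ego_m[0] > p.gap_to_ego_m[1]:
            return False
    # consecutive phases must overlap in distance so a transition is possible
    for a, b in zip(scenario.phases, scenario.phases[1:]):
        if a.gap_to_ego_m[1] < b.gap_to_ego_m[0] or b.gap_to_ego_m[1] < a.gap_to_ego_m[0]:
            return False
    return True

cut_in = AbstractScenario("cut_in", [
    Phase("approach", (2.0, 10.0), (20.0, 60.0)),
    Phase("lane_change", (1.0, 4.0), (10.0, 30.0)),
    Phase("follow", (3.0, 15.0), (8.0, 25.0)),
])
print(feasible(cut_in))  # True for these ranges
```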

Although the language is proprietary, BTC is aiming for full compatibility with the OpenSCENARIO standard and is therefore actively participating in the corresponding ASAM groups.

How can I generate the right test cases?

In order to get from an abstract high-level scenario to concrete, simulatable test cases, we have to define values for a large number of parameters. One important aspect is the coverage of the ODD (Operational Design Domain), which describes the specific operating conditions in which the vehicle is supposed to be operated, for example road types, speed ranges or even weather conditions. Due to the almost infinite number of possible parameter sets, we will not be able to test all possible combinations. On the other hand, variation strategies like fuzzing, which are based on random algorithms, will most likely not cover the interesting and safety-critical corner cases.

BTC addresses this problem with a two-stage approach. The first stage separates the parameter ranges into manageable parts while obeying a given variation strategy, such as a probabilistic distribution. The second stage explores these parts with AI technology, iterating the SUT simulation within the remaining parameter limits of each part to detect SUT-specific weaknesses. The outcome of this weakness detection step is either a test case that leads to a critical or unwanted behavior of the SUT, or a report, in probabilistic terms, on the absence of weaknesses.
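The following sketch illustrates this two-stage idea under simplifying assumptions: the parameter ranges, the grid-based partitioning, the safety-margin function and the random search in the second stage are all hypothetical, and the actual second stage described above uses AI technology rather than plain random sampling.

```python
import itertools
import random

RANGES = {"ego_speed_kph": (30.0, 130.0), "cut_in_gap_m": (5.0, 60.0)}
CELLS_PER_AXIS = 4        # stage 1: coarse partition of each parameter range
BUDGET_PER_CELL = 50      # stage 2: simulation budget per cell

def simulate_margin(params: dict) -> float:
    """Placeholder SUT simulation returning a safety margin (e.g. a minimum
    distance); values <= 0 count as critical/unwanted behavior."""
    return params["cut_in_gap_m"] - 0.25 * params["ego_speed_kph"]

def cells():
    """Stage 1: yield every cell of the partitioned parameter space."""
    axes = []
    for name, (lo, hi) in RANGES.items():
        step = (hi - lo) / CELLS_PER_AXIS
        axes.append([(name, (lo + i * step, lo + (i + 1) * step))
                     for i in range(CELLS_PER_AXIS)])
    for combo in itertools.product(*axes):
        yield dict(combo)

# Stage 2: explore each cell and keep the worst (most critical) parameter set
weaknesses = []
for cell in cells():
    worst = None
    for _ in range(BUDGET_PER_CELL):
        params = {n: random.uniform(lo, hi) for n, (lo, hi) in cell.items()}
        margin = simulate_margin(params)
        if worst is None or margin < worst[0]:
            worst = (margin, params)
    if worst[0] <= 0.0:                      # weakness found in this cell
        weaknesses.append(worst[1])

print(f"cells with detected weaknesses: {len(weaknesses)} of {CELLS_PER_AXIS ** len(RANGES)}")
```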

This leads to a smaller number of test cases for situations that are less critical (for example because all traffic participants are far away from each other) and therefore ensures that the total number of test cases does not explode.

Will my simulation scenarios really behave the way I expect them to?

If the behavior of all traffic participants is pre-defined, the simulation will most likely not behave as expected. This is because such a scenario contains assumed or expected trajectories for both the SUT and the surrounding traffic, such as other vehicles, pedestrians, bikes, etc. One example is the validation of a highly autonomous system, in which the trajectory of the SUT cannot be controlled by the test case. A mismatch between the expected and the actual trajectory may then require the surrounding traffic to be readjusted in order to keep the coordinated scenario stable and in line with the intended behavior.

We address this problem with an approach we call Reactive Traffic Control (RTC). The RTC is an intelligent piece of software that is executed by the simulator during test execution and can adapt the behavior of the traffic to the behavior of the ego vehicle, thereby trying to ensure that all phases defined in the abstract scenario are actually present.
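As an illustration of what "adapting the traffic to the ego vehicle" can mean at the level of a single simulation step, here is a small hypothetical sketch: a traffic vehicle is re-planned every step so that the gap to the uncontrollable ego vehicle converges to the gap required by the current phase. The simple proportional controller and the point-mass kinematics are assumptions for this sketch, not the actual RTC implementation.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    position_m: float
    speed_mps: float

def rtc_step(ego: VehicleState, traffic: VehicleState,
             target_gap_m: float, dt: float = 0.1) -> VehicleState:
    """One reactive control step: nudge the traffic vehicle's speed so that the
    gap to the (uncontrollable) ego converges to the gap required by the
    current scenario phase."""
    gap_error = (traffic.position_m - ego.position_m) - target_gap_m
    relative_speed = traffic.speed_mps - ego.speed_mps
    # proportional control on the gap error, damped by the relative speed,
    # with a bounded, plausible acceleration range
    accel = max(-3.0, min(2.0, -0.5 * gap_error - 1.0 * relative_speed))
    new_speed = max(0.0, traffic.speed_mps + accel * dt)
    return VehicleState(traffic.position_m + new_speed * dt, new_speed)

# usage: the ego is driven by the SUT (here simplified to constant speed),
# the traffic vehicle ahead of it is re-planned in every simulation step
ego = VehicleState(position_m=0.0, speed_mps=25.0)
traffic = VehicleState(position_m=40.0, speed_mps=20.0)
for _ in range(300):
    ego = VehicleState(ego.position_m + ego.speed_mps * 0.1, ego.speed_mps)
    traffic = rtc_step(ego, traffic, target_gap_m=15.0)
print(round(traffic.position_m - ego.position_m, 1))  # converges towards 15 m
```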

How can I find out whether my tests pass or fail?

In traditional requirements-based testing, we typically check the I/O behavior of the SUT against expected values. In contrast, the scenario-based testing of ADAS/AD applications requires a more sophisticated approach that monitors the traffic situation and the SUT behavior with respect to traffic rules, safety requirements or quality-of-service rules (for example a certain goal for fuel economy). Judging whether a test passes or fails requires these rules to be combined, for example to decide whether a behavior is safe and legally compliant.

To solve this, BTC provides a dedicated graphical language for an intuitive specification of the required rules. This formal language allows the automatic generation of corresponding observers, which are then used to automatically analyze the simulation results.
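As a rough illustration of the observer idea, the sketch below evaluates two hand-written rules over a recorded trace. The rule patterns, the trace format and the signal names are assumptions for this sketch; in the approach described above, the rules are specified graphically and the observers are generated automatically.

```python
from typing import Callable

Trace = list  # one dict of signal values per simulation step

def always(predicate: Callable) -> Callable:
    """Observer factory: the predicate must hold in every step of the trace."""
    return lambda trace: all(predicate(step) for step in trace)

def response_within(trigger: Callable, reaction: Callable, steps: int) -> Callable:
    """Observer factory: whenever `trigger` holds, `reaction` must hold within
    `steps` simulation steps (a simple bounded-response pattern)."""
    def observer(trace: Trace) -> bool:
        for i, step in enumerate(trace):
            if trigger(step) and not any(reaction(s) for s in trace[i:i + steps + 1]):
                return False
        return True
    return observer

observers = {
    "keep_safe_distance": always(lambda s: s["gap_m"] > 2.0),
    "brake_on_close_cut_in": response_within(
        lambda s: s["gap_m"] < 10.0, lambda s: s["brake_cmd"] > 0.3, steps=5),
}

# a tiny synthetic trace: the gap shrinks and the SUT starts braking at step 8
trace = [{"gap_m": 12.0 - 0.5 * i, "brake_cmd": 0.0 if i < 8 else 0.6}
         for i in range(20)]
verdicts = {name: ("PASS" if obs(trace) else "FAIL") for name, obs in observers.items()}
print(verdicts)
```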

Did I test enough to enable homologation?

In order to achieve homologation for an autonomous vehicle, we need the right metrics to evaluate the completeness of the scenario-based validation strategy. The number of kilometers or miles that have been simulated (or driven in reality) is neither a useful nor an economically feasible metric.

Instead, a feasible argument for the required confidence in an ADAS/AD system should be based on a mix of the metrics described above, including scenario coverage, coverage of the ODD and weakness detection.
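Purely for illustration, the following sketch shows how such metrics could be combined into a single confidence report. The metric definitions and thresholds are hypothetical assumptions, not normative homologation criteria.

```python
def scenario_coverage(executed: set, defined: set) -> float:
    """Fraction of the defined abstract scenarios that have been exercised."""
    return len(executed & defined) / len(defined)

def odd_coverage(covered_cells: int, total_cells: int) -> float:
    """Fraction of the partitioned ODD parameter space that has been explored."""
    return covered_cells / total_cells

def confidence_report(executed, defined, covered_cells, total_cells, open_weaknesses):
    """Combine the individual metrics into one (illustrative) report."""
    return {
        "scenario_coverage": scenario_coverage(executed, defined),
        "odd_coverage": odd_coverage(covered_cells, total_cells),
        "open_weaknesses": open_weaknesses,
        "release_candidate": (
            scenario_coverage(executed, defined) >= 0.95
            and odd_coverage(covered_cells, total_cells) >= 0.90
            and open_weaknesses == 0
        ),
    }

print(confidence_report({"cut_in", "overtake"}, {"cut_in", "overtake", "jaywalker"},
                        covered_cells=14, total_cells=16, open_weaknesses=0))
```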

If you are interested in learning more, don't hesitate to contact me.

Hans J. Holberg

holberg@btc-es.de
Connect with me on LinkedIn