Rigorous Simulation-based Testing for Autonomous Driving Systems -- Targeting the Achilles' Heel of Four Open Autopilots
- URL: http://arxiv.org/abs/2405.16914v1
- Date: Mon, 27 May 2024 08:06:21 GMT
- Title: Rigorous Simulation-based Testing for Autonomous Driving Systems -- Targeting the Achilles' Heel of Four Open Autopilots
- Authors: Changwen Li, Joseph Sifakis, Rongjie Yan, Jian Zhang
- Abstract summary: We propose a rigorous test method based on breaking down scenarios into simple ones.
We generate test cases for critical configurations that place the vehicle under test in critical situations.
Test cases reveal major defects in Apollo, Autoware, and the Carla and LGSVL autopilots.
- Score: 6.229766691427486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simulation-based testing remains the main approach for validating Autonomous Driving Systems. We propose a rigorous test method based on breaking down scenarios into simple ones, taking into account the fact that autopilots make decisions according to traffic rules whose application depends on local knowledge and context. This leads us to consider the autopilot as a dynamic system receiving three different types of vistas as input, each characterizing a specific driving operation and a corresponding control policy. The test method for the considered vista types generates test cases for critical configurations that place the vehicle under test in critical situations characterized by the transition from cautious behavior to progression in order to clear an obstacle. The test cases thus generated are realistic, i.e., they determine the initial conditions from which safe control policies are possible, based on knowledge of the vehicle's dynamic characteristics. Constraint analysis identifies the most critical test cases, whose success implies the validity of less critical ones. Test coverage can therefore be greatly simplified. Critical test cases reveal major defects in Apollo, Autoware, and the Carla and LGSVL autopilots. Defects include accidents, software failures, and traffic rule violations that would be difficult to detect by random simulation, as the test cases lead to situations characterized by finely-tuned parameters of the vehicles involved, such as their relative position and speed. Our results corroborate real-life observations and confirm that autonomous driving systems still have a long way to go before offering acceptable safety guarantees.
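The notion of a critical configuration above hinges on the vehicle's dynamic characteristics. As a rough illustration only (a minimal sketch assuming a constant-deceleration braking model and a fixed reaction delay, not the authors' constraint analysis), the following shows how an initial gap/speed pair could be flagged as critical:

```python
# Minimal sketch (not the authors' tooling): classify an initial configuration
# (gap to obstacle, ego speed) as controllable or critical, assuming a
# constant-deceleration braking model and a reaction delay.

from dataclasses import dataclass

@dataclass
class VehicleDynamics:
    max_decel: float      # maximum emergency deceleration [m/s^2]
    reaction_time: float  # perception + actuation delay [s]

def stopping_distance(speed: float, dyn: VehicleDynamics) -> float:
    """Distance travelled before standstill: reaction phase + braking phase."""
    return speed * dyn.reaction_time + speed ** 2 / (2.0 * dyn.max_decel)

def is_critical(gap_to_obstacle: float, speed: float, dyn: VehicleDynamics,
                margin: float = 2.0) -> bool:
    """A configuration is critical when the available gap barely exceeds (or is
    below) the distance needed to stop, i.e. the test forces the transition
    from progression to cautious behaviour."""
    return gap_to_obstacle <= stopping_distance(speed, dyn) + margin

if __name__ == "__main__":
    dyn = VehicleDynamics(max_decel=6.0, reaction_time=0.5)
    for gap in (15.0, 30.0, 60.0):
        print(gap, is_critical(gap, speed=13.9, dyn=dyn))  # 13.9 m/s = 50 km/h
```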
Related papers
- Exploring the Causality of End-to-End Autonomous Driving [57.631400236930375]
We propose a comprehensive approach to explore and analyze the causality of end-to-end autonomous driving.
Our work is the first to unveil the mystery of end-to-end autonomous driving and turn the black box into a white one.
arXiv Detail & Related papers (2024-07-09T04:56:11Z)
- PAFOT: A Position-Based Approach for Finding Optimal Tests of Autonomous Vehicles [4.243926243206826]
This paper proposes PAFOT, a position-based approach testing framework.
PAFOT generates adversarial driving scenarios to expose safety violations of Automated Driving Systems.
Experiments show that PAFOT can effectively generate safety-critical scenarios that crash ADSs and can find collisions within a short simulation time.
arXiv Detail & Related papers (2024-05-06T10:04:40Z)
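PAFOT's own search strategy is not reproduced here; the following is only a sketch of the general position-based idea, assuming a hypothetical `run_simulation` hook that places a challenger vehicle at given longitudinal/lateral offsets and returns the minimum ego-challenger distance observed:

```python
# Illustrative sketch only (not PAFOT itself): search over the positions of a
# surrounding vehicle and keep the placement that minimizes the ego vehicle's
# closest approach during the episode.

import random
from typing import Callable, Tuple

def position_based_search(
    run_simulation: Callable[[float, float], float],  # (lon, lat offsets) -> min distance [m]
    iterations: int = 50,
    collision_threshold: float = 0.5,
) -> Tuple[Tuple[float, float], float]:
    best_offsets, best_distance = (0.0, 0.0), float("inf")
    for _ in range(iterations):
        lon = random.uniform(-20.0, 20.0)   # longitudinal offset of the challenger
        lat = random.uniform(-3.5, 3.5)     # lateral offset (about one lane width)
        distance = run_simulation(lon, lat)
        if distance < best_distance:
            best_offsets, best_distance = (lon, lat), distance
        if distance <= collision_threshold:  # safety violation found, stop early
            break
    return best_offsets, best_distance
```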
- DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the importance of safety in driving systems, no solution to the problem of adapting MOT to domain shift under test-time conditions has been proposed.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z)
- Identifying and Explaining Safety-critical Scenarios for Autonomous Vehicles via Key Features [5.634825161148484]
This paper uses Instance Space Analysis (ISA) to identify the significant features of test scenarios that affect their ability to reveal the unsafe behaviour of AVs.
ISA identifies the features that best differentiate safety-critical scenarios from normal driving and visualises the impact of these features on test scenario outcomes (safe/unsafe) in 2D.
To test the predictive ability of the identified features, we train five Machine Learning classifiers to classify test scenarios as safe or unsafe.
arXiv Detail & Related papers (2022-12-15T00:52:47Z)
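A minimal sketch of the safe/unsafe classification step, using one scikit-learn classifier on synthetic, hypothetical scenario features; the ISA step and the paper's comparison of five classifiers are not reproduced:

```python
# Minimal sketch of the safe/unsafe classification step, assuming scenario
# features have already been extracted into a table.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Hypothetical features: [initial gap, relative speed, curvature, occlusion]
X = rng.random((500, 4))
y = (X[:, 0] - 0.5 * X[:, 1] < 0.2).astype(int)  # 1 = unsafe (toy labelling rule)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```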
- Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles [86.9067793493874]
We propose efficient mechanisms to characterize and generate testing scenarios using a state-of-the-art driving simulator.
We use our method to characterize real driving data from the Next Generation Simulation (NGSIM) project.
We rank the scenarios by defining metrics based on the complexity of avoiding accidents and provide insights into how the AV could have minimized the probability of incurring an accident.
arXiv Detail & Related papers (2021-03-12T17:00:23Z)
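The paper's exact metrics are not given here; as one plausible example of a criticality metric of this kind, the following ranks scenarios by the constant deceleration the ego vehicle would need to avoid colliding with a slower lead vehicle:

```python
# Sketch of one plausible criticality metric (not necessarily the paper's):
# the constant deceleration the ego vehicle needs to match a slower lead
# vehicle's speed within the current gap. Higher required deceleration means
# the accident is harder to avoid, so scenarios can be ranked by this value.

def required_deceleration(gap: float, ego_speed: float, lead_speed: float) -> float:
    """Deceleration [m/s^2] needed to reduce ego speed to the lead vehicle's
    speed within the current gap (0.0 if the ego is not closing in)."""
    closing_speed = ego_speed - lead_speed
    if closing_speed <= 0.0 or gap <= 0.0:
        return 0.0
    return closing_speed ** 2 / (2.0 * gap)

scenarios = {"A": (25.0, 20.0, 10.0), "B": (10.0, 15.0, 5.0), "C": (40.0, 18.0, 17.0)}
ranked = sorted(scenarios, key=lambda s: required_deceleration(*scenarios[s]), reverse=True)
print(ranked)  # most critical scenario first
```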
- AdvSim: Generating Safety-Critical Scenarios for Self-Driving Vehicles [76.46575807165729]
We propose AdvSim, an adversarial framework to generate safety-critical scenarios for any LiDAR-based autonomy system.
By simulating directly from sensor data, we obtain adversarial scenarios that are safety-critical for the full autonomy stack.
arXiv Detail & Related papers (2021-01-16T23:23:12Z)
- Testing the Safety of Self-driving Vehicles by Simulating Perception and Prediction [88.0416857308144]
We propose an alternative to sensor simulation, as sensor simulation is expensive and has large domain gaps.
We directly simulate the outputs of the self-driving vehicle's perception and prediction system, enabling realistic motion planning testing.
arXiv Detail & Related papers (2020-08-13T17:20:02Z)
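The paper's simulator is not reproduced; a minimal sketch of the general idea, perturbing ground-truth object states with noise and drop-outs to emulate perception output for the planner under test (all parameters are illustrative):

```python
# Sketch only (assumed details, not the paper's simulator): instead of
# rendering sensor data, perturb ground-truth object states with noise and
# missed detections, and feed the result straight into the motion planner.

import random
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    x: float         # position [m], ego frame
    y: float
    velocity: float  # [m/s]

def simulate_perception(ground_truth: List[Detection],
                        pos_noise: float = 0.3,
                        miss_rate: float = 0.05) -> List[Detection]:
    """Emulate noisy, occasionally missing detections from ground truth."""
    output = []
    for obj in ground_truth:
        if random.random() < miss_rate:  # missed detection
            continue
        output.append(Detection(obj.x + random.gauss(0.0, pos_noise),
                                obj.y + random.gauss(0.0, pos_noise),
                                obj.velocity))
    return output
```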
- Towards Automated Safety Coverage and Testing for Autonomous Vehicles with Reinforcement Learning [0.3683202928838613]
Validation puts the autonomous vehicle system to the test in scenarios or situations that the system would likely encounter in everyday driving.
We propose using reinforcement learning (RL) to generate failure examples and unexpected traffic situations for the AV software implementation.
arXiv Detail & Related papers (2020-05-22T19:00:38Z)
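A hedged sketch of how such an RL setup could be rewarded (not the paper's implementation): a challenger policy earns reward for shrinking the ego vehicle's safety margin or provoking collisions and rule violations; `policy`, `reset_simulator`, and `step_simulator` are hypothetical hooks:

```python
# Sketch of the reward-shaping idea only: an RL agent controls a challenger
# vehicle and is rewarded for driving the AV stack toward failures.

def adversary_reward(min_distance: float, collision: bool,
                     rule_violation: bool) -> float:
    """Reward the challenger for near misses, collisions, or forcing the
    ego vehicle into a traffic-rule violation."""
    reward = -min_distance  # closer approaches earn more reward
    if collision:
        reward += 100.0
    if rule_violation:
        reward += 50.0
    return reward

def rollout(policy, reset_simulator, step_simulator, horizon: int = 200) -> float:
    """One episode: the policy picks challenger actions; returns total reward.
    `reset_simulator` and `step_simulator` are hypothetical simulator hooks."""
    total = 0.0
    observation = reset_simulator()
    for _ in range(horizon):
        action = policy(observation)
        observation, min_distance, collision, violation = step_simulator(action)
        total += adversary_reward(min_distance, collision, violation)
        if collision:
            break
    return total
```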
- Interpretable Safety Validation for Autonomous Vehicles [44.44006029119672]
This work describes an approach for finding interpretable failures of an autonomous system.
The failures are described by signal temporal logic (STL) expressions that can be understood by a human.
arXiv Detail & Related papers (2020-04-14T21:11:43Z)
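As an illustration of the interpretability claim (the concrete formula is invented for this example, not taken from the paper), an STL failure description might read F_[0,5] (distance < 2 AND speed > 8): "within 5 s the gap drops below 2 m while the ego still exceeds 8 m/s". A naive Boolean evaluation over a sampled trace:

```python
# Illustrative only: a naive check of the example formula
#   F_[0,5] (distance < 2.0  AND  speed > 8.0)
# over a discretized trace. Real STL tools also compute quantitative
# robustness, which is omitted here.

from typing import List, Tuple

def eventually_close_and_fast(trace: List[Tuple[float, float, float]],
                              horizon: float = 5.0) -> bool:
    """trace: list of (time [s], distance [m], speed [m/s]) samples."""
    return any(t <= horizon and d < 2.0 and v > 8.0 for t, d, v in trace)

trace = [(0.0, 12.0, 10.0), (2.5, 5.0, 9.5), (4.0, 1.5, 9.0)]
print(eventually_close_and_fast(trace))  # True -> the failure formula is satisfied
```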
- Efficient statistical validation with edge cases to evaluate Highly Automated Vehicles [6.198523595657983]
The wide-scale deployment of Autonomous Vehicles seems imminent, despite many safety challenges that have yet to be resolved.
Existing standards focus on deterministic processes where the validation requires only a set of test cases that cover the requirements.
This paper presents a new approach to compute the statistical characteristics of a system's behaviour by biasing automatically generated test cases towards the worst case scenarios.
arXiv Detail & Related papers (2020-03-04T04:35:22Z)
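The paper's biasing mechanism is not reproduced; a minimal importance-sampling-style sketch of the general idea: draw scenario parameters from a proposal distribution skewed toward edge cases, then reweight by the likelihood ratio so the estimate still refers to the nominal traffic distribution (distributions and failure threshold are illustrative):

```python
# Sketch of the general idea only: sample scenario parameters from a proposal
# biased toward the worst cases, then reweight outcomes by the likelihood
# ratio to estimate the failure probability under nominal traffic.

import numpy as np
from scipy.stats import norm

def is_failure(time_gap: float) -> bool:
    return time_gap < 0.4            # toy criterion: dangerously small time gap [s]

nominal = norm(loc=1.8, scale=0.5)   # time gaps seen in normal traffic
proposal = norm(loc=0.6, scale=0.3)  # biased toward the worst cases

samples = proposal.rvs(size=20_000, random_state=0)
weights = nominal.pdf(samples) / proposal.pdf(samples)
failure_prob = np.mean(weights * np.array([is_failure(s) for s in samples]))
print(f"estimated failure probability under nominal traffic: {failure_prob:.2e}")
```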
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.