Interpretable Safety Validation for Autonomous Vehicles
- URL: http://arxiv.org/abs/2004.06805v2
- Date: Fri, 26 Jun 2020 15:29:46 GMT
- Title: Interpretable Safety Validation for Autonomous Vehicles
- Authors: Anthony Corso and Mykel J. Kochenderfer
- Abstract summary: This work describes an approach for finding interpretable failures of an autonomous system.
The failures are described by signal temporal logic expressions that can be understood by a human.
- Score: 44.44006029119672
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An open problem for autonomous driving is how to validate the safety of an
autonomous vehicle in simulation. Automated testing procedures can find
failures of an autonomous system, but these failures may be difficult to
interpret due to their high dimensionality and may be so unlikely as to not be
important. This work describes an approach for finding interpretable failures
of an autonomous system. The failures are described by signal temporal logic
expressions that can be understood by a human, and are optimized to produce
failures that have high likelihood. Our methodology is demonstrated for the
safety validation of an autonomous vehicle in the context of an unprotected
left turn and a crosswalk with a pedestrian. Compared to a baseline importance
sampling approach, our methodology finds more failures with higher likelihood
while retaining interpretability.
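The abstract does not include code, but the core idea of scoring a simulated trajectory against a signal temporal logic (STL) expression can be made concrete with a small sketch. The snippet below computes the robustness of a formula of the form "eventually the ego-pedestrian distance drops below d_min"; the function name, distances, and threshold are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not from the paper): evaluating an STL-style failure
# description on a simulated trajectory. An episode counts as a failure when
# the ego-pedestrian distance eventually drops below d_min, i.e. the formula
# F_[0,T](d(t) < d_min) is satisfied.

def rho_eventually_below(distances, d_min):
    """Robustness of F_[0,T](d(t) < d_min): positive iff satisfied.

    For 'eventually d < d_min', robustness is the max over t of (d_min - d(t)).
    """
    return max(d_min - d for d in distances)

# Example: ego-pedestrian distances (meters) over a simulated crosswalk episode.
distances = [12.0, 8.5, 5.2, 3.1, 1.4, 2.8, 6.0]
d_min = 2.0

rho = rho_eventually_below(distances, d_min)
print(f"robustness = {rho:.2f} -> {'failure found' if rho > 0 else 'safe'}")
```

In the paper's approach, the STL expressions describe environment disturbances in human-readable terms, and their parameters are further optimized so that the failures they induce have high likelihood.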
Related papers
- Exploring the Causality of End-to-End Autonomous Driving [57.631400236930375]
We propose a comprehensive approach to explore and analyze the causality of end-to-end autonomous driving.
Our work is the first to unveil the mystery of end-to-end autonomous driving and turn the black box into a white one.
arXiv Detail & Related papers (2024-07-09T04:56:11Z)
- Rigorous Simulation-based Testing for Autonomous Driving Systems -- Targeting the Achilles' Heel of Four Open Autopilots [6.229766691427486]
We propose a rigorous test method based on breaking down scenarios into simple ones.
We generate test cases for critical configurations that place the vehicle under test in critical situations.
Test cases reveal major defects in Apollo, Autoware, and the Carla and LGSVL autopilots.
arXiv Detail & Related papers (2024-05-27T08:06:21Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- Automatic AI controller that can drive with confidence: steering vehicle with uncertainty knowledge [3.131134048419781]
This research focuses on the development of a vehicle's lateral control system using a machine learning framework.
We employ a Bayesian Neural Network (BNN), a probabilistic learning model, to address uncertainty quantification.
By establishing a confidence threshold, we can trigger manual intervention, ensuring that the algorithm relinquishes control when it operates outside of safe parameters.
arXiv Detail & Related papers (2024-04-24T23:22:37Z)
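The uncertainty-gated control idea in the entry above lends itself to a short sketch. The code below is a hypothetical illustration, not the paper's implementation: a stand-in for repeated stochastic forward passes of a Bayesian neural network yields a steering estimate and spread, and control is handed to a human when the spread exceeds a threshold (`sigma_max` is an assumed parameter).

```python
# Hypothetical sketch of uncertainty-gated lateral control: a probabilistic
# model yields a steering prediction plus an uncertainty estimate; when the
# uncertainty exceeds a confidence threshold, control is handed to a human.
import statistics

def steering_samples(observation):
    # Stand-in for repeated stochastic forward passes through a BNN
    # (e.g., Monte Carlo dropout). Here: fixed dummy samples.
    return [0.11, 0.14, 0.09, 0.13, 0.12]

def control_step(observation, sigma_max=0.05):
    samples = steering_samples(observation)
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    if sigma > sigma_max:
        return None, "manual intervention requested"
    return mean, "autonomous"

cmd, mode = control_step(observation=None)
print(cmd, mode)
```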
- Self-Aware Trajectory Prediction for Safe Autonomous Driving [9.868681330733764]
Trajectory prediction is one of the key components of the autonomous driving software stack.
In this paper, a self-aware trajectory prediction method is proposed.
The proposed method performed well in terms of self-awareness, memory footprint, and real-time performance.
arXiv Detail & Related papers (2023-05-16T03:53:23Z)
- Adaptive Failure Search Using Critical States from Domain Experts [9.93890332477992]
Failure search may be done by logging substantial vehicle miles in either simulation or real-world testing.
Adaptive stress testing (AST) is one such method, posing the problem of failure search as a Markov decision process.
We show that the incorporation of critical states into the AST framework generates failure scenarios with increased safety violations.
arXiv Detail & Related papers (2023-04-01T18:14:41Z)
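The AST formulation mentioned above (failure search as a Markov decision process) can be sketched in a few lines. Everything below is an illustrative assumption -- a Gaussian disturbance model, a toy simulator, and a reward that trades off disturbance likelihood against a failure bonus -- not the cited paper's code.

```python
# Illustrative sketch of adaptive stress testing (AST) as an MDP: at each
# step the adversary picks an environment disturbance; the reward favors
# likely disturbances and pays a bonus when a failure occurs.
import math
import random

def log_likelihood(disturbance, sigma=1.0):
    # Gaussian disturbance model (an assumption for this sketch).
    return -0.5 * (disturbance / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def rollout(simulator_step, horizon=50, failure_bonus=100.0):
    total_reward, trajectory = 0.0, []
    for _ in range(horizon):
        disturbance = random.gauss(0.0, 1.0)   # adversary's action
        failed = simulator_step(disturbance)   # advance the simulator
        total_reward += log_likelihood(disturbance)
        trajectory.append(disturbance)
        if failed:
            return total_reward + failure_bonus, trajectory
    return total_reward, trajectory  # no failure found this episode

# Dummy simulator: "fails" when the cumulative disturbance drifts past a bound.
state = {"x": 0.0}
def toy_sim(d):
    state["x"] += d
    return abs(state["x"]) > 5.0

reward, traj = rollout(toy_sim)
print(f"episode reward {reward:.1f}, {len(traj)} steps")
```

In practice the adversary's actions are chosen by a reinforcement learning or tree search policy rather than sampled at random; random sampling is used here only to keep the sketch self-contained.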
- Infrastructure-based End-to-End Learning and Prevention of Driver Failure [68.0478623315416]
FailureNet is a recurrent neural network trained end-to-end on trajectories of both nominal and reckless drivers in a scaled miniature city.
It can accurately identify control failures, upstream perception errors, and speeding drivers, distinguishing them from nominal driving.
Compared to speed or frequency-based predictors, FailureNet's recurrent neural network structure provides improved predictive power, yielding upwards of 84% accuracy when deployed on hardware.
arXiv Detail & Related papers (2023-03-21T22:55:51Z)
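As a rough illustration of the recurrent-classifier idea behind the FailureNet entry above, the sketch below runs an untrained Elman-style RNN over a trajectory of (x, y, speed) states and outputs a failure probability. The architecture, feature set, and weights are assumptions for illustration; the actual FailureNet model and training setup are described in the paper.

```python
# Hypothetical sketch of a recurrent classifier over driver trajectories:
# an Elman RNN consumes a sequence of (x, y, speed) states and outputs
# P(reckless / failure). Weights here are random and untrained.
import numpy as np

rng = np.random.default_rng(0)
H, D = 16, 3                              # hidden size, input features
Wx, Wh = rng.normal(0, 0.1, (H, D)), rng.normal(0, 0.1, (H, H))
w_out = rng.normal(0, 0.1, H)

def predict_failure_prob(trajectory):
    h = np.zeros(H)
    for state in trajectory:              # state = (x, y, speed)
        h = np.tanh(Wx @ np.asarray(state) + Wh @ h)
    logit = w_out @ h
    return 1.0 / (1.0 + np.exp(-logit))   # sigmoid

traj = [(0.0, 0.0, 2.1), (0.5, 0.1, 3.8), (1.2, 0.3, 6.5)]
print(f"P(failure) = {predict_failure_prob(traj):.3f}")
```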
- Evaluation of Pedestrian Safety in a High-Fidelity Simulation Environment Framework [21.456269382916062]
This paper proposes a pedestrian safety evaluation method for autonomous driving.
We construct a high-fidelity simulation framework embedded with pedestrian safety-critical characteristics.
The proposed simulation method and framework can be used to assess different autonomous driving algorithms.
arXiv Detail & Related papers (2022-10-17T03:53:50Z)
- Differentiable Control Barrier Functions for Vision-based End-to-End Autonomous Driving [100.57791628642624]
We introduce a safety guaranteed learning framework for vision-based end-to-end autonomous driving.
We design a learning system equipped with differentiable control barrier functions (dCBFs) that is trained end-to-end by gradient descent.
arXiv Detail & Related papers (2022-03-04T16:14:33Z)
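The differentiable control barrier function (dCBF) idea in the entry above can be illustrated with the standard CBF condition dh/dt + alpha * h(x) >= 0, written as a differentiable penalty so that safety violations backpropagate through the controller. The barrier h, the gain alpha, and the dynamics below are textbook-style assumptions, not the paper's construction.

```python
# Hedged sketch of a differentiable control barrier function: the condition
# dh/dt + alpha * h(x) >= 0 is written as a differentiable penalty, so
# violations produce gradients that flow end-to-end into the controller.
import torch

alpha = 1.0

def h(x):
    # Barrier: positive inside the safe set (e.g., distance to obstacle minus margin).
    return x[0] ** 2 + x[1] ** 2 - 1.0

def cbf_penalty(x, x_dot):
    # Chain rule: dh/dt = grad h(x) . x_dot along the controlled dynamics.
    h_dot = torch.autograd.grad(h(x), x, create_graph=True)[0] @ x_dot
    return torch.relu(-(h_dot + alpha * h(x)))  # > 0 only when the condition fails

x = torch.tensor([1.5, 0.0], requires_grad=True)
x_dot = torch.tensor([-0.2, 0.1])            # from the learned controller
print(cbf_penalty(x, x_dot))
```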
- AdvSim: Generating Safety-Critical Scenarios for Self-Driving Vehicles [76.46575807165729]
We propose AdvSim, an adversarial framework to generate safety-critical scenarios for any LiDAR-based autonomy system.
By simulating directly from sensor data, we obtain adversarial scenarios that are safety-critical for the full autonomy stack.
arXiv Detail & Related papers (2021-01-16T23:23:12Z)
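A minimal sketch of an AdvSim-style search loop: perturb the trajectories of surrounding actors, replay the scenario through a black-box simulator, and keep the perturbation that most degrades the ego vehicle's safety margin. Here random search stands in for the paper's optimizer, and `simulate_min_distance` is a hypothetical stand-in for the full LiDAR simulation pipeline.

```python
# Illustrative adversarial scenario search: find actor-trajectory waypoint
# perturbations that minimize the ego vehicle's closest-approach distance.
import random

def simulate_min_distance(perturbation):
    # Stand-in for the black-box simulator: smaller margin as the actor
    # cuts in more aggressively.
    return max(0.0, 5.0 - 3.0 * abs(sum(perturbation)))

def adversarial_search(n_waypoints=5, iters=200, scale=0.3):
    best, best_margin = None, float("inf")
    for _ in range(iters):
        delta = [random.uniform(-scale, scale) for _ in range(n_waypoints)]
        margin = simulate_min_distance(delta)
        if margin < best_margin:
            best, best_margin = delta, margin
    return best, best_margin

delta, margin = adversarial_search()
print(f"worst-case margin: {margin:.2f} m")
```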
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.