Scalable Autonomous Vehicle Safety Validation through Dynamic
Programming and Scene Decomposition
- URL: http://arxiv.org/abs/2004.06801v2
- Date: Fri, 26 Jun 2020 15:33:24 GMT
- Title: Scalable Autonomous Vehicle Safety Validation through Dynamic
Programming and Scene Decomposition
- Authors: Anthony Corso, Ritchie Lee, Mykel J. Kochenderfer
- Abstract summary: We present a new safety validation approach that attempts to estimate the distribution over failures of an autonomous policy using approximate dynamic programming.
In both experiments, we observed an increase in the number of failures discovered compared to baseline approaches.
- Score: 37.61747231296097
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An open question in autonomous driving is how best to use simulation to
validate the safety of autonomous vehicles. Existing techniques rely on
simulated rollouts, which can be inefficient for finding rare failure events,
while other techniques are designed to only discover a single failure. In this
work, we present a new safety validation approach that attempts to estimate the
distribution over failures of an autonomous policy using approximate dynamic
programming. Knowledge of this distribution allows for the efficient discovery
of many failure examples. To address the problem of scalability, we decompose
complex driving scenarios into subproblems consisting of only the ego vehicle
and one other vehicle. These subproblems can be solved with approximate dynamic
programming and their solutions are recombined to approximate the solution to
the full scenario. We apply our approach to a simple two-vehicle scenario to
demonstrate the technique as well as a more complex five-vehicle scenario to
demonstrate scalability. In both experiments, we observed an increase in the
number of failures discovered compared to baseline approaches.
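The abstract describes two key components: estimating per-state failure probabilities with approximate dynamic programming, and decomposing a multi-vehicle scene into ego-plus-one-vehicle subproblems whose solutions are recombined. The sketch below illustrates these ideas on a generic discretized subproblem; it is not the authors' implementation, and the `transition`, `disturbance_prob`, `is_failure`, and `is_terminal` callables as well as the max-based recombination heuristic are assumptions made purely for illustration.

```python
# Minimal sketch of (1) ADP estimation of failure probabilities on a small
# discretized subproblem and (2) a simple recombination of pairwise estimates.
# All environment details are placeholders.

import numpy as np

def failure_probability_table(n_states, n_disturbances, transition,
                              disturbance_prob, is_failure, is_terminal, horizon):
    """Backward induction over a discretized (ego + one vehicle) subproblem.

    V[s] approximates the probability that a rollout starting in state s
    reaches a failure within `horizon` steps under the nominal disturbance
    distribution.
    """
    V = np.array([1.0 if is_failure(s) else 0.0 for s in range(n_states)])
    for _ in range(horizon):
        V_next = V.copy()
        for s in range(n_states):
            if is_failure(s) or is_terminal(s):
                continue  # absorbing states keep their value
            # Expected downstream failure probability over disturbances.
            V_next[s] = sum(disturbance_prob(s, x) * V[transition(s, x)]
                            for x in range(n_disturbances))
        V = V_next
    return V

def sample_failure_rollout(rng, s0, V, transition, disturbance_prob,
                           n_disturbances, is_failure, max_steps):
    """Use the ADP estimate to steer sampling toward failure examples.

    At each step, disturbances are drawn proportionally to
    p(x) * V[next_state], i.e. the nominal likelihood reweighted by the
    estimated probability of eventually failing.
    """
    s, path = s0, [s0]
    for _ in range(max_steps):
        if is_failure(s):
            return path  # found a failure example
        weights = np.array([disturbance_prob(s, x) * V[transition(s, x)]
                            for x in range(n_disturbances)])
        if weights.sum() == 0.0:
            return None  # no failure reachable from here under the estimate
        x = rng.choice(n_disturbances, p=weights / weights.sum())
        s = transition(s, x)
        path.append(s)
    return None

def recombine_pairwise_estimates(pairwise_values):
    """Combine per-pair failure-probability estimates at a full-scenario state.

    Taking the maximum (the most dangerous pairwise interaction) is one simple
    heuristic used here for illustration; the paper's recombination scheme
    may differ.
    """
    return max(pairwise_values)
```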
Related papers
- Collision Probability Distribution Estimation via Temporal Difference Learning [0.46085106405479537]
We introduce CollisionPro, a pioneering framework designed to estimate cumulative collision probability distributions.
We formulate our framework within the context of reinforcement learning to pave the way for safety-aware agents.
A comprehensive examination of our framework is conducted using a realistic autonomous driving simulator.
arXiv Detail & Related papers (2024-07-29T13:32:42Z)
- SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z)
- Adaptive Failure Search Using Critical States from Domain Experts [9.93890332477992]
Failure search may be performed by logging substantial vehicle miles in either simulation or real-world testing.
Adaptive Stress Testing (AST) is one such method, which poses the problem of failure search as a Markov decision process.
We show that the incorporation of critical states into the AST framework generates failure scenarios with increased safety violations.
arXiv Detail & Related papers (2023-04-01T18:14:41Z)
- NeurIPS 2022 Competition: Driving SMARTS [60.948652154552136]
Driving SMARTS is a regular competition designed to tackle problems caused by the distribution shift in dynamic interaction contexts.
The proposed competition supports methodologically diverse solutions, such as reinforcement learning (RL) and offline learning methods.
arXiv Detail & Related papers (2022-11-14T17:10:53Z)
- Generating Useful Accident-Prone Driving Scenarios via a Learned Traffic Prior [135.78858513845233]
STRIVE is a method to automatically generate challenging scenarios that cause a given planner to produce undesirable behavior, like collisions.
To maintain scenario plausibility, the key idea is to leverage a learned model of traffic motion in the form of a graph-based conditional VAE.
A subsequent optimization is used to find a "solution" to the scenario, ensuring it is useful to improve the given planner.
arXiv Detail & Related papers (2021-12-09T18:03:27Z)
- Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles [86.9067793493874]
We propose efficient mechanisms to characterize and generate testing scenarios using a state-of-the-art driving simulator.
We use our method to characterize real driving data from the Next Generation Simulation (NGSIM) project.
We rank the scenarios by defining metrics based on the complexity of avoiding accidents and provide insights into how the AV could have minimized the probability of incurring an accident.
arXiv Detail & Related papers (2021-03-12T17:00:23Z)
- Discovering Avoidable Planner Failures of Autonomous Vehicles using Counterfactual Analysis in Behaviorally Diverse Simulation [16.86782673205523]
We introduce a planner testing framework that leverages recent progress in simulating behaviorally diverse traffic participants.
We show that our method can indeed find a wide range of critical planner failures.
arXiv Detail & Related papers (2020-11-24T09:44:23Z)
- Efficient falsification approach for autonomous vehicle validation using a parameter optimisation technique based on reinforcement learning [6.198523595657983]
The wide-scale deployment of Autonomous Vehicles (AVs) appears to be imminent despite many safety challenges that are yet to be resolved.
Uncertainty in the behaviour of traffic participants and the dynamic world causes reactions in advanced autonomous systems.
This paper presents an efficient falsification method to evaluate the System Under Test.
arXiv Detail & Related papers (2020-11-16T02:56:13Z)
- Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
- Towards Automated Safety Coverage and Testing for Autonomous Vehicles with Reinforcement Learning [0.3683202928838613]
Validation puts the autonomous vehicle system to the test in scenarios or situations that the system would likely encounter in everyday driving.
We propose using reinforcement learning (RL) to generate failure examples and unexpected traffic situations for the AV software implementation.
arXiv Detail & Related papers (2020-05-22T19:00:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.