Learning to Collide: An Adaptive Safety-Critical Scenarios Generating Method
- URL: http://arxiv.org/abs/2003.01197v3
- Date: Thu, 23 Jul 2020 02:08:13 GMT
- Title: Learning to Collide: An Adaptive Safety-Critical Scenarios Generating Method
- Authors: Wenhao Ding, Baiming Chen, Minjun Xu, Ding Zhao
- Abstract summary: We propose a generative framework to create safety-critical scenarios for evaluating task algorithms.
We demonstrate that the proposed framework generates safety-critical scenarios more efficiently than grid search or human design methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Long-tail and rare event problems become crucial when autonomous driving
algorithms are applied in the real world. For the purpose of evaluating systems
in challenging settings, we propose a generative framework to create
safety-critical scenarios for evaluating specific task algorithms. We first
represent the traffic scenarios with a series of autoregressive building blocks
and generate diverse scenarios by sampling from the joint distribution of these
blocks. We then train the generative model as an agent (or a generator) to
investigate the risky distribution parameters for a given driving algorithm
being evaluated. We regard the task algorithm as an environment (or a
discriminator) that returns a reward to the agent when a risky scenario is
generated. Through the experiments conducted on several scenarios in the
simulation, we demonstrate that the proposed framework generates
safety-critical scenarios more efficiently than grid search or human design
methods. Another advantage of this method is its adaptiveness to different routes and to the parameters of the algorithm under evaluation.
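The agent-environment loop described in the abstract can be sketched as a simple policy-gradient (REINFORCE-style) training loop. The one-dimensional Gaussian generator, the placeholder driving-algorithm rollout, and the risk reward below are illustrative assumptions, not the paper's actual parameterization:

```python
import math
import random

def rollout_driving_algorithm(scenario_params):
    """Placeholder for the task algorithm under evaluation.

    Returns a risk score in [0, 1]; higher means closer to a collision.
    Here we fake it with a narrow bump around a 'risky' parameter region.
    """
    risky_center = 0.7
    return math.exp(-50.0 * (scenario_params[0] - risky_center) ** 2)

class GaussianGenerator:
    """One-dimensional Gaussian policy over a single scenario parameter."""

    def __init__(self, mean=0.0, std=0.3, lr=0.05):
        self.mean, self.std, self.lr = mean, std, lr

    def sample(self):
        return [random.gauss(self.mean, self.std)]

    def update(self, params, reward):
        # REINFORCE: d/d(mean) log N(x; mean, std) = (x - mean) / std^2
        grad = (params[0] - self.mean) / (self.std ** 2)
        self.mean += self.lr * reward * grad

random.seed(0)
gen = GaussianGenerator()
for step in range(2000):
    params = gen.sample()
    reward = rollout_driving_algorithm(params)  # risky scenario -> high reward
    gen.update(params, reward)

# The generator's mean should drift toward the risky region near 0.7.
print(round(gen.mean, 2))
```

The key structural point mirrors the abstract: the generator never sees the internals of the driving algorithm, only a scalar risk reward, so the same loop adapts to whatever algorithm is plugged into the rollout.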
Related papers
- SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z)
- Safety-Critical Scenario Generation Via Reinforcement Learning Based Editing [20.99962858782196]
We propose a deep reinforcement learning approach that generates safety-critical scenarios by sequential editing.
Our framework employs a reward function consisting of both risk and plausibility objectives.
Our evaluation demonstrates that the proposed method generates safety-critical scenarios of higher quality compared with previous approaches.
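The combined risk-and-plausibility reward this summary mentions can be illustrated as a minimal weighted sum; the weights and the assumption that both components are normalized to [0, 1] are hypothetical, not taken from the paper:

```python
def scenario_reward(risk, plausibility, w_risk=0.8, w_plaus=0.2):
    """Combine a risk objective and a plausibility objective into one
    scalar reward for the scenario-editing agent. Both components are
    assumed to be normalized to [0, 1]."""
    return w_risk * risk + w_plaus * plausibility

# A near-collision scenario that is also plausible scores highest.
print(scenario_reward(risk=0.9, plausibility=0.8))  # 0.88
```

The weighting is the crux of such designs: risk alone rewards physically implausible attacks, while plausibility alone rewards benign traffic.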
arXiv Detail & Related papers (2023-06-25T05:15:25Z)
- Safe Multi-agent Learning via Trapping Regions [89.24858306636816]
We apply the concept of trapping regions, known from qualitative theory of dynamical systems, to create safety sets in the joint strategy space for decentralized learning.
We propose a binary partitioning algorithm for verification that candidate sets form trapping regions in systems with known learning dynamics, and a sampling algorithm for scenarios where learning dynamics are not known.
arXiv Detail & Related papers (2023-02-27T14:47:52Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Generating Useful Accident-Prone Driving Scenarios via a Learned Traffic Prior [135.78858513845233]
STRIVE is a method to automatically generate challenging scenarios that cause a given planner to produce undesirable behavior, like collisions.
To maintain scenario plausibility, the key idea is to leverage a learned model of traffic motion in the form of a graph-based conditional VAE.
A subsequent optimization is used to find a "solution" to the scenario, ensuring it is useful to improve the given planner.
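The latent-space search this summary describes, optimizing against a planner until it misbehaves, can be sketched abstractly. The decoder, the planner metric, and the collision loss below are all toy placeholders standing in for STRIVE's learned traffic VAE and real planner:

```python
# Toy sketch of latent-space adversarial optimization: search a learned
# latent space for a scenario that closes the ego-adversary gap.

def decode(z):
    """Stand-in for a traffic-model (VAE) decoder: latent -> adversary position."""
    return 0.5 * z

def ego_gap(adversary_pos, ego_pos=1.0):
    """Stand-in planner metric: gap between ego and adversary vehicles."""
    return abs(ego_pos - adversary_pos)

def collision_loss(z):
    """Squared gap; minimizing it pushes the scenario toward a collision."""
    return (1.0 - decode(z)) ** 2

# Finite-difference gradient descent in the latent space.
z, lr, eps = 0.0, 0.5, 1e-4
for _ in range(100):
    grad = (collision_loss(z + eps) - collision_loss(z)) / eps
    z -= lr * grad

print(round(ego_gap(decode(z)), 3))  # gap driven toward 0 (a collision)
```

Optimizing in the latent space rather than over raw trajectories is what keeps the generated scenario plausible: every candidate the optimizer visits is a decoder output, i.e. a sample the traffic model considers realistic.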
arXiv Detail & Related papers (2021-12-09T18:03:27Z)
- Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles [86.9067793493874]
We propose efficient mechanisms to characterize and generate testing scenarios using a state-of-the-art driving simulator.
We use our method to characterize real driving data from the Next Generation Simulation (NGSIM) project.
We rank the scenarios by defining metrics based on the complexity of avoiding accidents and provide insights into how the AV could have minimized the probability of incurring an accident.
arXiv Detail & Related papers (2021-03-12T17:00:23Z)
- Discovering Avoidable Planner Failures of Autonomous Vehicles using Counterfactual Analysis in Behaviorally Diverse Simulation [16.86782673205523]
We introduce a planner testing framework that leverages recent progress in simulating behaviorally diverse traffic participants.
We show that our method can indeed find a wide range of critical planner failures.
arXiv Detail & Related papers (2020-11-24T09:44:23Z)
- Efficient falsification approach for autonomous vehicle validation using a parameter optimisation technique based on reinforcement learning [6.198523595657983]
The wide-scale deployment of Autonomous Vehicles (AV) appears to be imminent despite many safety challenges that are yet to be resolved.
The uncertainties in the behaviour of the traffic participants and the dynamic world cause reactions in advanced autonomous systems.
This paper presents an efficient falsification method to evaluate the System Under Test.
arXiv Detail & Related papers (2020-11-16T02:56:13Z)
- Multimodal Safety-Critical Scenarios Generation for Decision-Making Algorithms Evaluation [23.43175124406634]
Existing neural network-based autonomous systems are shown to be vulnerable against adversarial attacks.
We propose a flow-based multimodal safety-critical scenario generator for evaluating decision-making algorithms.
We evaluate six Reinforcement Learning algorithms with our generated traffic scenarios and provide empirical conclusions about their robustness.
arXiv Detail & Related papers (2020-09-16T15:16:43Z)
- Risk-Sensitive Sequential Action Control with Multi-Modal Human Trajectory Forecasting for Safe Crowd-Robot Interaction [55.569050872780224]
We present an online framework for safe crowd-robot interaction based on risk-sensitive optimal control, wherein the risk is modeled by the entropic risk measure.
Our modular approach decouples the crowd-robot interaction into learning-based prediction and model-based control.
A simulation study and a real-world experiment show that the proposed framework can accomplish safe and efficient navigation while avoiding collisions with more than 50 humans in the scene.
arXiv Detail & Related papers (2020-09-12T02:02:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.