Safety-Critical Scenario Generation Via Reinforcement Learning Based Editing
- URL: http://arxiv.org/abs/2306.14131v3
- Date: Wed, 6 Mar 2024 21:42:22 GMT
- Title: Safety-Critical Scenario Generation Via Reinforcement Learning Based Editing
- Authors: Haolan Liu, Liangjun Zhang, Siva Kumar Sastry Hari, Jishen Zhao
- Abstract summary: We propose a deep reinforcement learning approach that generates safety-critical scenarios by sequential editing.
Our framework employs a reward function consisting of both risk and plausibility objectives.
Our evaluation demonstrates that the proposed method generates safety-critical scenarios of higher quality compared with previous approaches.
- Score: 20.99962858782196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating safety-critical scenarios is essential for testing and verifying
the safety of autonomous vehicles. Traditional optimization techniques suffer
from the curse of dimensionality and limit the search space to fixed parameter
spaces. To address these challenges, we propose a deep reinforcement learning
approach that generates scenarios by sequential editing, such as adding new
agents or modifying the trajectories of the existing agents. Our framework
employs a reward function consisting of both risk and plausibility objectives.
The plausibility objective leverages generative models, such as a variational
autoencoder, to learn the likelihood of the generated parameters from the
training datasets; it penalizes the generation of unlikely scenarios. Our
approach overcomes the dimensionality challenge and explores a wide range of
safety-critical scenarios. Our evaluation demonstrates that the proposed method
generates safety-critical scenarios of higher quality compared with previous
approaches.
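To make the reward design concrete, here is a minimal sketch (not the authors' implementation) of how the risk and plausibility objectives might be combined. It assumes a pretrained VAE exposing `encode`/`decode` methods over flattened scenario parameters and a simulator-derived risk score; the function names and weighting coefficients are hypothetical.

```python
import torch
import torch.nn.functional as F

def plausibility_score(vae, params: torch.Tensor) -> torch.Tensor:
    """ELBO, a lower bound on the log-likelihood of the edited scenario
    parameters under a VAE trained on real driving data; implausible
    parameter settings receive a low score."""
    # `vae` is an assumed pretrained model with encode/decode methods.
    mu, logvar = vae.encode(params)                          # approximate posterior q(z|x)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
    recon = vae.decode(z)                                    # reconstruction of the parameters
    recon_ll = -F.mse_loss(recon, params, reduction="sum")   # Gaussian log-likelihood up to a constant
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())  # KL(q(z|x) || N(0, I))
    return recon_ll - kl

def edit_reward(risk: float, vae, params: torch.Tensor,
                w_risk: float = 1.0, w_plaus: float = 0.1) -> torch.Tensor:
    """Reward for one editing step (e.g. adding an agent or modifying a
    trajectory): encourage risky scenarios while penalizing edits the
    generative model considers unlikely."""
    return w_risk * risk + w_plaus * plausibility_score(vae, params)
```

In this sketch, the risk term would come from the simulator (for example, a small time-to-collision for the ego vehicle), while the ELBO-based plausibility term penalizes edits the generative model deems unlikely, matching the abstract's description of the reward.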
Related papers
- SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z)
- Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis [63.532413807686524]
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL).
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z)
- Bridging Data-Driven and Knowledge-Driven Approaches for Safety-Critical Scenario Generation in Automated Vehicle Validation [5.063522035689929]
Automated driving vehicles (ADVs) promise to enhance driving efficiency and safety, yet they face challenges in safety-critical scenarios.
This paper investigates the complexities of employing two major scenario-generation solutions: data-driven and knowledge-driven methods.
We introduce BridgeGen, a safety-critical scenario generation framework, designed to bridge the benefits of both solutions.
arXiv Detail & Related papers (2023-11-18T02:11:14Z)
- Risk-Averse Model Uncertainty for Distributionally Robust Safe Reinforcement Learning [3.9821399546174825]
We introduce a deep reinforcement learning framework for safe decision making in uncertain environments.
We provide robustness guarantees for this framework by showing it is equivalent to a specific class of distributionally robust safe reinforcement learning problems.
In experiments on continuous control tasks with safety constraints, we demonstrate that our framework produces robust performance and safety at deployment time across a range of perturbed test environments.
arXiv Detail & Related papers (2023-01-30T00:37:06Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Meta-Learning Priors for Safe Bayesian Optimization [72.8349503901712]
We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity.
As a core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner.
On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches.
arXiv Detail & Related papers (2022-10-03T08:38:38Z)
- Benchmarking Safe Deep Reinforcement Learning in Aquatic Navigation [78.17108227614928]
We propose a benchmark environment for Safe Reinforcement Learning focusing on aquatic navigation.
We consider value-based and policy-gradient Deep Reinforcement Learning (DRL) approaches.
We also propose a verification strategy that checks the behavior of the trained models over a set of desired properties.
arXiv Detail & Related papers (2021-12-16T16:53:56Z)
- Lyapunov-based uncertainty-aware safe reinforcement learning [0.0]
Reinforcement learning (RL) has shown promising performance in learning optimal policies for a variety of sequential decision-making tasks.
In many real-world RL problems, besides optimizing the main objectives, the agent is expected to satisfy a certain level of safety.
We propose a Lyapunov-based uncertainty-aware safe RL model to address these limitations.
arXiv Detail & Related papers (2021-07-29T13:08:15Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach allows safety properties of decision-making models to be evaluated efficiently in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
- Multimodal Safety-Critical Scenarios Generation for Decision-Making Algorithms Evaluation [23.43175124406634]
Existing neural network-based autonomous systems are shown to be vulnerable to adversarial attacks.
We propose a flow-based multimodal safety-critical scenario generator for evaluating decision-making algorithms.
We evaluate six Reinforcement Learning algorithms with our generated traffic scenarios and provide empirical conclusions about their robustness.
arXiv Detail & Related papers (2020-09-16T15:16:43Z)
- Learning to Collide: An Adaptive Safety-Critical Scenarios Generating Method [20.280573307366627]
We propose a generative framework to create safety-critical scenarios for evaluating task algorithms.
We demonstrate that the proposed framework generates safety-critical scenarios more efficiently than grid search or human design methods.
arXiv Detail & Related papers (2020-03-02T21:26:03Z)