Reactive and Safe Road User Simulations using Neural Barrier
Certificates
- URL: http://arxiv.org/abs/2109.06689v1
- Date: Tue, 14 Sep 2021 13:45:37 GMT
- Title: Reactive and Safe Road User Simulations using Neural Barrier
Certificates
- Authors: Yue Meng, Zengyi Qin, Chuchu Fan
- Abstract summary: We propose a reactive agent model which can ensure safety without compromising the original purposes.
Our learned road user simulation models can achieve a significant improvement in safety.
Our learned reactive agents are shown to generalize better to unseen traffic conditions.
- Score: 9.961324632236499
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Reactive and safe agent modeling is important for today's traffic
simulator design and safe planning applications. In this work, we propose a
reactive agent model that can ensure safety without compromising the original
objectives, by learning only high-level decisions from expert data together
with a low-level decentralized controller guided by jointly learned
decentralized barrier certificates. Empirical results show that our learned
road user simulation models achieve a significant improvement in safety
compared to state-of-the-art imitation learning and pure control-based
methods, while remaining similar to human agents, with smaller errors relative
to the expert data. Moreover, our learned reactive agents generalize better to
unseen traffic conditions and react better to other road users, and can
therefore help in pragmatically understanding challenging planning problems.
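The abstract describes jointly learning decentralized barrier certificates that guide a low-level controller. A minimal sketch of the standard training objective for such a certificate is below, using hinge penalties on the three usual conditions (nonnegative on safe states, negative on unsafe states, and a discrete-time invariance/decrease condition along trajectories). The function name, the `alpha` decay rate, and the `margin` parameter are illustrative assumptions, not taken from the paper, which should be consulted for the exact formulation.

```python
import numpy as np

def barrier_certificate_loss(B_safe, B_unsafe, B_t, B_next,
                             alpha=0.1, margin=0.1):
    """Hinge penalties for three barrier-certificate conditions:
      1) B(x) >= margin on labeled safe states,
      2) B(x) <= -margin on labeled unsafe states,
      3) forward invariance along trajectories:
         B(x_{t+1}) - (1 - alpha) * B(x_t) >= margin.
    Inputs are arrays of barrier-network outputs; the loss is zero
    exactly when every condition holds with the given margin."""
    loss_safe = np.maximum(0.0, margin - B_safe).mean()
    loss_unsafe = np.maximum(0.0, margin + B_unsafe).mean()
    decrease = B_next - (1.0 - alpha) * B_t
    loss_invariance = np.maximum(0.0, margin - decrease).mean()
    return loss_safe + loss_unsafe + loss_invariance

# A certificate satisfying all conditions incurs zero loss:
ok = barrier_certificate_loss(np.array([1.0]), np.array([-1.0]),
                              np.array([0.5]), np.array([0.6]))
# A safe state with a negative barrier value is penalized:
bad = barrier_certificate_loss(np.array([-0.4]), np.array([-1.0]),
                               np.array([0.5]), np.array([0.6]))
```

In practice this loss would be minimized over the parameters of a neural barrier network (and jointly over the controller), with gradients from an autodiff framework; the numpy version here only illustrates the shape of the objective.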
Related papers
- SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a novel diffusion-based controllable closed-loop safety-critical simulation framework.
We develop a novel approach to simulate safety-critical scenarios through an adversarial term in the denoising process.
We validate our framework empirically using the NuScenes dataset, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z)
- ConBaT: Control Barrier Transformer for Safe Policy Learning [26.023275758215423]
Control Barrier Transformer (ConBaT) is an approach that learns safe behaviors from demonstrations in a self-supervised fashion.
During deployment, we employ a lightweight online optimization to find actions that ensure future states lie within the learned safe set.
arXiv Detail & Related papers (2023-03-07T20:04:28Z)
- Imitation Is Not Enough: Robustifying Imitation with Reinforcement Learning for Challenging Driving Scenarios [147.16925581385576]
We show how imitation learning combined with reinforcement learning can substantially improve the safety and reliability of driving policies.
We train a policy on over 100k miles of urban driving data, and measure its effectiveness in test scenarios grouped by different levels of collision likelihood.
arXiv Detail & Related papers (2022-12-21T23:59:33Z)
- Safety Correction from Baseline: Towards the Risk-aware Policy in Robotics via Dual-agent Reinforcement Learning [64.11013095004786]
We propose a dual-agent safe reinforcement learning strategy consisting of a baseline and a safe agent.
Such a decoupled framework enables high flexibility, data efficiency and risk-awareness for RL-based control.
The proposed method outperforms the state-of-the-art safe RL algorithms on difficult robot locomotion and manipulation tasks.
arXiv Detail & Related papers (2022-12-14T03:11:25Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [63.18590014127461]
This paper introduces a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We study the feasibility of the resulting robust safety-critical controller.
We then use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach enables efficient evaluation of safety properties for decision-making models in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
- Neural Bridge Sampling for Evaluating Safety-Critical Autonomous Systems [34.945482759378734]
We employ a probabilistic approach to safety evaluation in simulation, where we are concerned with computing the probability of dangerous events.
We develop a novel rare-event simulation method that combines exploration, exploitation, and optimization techniques to find failure modes and estimate their rate of occurrence.
arXiv Detail & Related papers (2020-08-24T17:46:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.