Reactive and Safe Road User Simulations using Neural Barrier
Certificates
- URL: http://arxiv.org/abs/2109.06689v1
- Date: Tue, 14 Sep 2021 13:45:37 GMT
- Title: Reactive and Safe Road User Simulations using Neural Barrier
Certificates
- Authors: Yue Meng, Zengyi Qin, Chuchu Fan
- Abstract summary: We propose a reactive agent model that ensures safety without compromising the original purposes.
Our learned road user simulation models can achieve a significant improvement in safety.
Our learned reactive agents are shown to generalize better to unseen traffic conditions.
- Score: 9.961324632236499
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Reactive and safe agent modeling is important for modern traffic
simulator design and safe planning applications. In this work, we propose a
reactive agent model which can ensure safety without compromising the original
purposes, by learning only high-level decisions from expert data and a
low-level decentralized controller guided by the jointly learned decentralized
barrier certificates. Empirical results show that our learned road user
simulation models achieve a significant improvement in safety compared to
state-of-the-art imitation learning and pure control-based methods, while
remaining similar to human agents, with smaller errors relative to the expert
data. Moreover, our learned reactive agents are shown to generalize better to
unseen traffic conditions and to react better to other road users, and can
therefore help in understanding challenging planning problems pragmatically.
Related papers
- Improving Agent Behaviors with RL Fine-tuning for Autonomous Driving [17.27549891731047]
We improve the reliability of agent behaviors by closed-loop fine-tuning of behavior models with reinforcement learning.
Our method demonstrates improved overall performance, as well as improved targeted metrics such as collision rate.
We present a novel policy evaluation benchmark to directly assess the ability of simulated agents to measure the quality of autonomous vehicle planners.
arXiv Detail & Related papers (2024-09-26T23:40:33Z)
- Traffic expertise meets residual RL: Knowledge-informed model-based residual reinforcement learning for CAV trajectory control [1.5361702135159845]
This paper introduces a knowledge-informed model-based residual reinforcement learning framework.
It integrates traffic expert knowledge into a virtual environment model, employing the Intelligent Driver Model (IDM) for basic dynamics and neural networks for residual dynamics.
We propose a novel strategy that combines traditional control methods with residual RL, facilitating efficient learning and policy optimization without the need to learn from scratch (a short IDM sketch follows this entry).
arXiv Detail & Related papers (2024-08-30T16:16:57Z)
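Since the entry above uses the Intelligent Driver Model for its base dynamics, a compact reference implementation of the standard IDM acceleration law is sketched below. This is the widely published IDM formula with typical parameter values, not code from the paper; in the residual framework, a neural network would learn corrections on top of this base model.

```python
import math

def idm_acceleration(v, v_lead, gap, v0=30.0, T=1.5, a_max=1.0,
                     b=1.5, s0=2.0, delta=4.0):
    """Standard IDM: follower acceleration (m/s^2) given its speed v (m/s),
    the leader's speed v_lead (m/s), and the bumper-to-bumper gap (m).
    v0: desired speed, T: time headway, a_max: max acceleration,
    b: comfortable braking, s0: minimum gap, delta: free-flow exponent."""
    dv = v - v_lead  # closing speed (positive when approaching the leader)
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / max(gap, 1e-6)) ** 2)
```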
- SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z)
- ConBaT: Control Barrier Transformer for Safe Policy Learning [26.023275758215423]
Control Barrier Transformer (ConBaT) is an approach that learns safe behaviors from demonstrations in a self-supervised fashion.
During deployment, we employ a lightweight online optimization to find actions that ensure future states lie within the learned safe set.
arXiv Detail & Related papers (2023-03-07T20:04:28Z)
- Imitation Is Not Enough: Robustifying Imitation with Reinforcement Learning for Challenging Driving Scenarios [147.16925581385576]
We show how imitation learning combined with reinforcement learning can substantially improve the safety and reliability of driving policies.
We train a policy on over 100k miles of urban driving data, and measure its effectiveness in test scenarios grouped by different levels of collision likelihood.
arXiv Detail & Related papers (2022-12-21T23:59:33Z)
- Safety Correction from Baseline: Towards the Risk-aware Policy in Robotics via Dual-agent Reinforcement Learning [64.11013095004786]
We propose a dual-agent safe reinforcement learning strategy consisting of a baseline and a safe agent.
Such a decoupled framework enables high flexibility, data efficiency and risk-awareness for RL-based control.
The proposed method outperforms the state-of-the-art safe RL algorithms on difficult robot locomotion and manipulation tasks.
arXiv Detail & Related papers (2022-12-14T03:11:25Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose the Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection (a gradient-projection sketch follows this entry).
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
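To make the safety-projection half of USL concrete, here is a hypothetical gradient-based sketch: a proposed action is iteratively moved down the gradient of a learned constraint-cost model until the predicted cost drops below a threshold. The `cost_model` interface, step size, and iteration budget are assumptions for illustration, not the paper's exact procedure.

```python
import torch

def project_action(cost_model, state, action, threshold=0.0, lr=0.05, steps=20):
    """Nudge `action` toward the predicted-safe region of a learned
    state-action cost model (hypothetical interface:
    cost_model(state, action) -> scalar predicted constraint cost)."""
    a = action.clone().detach()
    for _ in range(steps):
        a.requires_grad_(True)
        cost = cost_model(state, a)
        if cost.item() <= threshold:  # predicted safe: stop projecting
            break
        grad, = torch.autograd.grad(cost, a)
        a = (a - lr * grad).detach()  # gradient step toward lower cost
    return a.detach()
```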
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach makes it possible to efficiently evaluate safety properties of decision-making models in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
- Neural Bridge Sampling for Evaluating Safety-Critical Autonomous Systems [34.945482759378734]
We employ a probabilistic approach to safety evaluation in simulation, where we are concerned with computing the probability of dangerous events.
We develop a novel rare-event simulation method that combines exploration, exploitation, and optimization techniques to find failure modes and estimate their rate of occurrence (a naive Monte Carlo baseline is sketched after this entry for contrast).
arXiv Detail & Related papers (2020-08-24T17:46:27Z)
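For context on the estimation problem this entry targets, the naive Monte Carlo baseline below shows what is being computed; plain sampling needs enormous sample counts when failure rates are tiny, which is exactly why specialized rare-event methods such as neural bridge sampling exist. The function names are placeholders, not the paper's API.

```python
import random

def mc_failure_probability(simulate_episode, is_failure, n=100_000, seed=0):
    """Naive Monte Carlo estimate of a failure probability.
    simulate_episode(rng) -> outcome; is_failure(outcome) -> bool.
    For rates near 1e-6, n must be on the order of 1e8+ before the
    estimate has small relative error, motivating rare-event methods."""
    rng = random.Random(seed)
    failures = sum(is_failure(simulate_episode(rng)) for _ in range(n))
    return failures / n
```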
- Safe Reinforcement Learning via Curriculum Induction [94.67835258431202]
In safety-critical applications, autonomous agents may need to learn in an environment where mistakes can be very costly.
Existing safe reinforcement learning methods make an agent rely on priors that let it avoid dangerous situations.
This paper presents an alternative approach inspired by human teaching, where an agent learns under the supervision of an automatic instructor.
arXiv Detail & Related papers (2020-06-22T10:48:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences arising from its use.