CAT: Closed-loop Adversarial Training for Safe End-to-End Driving
- URL: http://arxiv.org/abs/2310.12432v1
- Date: Thu, 19 Oct 2023 02:49:31 GMT
- Title: CAT: Closed-loop Adversarial Training for Safe End-to-End Driving
- Authors: Linrui Zhang and Zhenghao Peng and Quanyi Li and Bolei Zhou
- Abstract summary: Closed-loop Adversarial Training (CAT) is a framework for safe end-to-end driving in autonomous vehicles.
CAT aims to continuously improve the safety of driving agents by training the agent on safety-critical scenarios.
CAT can effectively generate adversarial scenarios countering the agent being trained.
- Score: 54.60865656161679
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Driving safety is a top priority for autonomous vehicles. Orthogonal to prior
work that handles accident-prone traffic events through algorithm design at the policy
level, we investigate a Closed-loop Adversarial Training (CAT) framework for
safe end-to-end driving in this paper through the lens of environment
augmentation. CAT aims to continuously improve the safety of driving agents by
training the agent on safety-critical scenarios that are dynamically generated
over time. A novel resampling technique is developed to turn log-replay
real-world driving scenarios into safety-critical ones via probabilistic
factorization, where the adversarial traffic generation is modeled as the
multiplication of standard motion prediction sub-problems. Consequently, CAT
can launch more efficient physical attacks compared to existing safety-critical
scenario generation methods and incurs significantly lower computational cost
in the iterative learning pipeline. We incorporate CAT into the MetaDrive
simulator and validate our approach on hundreds of driving scenarios imported
from real-world driving datasets. Experimental results demonstrate that CAT can
effectively generate adversarial scenarios countering the agent being trained.
After training, the agent can achieve superior driving safety in both
log-replay and safety-critical traffic scenarios on the held-out test set. Code
and data are available at https://metadriverse.github.io/cat.
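The resampling described above can be read as a Bayesian factorization: the probability of an adversarial opponent trajectory is proportional to a standard motion-prediction prior multiplied by a collision term against the ego plan. Below is a minimal, illustrative sketch of that structure; it is not the released CAT implementation, and `motion_prior.sample`, `policy.plan`, `policy.train_on`, and the soft collision heuristic are hypothetical placeholders standing in for the actual models.

```python
import numpy as np

def collision_prob(adv_traj, ego_traj, radius=2.0, temperature=1.0):
    """Soft collision likelihood: close approaches to the ego plan score high.
    Placeholder heuristic; trajectories are (T, 2) arrays of x-y positions."""
    gap = max(float(np.linalg.norm(adv_traj - ego_traj, axis=-1).min()) - radius, 0.0)
    return float(np.exp(-gap / temperature))

def resample_adversarial(cand_trajs, prior_probs, ego_traj, rng):
    """p(adv | attack) proportional to p(collision | adv, ego) * p(adv):
    reweight candidates drawn from the motion-prediction prior by the
    collision term, then resample one adversarial trajectory."""
    weights = np.array([collision_prob(t, ego_traj) for t in cand_trajs]) * prior_probs
    weights = weights / weights.sum()
    return cand_trajs[rng.choice(len(cand_trajs), p=weights)]

def closed_loop_training(policy, scenarios, motion_prior, n_rounds=10, seed=0):
    """Alternate between attacking the current policy and training it on the
    resulting safety-critical scenarios (hypothetical interfaces)."""
    rng = np.random.default_rng(seed)
    for _ in range(n_rounds):
        attacks = []
        for scenario in scenarios:
            ego_traj = policy.plan(scenario)                    # current agent behavior
            trajs, probs = motion_prior.sample(scenario, k=32)  # candidate opponent futures
            attacks.append(resample_adversarial(trajs, probs, ego_traj, rng))
        policy.train_on(scenarios, attacks)                     # e.g. an RL update in the simulator
    return policy
```

In this sketch the attack reuses an off-the-shelf motion-prediction prior, so generating a safety-critical variant of a logged scenario amounts to one forward pass plus a reweighting, which is the source of the efficiency claim in the abstract.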
Related papers
- CRASH: Challenging Reinforcement-Learning Based Adversarial Scenarios For Safety Hardening [16.305837225117607]
This paper introduces CRASH - Challenging Reinforcement-learning based Adversarial scenarios for Safety Hardening.
First, CRASH can control adversarial Non-Player Character (NPC) agents in an AV simulator to automatically induce collisions with the ego vehicle.
We also propose a novel approach, that we term safety hardening, which iteratively refines the motion planner by simulating improvement scenarios against adversarial agents.
arXiv Detail & Related papers (2024-11-26T00:00:27Z)
- ReGentS: Real-World Safety-Critical Driving Scenario Generation Made Stable [88.08120417169971]
Machine learning based autonomous driving systems often face challenges with safety-critical scenarios that are rare in real-world data.
This work explores generating safety-critical driving scenarios by modifying complex real-world regular scenarios through trajectory optimization.
Our approach addresses unrealistic diverging trajectories and unavoidable collision scenarios that are not useful for training a robust planner.
arXiv Detail & Related papers (2024-09-12T08:26:33Z)
- RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z)
- Bridging Data-Driven and Knowledge-Driven Approaches for Safety-Critical Scenario Generation in Automated Vehicle Validation [5.063522035689929]
Automated driving vehicles (ADV) promise to enhance driving efficiency and safety, yet they face challenges in safety-critical scenarios.
This paper investigates the complexities of employing two major scenario-generation solutions: data-driven and knowledge-driven methods.
We introduce BridgeGen, a safety-critical scenario generation framework, designed to bridge the benefits of both solutions.
arXiv Detail & Related papers (2023-11-18T02:11:14Z)
- Generating and Explaining Corner Cases Using Learnt Probabilistic Lane Graphs [5.309950889075669]
We introduce Probabilistic Lane Graphs (PLGs) to describe a finite set of lane positions and directions in which vehicles might travel.
The structure of PLGs is learnt directly from historic traffic data.
We use reinforcement learning techniques to modify this policy to generate realistic and explainable corner case scenarios.
arXiv Detail & Related papers (2023-08-25T20:17:49Z)
- Reinforcement Learning based Cyberattack Model for Adaptive Traffic Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wirelessly connected ATSC increases cyber-attack surfaces and increases their vulnerability to various cyber-attack modes.
One such mode is a 'sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of sybil vehicle injection that creates congestion on one or more intersection approaches.
arXiv Detail & Related papers (2022-10-31T20:12:17Z)
- KING: Generating Safety-Critical Driving Scenarios for Robust Imitation via Kinematics Gradients [39.9379344872937]
Current driving simulators exhibit naïve behavior models for background traffic.
Hand-tuned scenarios are typically added during simulation to induce safety-critical situations.
We propose KING, which generates safety-critical driving scenarios with a 20% higher success rate than black-box optimization.
arXiv Detail & Related papers (2022-04-28T17:48:48Z)
- Optimizing Trajectories for Highway Driving with Offline Reinforcement Learning [11.970409518725491]
We propose a Reinforcement Learning-based approach to autonomous driving.
We compare the performance of our agent against four other highway driving agents.
We demonstrate that our offline trained agent, using randomly collected data, learns to drive smoothly, tracking the desired velocity as closely as possible, while outperforming the other agents.
arXiv Detail & Related papers (2022-03-21T13:13:08Z)
- Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles [86.9067793493874]
We propose efficient mechanisms to characterize and generate testing scenarios using a state-of-the-art driving simulator.
We use our method to characterize real driving data from the Next Generation Simulation (NGSIM) project.
We rank the scenarios by defining metrics based on the complexity of avoiding accidents and provide insights into how the AV could have minimized the probability of incurring an accident.
arXiv Detail & Related papers (2021-03-12T17:00:23Z)
- Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)