CRASH: Challenging Reinforcement-Learning Based Adversarial Scenarios For Safety Hardening
- URL: http://arxiv.org/abs/2411.16996v1
- Date: Tue, 26 Nov 2024 00:00:27 GMT
- Title: CRASH: Challenging Reinforcement-Learning Based Adversarial Scenarios For Safety Hardening
- Authors: Amar Kulkarni, Shangtong Zhang, Madhur Behl
- Abstract summary: This paper introduces CRASH - Challenging Reinforcement-learning based Adversarial scenarios for Safety Hardening.
First, CRASH can control adversarial Non-Player Character (NPC) agents in an AV simulator to automatically induce collisions with the Ego vehicle.
We also propose a novel approach that we term safety hardening, which iteratively refines the motion planner by simulating improvement scenarios against adversarial agents.
- Score: 16.305837225117607
- Abstract: Ensuring the safety of autonomous vehicles (AVs) requires identifying rare but critical failure cases that on-road testing alone cannot discover. High-fidelity simulations provide a scalable alternative, but automatically generating realistic and diverse traffic scenarios that can effectively stress test AV motion planners remains a key challenge. This paper introduces CRASH - Challenging Reinforcement-learning based Adversarial scenarios for Safety Hardening - an adversarial deep reinforcement learning framework to address this issue. First, CRASH can control adversarial Non-Player Character (NPC) agents in an AV simulator to automatically induce collisions with the Ego vehicle, falsifying its motion planner. We also propose a novel approach that we term safety hardening, which iteratively refines the motion planner by simulating improvement scenarios against adversarial agents, leveraging the failure cases to strengthen the AV stack. CRASH is evaluated on a simplified two-lane highway scenario, demonstrating its ability to falsify both rule-based and learning-based planners with collision rates exceeding 90%. Additionally, safety hardening reduces the Ego vehicle's collision rate by 26%. While preliminary, these results highlight RL-based safety hardening as a promising approach for scenario-driven simulation testing of autonomous vehicles.
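The listing carries no code; as a rough, non-authoritative sketch of the two-phase loop the abstract describes, the toy Python below pairs a stub `HighwayEnv` with hill-climbing updates standing in for the paper's deep RL: the NPC is updated to raise the collision rate (falsification), then the Ego planner is updated to lower it (safety hardening). Every name and number here is an illustrative assumption.
```python
# Minimal sketch of the CRASH falsify-then-harden loop. HighwayEnv, the toy
# dynamics, and hill-climbing in place of deep RL are all assumptions made
# for illustration; they are not the paper's implementation.
import random

class HighwayEnv:
    """Stub simulator: state is the longitudinal NPC-Ego gap."""
    def reset(self):
        self.gap = 20.0
        return self.gap

    def step(self, npc_action, ego_action):
        self.gap += ego_action - npc_action + random.gauss(0.0, 0.3)
        return self.gap, self.gap <= 0.0        # (new gap, collided?)

def rollout(env, npc_gain, ego_gain, horizon=200):
    gap = env.reset()
    for _ in range(horizon):
        gap, collided = env.step(npc_gain, ego_gain / max(gap, 1.0))
        if collided:
            return True
    return False

def collision_rate(env, npc_gain, ego_gain, episodes=50):
    return sum(rollout(env, npc_gain, ego_gain) for _ in range(episodes)) / episodes

env, npc_gain, ego_gain = HighwayEnv(), 0.1, 1.0
for it in range(3):
    # Phase 1 (falsification): keep NPC updates that raise the collision rate.
    if collision_rate(env, npc_gain + 0.05, ego_gain) > collision_rate(env, npc_gain, ego_gain):
        npc_gain += 0.05
    # Phase 2 (safety hardening): keep Ego updates that lower it again.
    if collision_rate(env, npc_gain, ego_gain + 0.2) < collision_rate(env, npc_gain, ego_gain):
        ego_gain += 0.2
    print(f"round {it}: collision rate {collision_rate(env, npc_gain, ego_gain):.2f}")
```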
Related papers
- Foundation Models for Rapid Autonomy Validation [4.417336418010182]
A key challenge is that an autonomous vehicle requires testing in every kind of driving scenario it could encounter.
We propose the use of a behavior foundation model, specifically a masked autoencoder (MAE), trained to reconstruct driving scenarios.
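As a sketch of the masked-autoencoder idea (the paper's token layout and architecture are not given here, so all shapes and hyperparameters below are invented), a scenario is treated as a set of agent-state tokens, a random half are masked, and a small transformer is trained to reconstruct them:
```python
# Hedged MAE sketch over scenario tokens; shapes and sizes are assumptions.
import torch
import torch.nn as nn

class ScenarioMAE(nn.Module):
    def __init__(self, dim=32, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, dim)       # reconstruct raw agent tokens

    def forward(self, tokens):                # tokens: (batch, agents, dim)
        B, N, D = tokens.shape
        mask = torch.rand(B, N, device=tokens.device) < self.mask_ratio
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand(B, N, D), tokens)
        recon = self.head(self.encoder(x))
        return ((recon - tokens) ** 2)[mask].mean()   # loss on masked tokens only

mae = ScenarioMAE()
loss = mae(torch.randn(8, 16, 32))            # 8 toy scenarios, 16 agents each
loss.backward()
```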
arXiv Detail & Related papers (2024-10-22T15:32:43Z)
- RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
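One plausible reading of the two ingredients, in a hedged numpy sketch: candidate speeds are scored by the CVaR of sampled returns (risk sensitivity), and the allowed speed range widens only while the recent crash rate stays low (the action-space curriculum). The return model, crash model, and thresholds are stand-ins, not RACER's actual components.
```python
# Hedged numpy sketch: CVaR-scored speed selection plus an action-space
# curriculum. All numbers below are invented.
import numpy as np

def cvar(returns, alpha=0.1):
    """Mean of the worst alpha-fraction of sampled returns."""
    k = max(1, int(alpha * len(returns)))
    return np.sort(returns)[:k].mean()

def sample_return(speed):
    # Faster is better unless a speed-dependent crash penalty hits.
    return speed - 50.0 * (np.random.rand() < 0.02 * speed)

def pick_speed(candidates, n_samples=256):
    scores = [cvar(np.array([sample_return(s) for _ in range(n_samples)]))
              for s in candidates]
    return candidates[int(np.argmax(scores))]

max_speed, crashes = 2.0, []
for episode in range(20):
    speed = pick_speed(np.linspace(0.5, max_speed, 5))
    crashes.append(bool(np.random.rand() < 0.02 * speed))
    if len(crashes) >= 5 and np.mean(crashes[-5:]) < 0.2:
        max_speed = min(max_speed + 0.5, 10.0)  # curriculum: widen action space
print(f"final speed limit: {max_speed}")
```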
arXiv Detail & Related papers (2024-05-07T23:32:36Z)
- PAFOT: A Position-Based Approach for Finding Optimal Tests of Autonomous Vehicles [4.243926243206826]
This paper proposes PAFOT, a position-based testing framework.
PAFOT generates adversarial driving scenarios to expose safety violations of Automated Driving Systems.
Experiments show PAFOT can effectively generate safety-critical scenarios to crash ADSs and is able to find collisions in a short simulation time.
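A minimal sketch of what position-based search could look like, assuming a grid of adversary placements around the Ego and a toy surrogate simulator in place of a real ADS-in-the-loop run:
```python
# Schematic position-based scenario search; the grid, the surrogate
# simulator, and the trial counts are all stand-ins.
import itertools, random

def simulate(offset_lon, offset_lat):
    """Toy surrogate: closer initial placements are likelier to collide."""
    return random.random() < 1.0 / (1.0 + abs(offset_lon) + abs(offset_lat))

positions = itertools.product(range(-10, 11, 5), (-1, 0, 1))
critical = [pos for pos in positions
            if any(simulate(*pos) for _ in range(20))]   # 20 trials per slot
print(f"{len(critical)} safety-critical placements found")
```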
arXiv Detail & Related papers (2024-05-06T10:04:40Z)
- A novel framework for adaptive stress testing of autonomous vehicles in highways [3.2112502548606825]
We propose a novel framework to explore corner cases that can result in safety concerns in a highway traffic scenario.
We develop a new reward function for DRL to guide adaptive stress testing (AST) in identifying crash scenarios, based on a collision probability estimate.
The proposed framework is further integrated with a new driving model enabling us to create more realistic traffic scenarios.
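One plausible shape for such a reward, sketched below; the crash bonus, miss-distance penalty, and weights are assumptions rather than the paper's exact formulation:
```python
# Hedged AST-style step reward shaped by a collision-probability estimate.
def ast_reward(p_collision, collided, done, miss_distance,
               w_prob=1.0, w_miss=0.1, crash_bonus=100.0):
    if collided:
        return crash_bonus                # the adversarial goal: a crash
    if done:
        return -w_miss * miss_distance    # episode ended without a crash
    return w_prob * p_collision           # dense guidance between events

print(ast_reward(0.3, collided=False, done=False, miss_distance=0.0))  # 0.3
```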
arXiv Detail & Related papers (2024-02-19T04:02:40Z)
- SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
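SAFE-SIM's model itself is not reproduced here; the torch fragment below only sketches the generic mechanism of steering a trajectory diffusion model at sampling time, nudging each denoising step down the gradient of an adversarial cost (here, distance to the Ego path). Both `denoiser` and the cost are placeholders:
```python
# Schematic guidance on a trajectory diffusion model (torch). `denoiser` is
# a toy stand-in; SAFE-SIM's model and controls are far more elaborate.
import torch

def guided_sample(denoiser, ego_traj, steps=50, guide_scale=0.05):
    x = torch.randn_like(ego_traj)             # start from noise
    for t in reversed(range(steps)):
        x = x.detach().requires_grad_(True)
        cost = ((x - ego_traj) ** 2).sum()     # adversarial: approach the Ego
        grad = torch.autograd.grad(cost, x)[0]
        with torch.no_grad():
            x = denoiser(x, t) - guide_scale * grad   # denoise, then guide
    return x.detach()

denoiser = lambda x, t: 0.9 * x                # toy "denoiser": shrink noise
adversary = guided_sample(denoiser, torch.zeros(20, 2))
print(adversary.norm())                        # pulled toward the Ego path
```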
arXiv Detail & Related papers (2023-12-31T04:14:43Z)
- CAT: Closed-loop Adversarial Training for Safe End-to-End Driving [54.60865656161679]
Closed-loop Adversarial Training (CAT) is a framework for safe end-to-end driving in autonomous vehicles.
CAT aims to continuously improve the safety of driving agents by training the agent on safety-critical scenarios.
CAT can effectively generate adversarial scenarios countering the agent being trained.
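A hedged sketch of the closed-loop pattern, with a stub agent and a scenario pool in which failures are re-sampled more often; the scalar scenario encoding and the training hooks are placeholders for CAT's actual components:
```python
# Hedged closed-loop adversarial training sketch; StubAgent and the scalar
# "scenario difficulty" encoding are illustrative assumptions.
import random

class StubAgent:
    def __init__(self):
        self.skill = 0.0
    def evaluate(self, scenario):          # True if the agent crashed
        return random.random() < scenario - self.skill
    def train_step(self, scenario):        # learn a little from each rollout
        self.skill = min(1.0, self.skill + 0.01)

def closed_loop_training(agent, scenarios, rounds=200):
    pool = list(scenarios)
    for _ in range(rounds):
        scen = random.choice(pool)
        failed = agent.evaluate(scen)      # closed loop: test, then train
        agent.train_step(scen)
        if failed:
            pool.append(scen)              # failures get re-sampled more often
    return agent

agent = closed_loop_training(StubAgent(), [0.2, 0.5, 0.8])
print(f"final skill: {agent.skill:.2f}")
```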
arXiv Detail & Related papers (2023-10-19T02:49:31Z)
- Differentiable Control Barrier Functions for Vision-based End-to-End Autonomous Driving [100.57791628642624]
We introduce a safety guaranteed learning framework for vision-based end-to-end autonomous driving.
We design a learning system equipped with differentiable control barrier functions (dCBFs) that is trained end-to-end by gradient descent.
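As a small illustration of training through a barrier condition by gradient descent (single-integrator dynamics and all constants below are assumptions, not the paper's vision-based setup): for a barrier h(x) >= 0, controls are penalised whenever the discrete-time condition h(x') >= (1 - alpha) h(x) fails.
```python
# Toy differentiable-CBF penalty; h(x) >= 0 keeps the state outside an
# obstacle disk. Dynamics and constants are illustrative assumptions.
import torch

OBSTACLE, RADIUS = torch.tensor([5.0, 0.0]), 1.0

def h(x):
    return ((x - OBSTACLE) ** 2).sum(-1) - RADIUS ** 2

def cbf_penalty(x, u, dt=0.1, alpha=0.3):
    x_next = x + dt * u                         # x' = x + dt * u
    violation = (1 - alpha) * h(x) - h(x_next)  # want h(x') >= (1 - alpha) h(x)
    return torch.relu(violation)                # zero when the condition holds

u = torch.zeros(2, requires_grad=True)          # stand-in for a policy output
opt = torch.optim.SGD([u], lr=0.1)
x = torch.tensor([3.8, 0.0])                    # state approaching the obstacle
for _ in range(100):
    opt.zero_grad()
    task = ((u - torch.tensor([1.0, 0.0])) ** 2).sum()  # nominal: drive right
    loss = task + 10.0 * cbf_penalty(x, u)      # barrier term is differentiable
    loss.backward()
    opt.step()
print(u.detach())                               # steered to respect the barrier
```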
arXiv Detail & Related papers (2022-03-04T16:14:33Z)
- Don't Get Yourself into Trouble! Risk-aware Decision-Making for Autonomous Vehicles [4.94950858749529]
We show that risk can be characterized by two components: 1) the probability of an undesirable outcome and 2) an estimate of how undesirable the outcome is (loss).
We develop a risk-based decision-making framework for autonomous vehicles that integrates high-level risk-based path planning with reinforcement-learning-based low-level control.
This work can improve safety by allowing an autonomous vehicle to avoid and react to risky situations.
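The two-component characterization reduces to a one-line formula, risk = P(undesirable outcome) x loss; the example values below are made up:
```python
# risk = P(undesirable outcome) * loss(outcome); example values are made up.
def risk(p_outcome: float, loss: float) -> float:
    return p_outcome * loss

# A 2% collision chance with a severe loss outranks a 20% chance of a mild
# hard-brake event:
print(risk(0.02, 100.0))   # 2.0
print(risk(0.20, 5.0))     # 1.0
```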
arXiv Detail & Related papers (2021-06-08T18:24:02Z)
- An NCAP-like Safety Indicator for Self-Driving Cars [2.741266294612776]
This paper proposes a mechanism to assess the safety of autonomous cars in scenarios where the car must avoid collision with an adversary.
The safety measure, called Safe-Kamikaze Distance, computes the average similarity between sets of the adversary's safe trajectories and kamikaze trajectories that lie close to those safe trajectories.
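A hedged reading of the metric in numpy: for each safe adversary trajectory, take the most similar kamikaze (deliberately colliding) trajectory and average that similarity over the safe set. The exponential similarity kernel and the toy trajectories are assumptions:
```python
# Hedged numpy reading of Safe-Kamikaze Distance; the kernel and the
# random-walk trajectories are assumptions.
import numpy as np

def traj_distance(a, b):
    return np.linalg.norm(a - b, axis=-1).mean()   # mean pointwise gap

def safe_kamikaze_distance(safe_set, kamikaze_set):
    sims = [max(np.exp(-traj_distance(s, k)) for k in kamikaze_set)
            for s in safe_set]
    return float(np.mean(sims))   # high => safe and crash behaviours are close

rng = np.random.default_rng(0)
safe = [np.cumsum(rng.normal(0, 0.1, (50, 2)), axis=0) for _ in range(5)]
kamikaze = [np.cumsum(rng.normal(0, 0.1, (50, 2)), axis=0) for _ in range(5)]
print(safe_kamikaze_distance(safe, kamikaze))
```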
arXiv Detail & Related papers (2021-04-02T02:39:53Z)
- AdvSim: Generating Safety-Critical Scenarios for Self-Driving Vehicles [76.46575807165729]
We propose AdvSim, an adversarial framework to generate safety-critical scenarios for any LiDAR-based autonomy system.
By simulating directly from sensor data, we obtain adversarial scenarios that are safety-critical for the full autonomy stack.
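A schematic of the black-box flavour of such a search: perturb a recorded actor trajectory within a plausibility bound and keep whichever perturbation makes the scenario most critical for the (stubbed) stack. AdvSim's actual LiDAR resimulation is far richer than this stand-in:
```python
# Schematic black-box perturbation search; the Ego path, the stub safety
# score, and the bounds are all illustrative assumptions.
import numpy as np

EGO_PATH = np.linspace([0.0, 0.0], [30.0, 0.0], 30)     # straight Ego path

def run_stack(actor_traj):
    """Stub safety score: min distance to the Ego path (lower = worse)."""
    return float(np.min(np.linalg.norm(actor_traj - EGO_PATH, axis=1)))

def adversarial_search(traj, bound=2.0, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    best_delta, best_score = np.zeros_like(traj), run_stack(traj)
    for _ in range(iters):
        delta = np.clip(best_delta + 0.2 * rng.standard_normal(traj.shape),
                        -bound, bound)       # bounded => kinematically plausible
        score = run_stack(traj + delta)
        if score < best_score:               # more safety-critical: keep it
            best_delta, best_score = delta, score
    return traj + best_delta

actor = EGO_PATH + np.array([0.0, 5.0])      # actor driving a parallel lane
print(run_stack(adversarial_search(actor)))  # pushed toward the Ego path
```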
arXiv Detail & Related papers (2021-01-16T23:23:12Z)
- Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
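A toy rendering of that intuition: models fit in diverse source environments disagree on unfamiliar states, and their disagreement can serve as a risk estimate that triggers cautious behaviour. The ensemble, threshold, and fallback below are invented:
```python
# Toy ensemble-disagreement risk estimate; models, threshold, and fallback
# policy are illustrative assumptions.
import numpy as np

def risk_estimate(models, state):
    preds = np.array([m(state) for m in models])
    return preds.std()               # disagreement as epistemic risk

# "Models" trained in three different source environments:
models = [lambda s, w=w: w * s for w in (0.8, 1.0, 1.2)]
for state in (0.1, 5.0):
    r = risk_estimate(models, state)
    plan = "proceed" if r < 0.5 else "fall back to a cautious policy"
    print(f"state={state}: risk={r:.2f} -> {plan}")
```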
arXiv Detail & Related papers (2020-08-15T01:40:59Z)