FREA: Feasibility-Guided Generation of Safety-Critical Scenarios with Reasonable Adversariality
- URL: http://arxiv.org/abs/2406.02983v1
- Date: Wed, 5 Jun 2024 06:26:15 GMT
- Title: FREA: Feasibility-Guided Generation of Safety-Critical Scenarios with Reasonable Adversariality
- Authors: Keyu Chen, Yuheng Lei, Hao Cheng, Haoran Wu, Wenchao Sun, Sifa Zheng
- Abstract summary: We introduce FREA, a novel method for generating safety-critical scenarios.
It incorporates the Largest Feasible Region (LFR) of the AV as guidance to keep the adversarial scenarios reasonable.
Experiments show that FREA effectively generates safety-critical scenarios, yielding a considerable number of near-miss events.
- Score: 13.240598841087841
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating safety-critical scenarios, which are essential yet difficult to collect at scale, offers an effective method to evaluate the robustness of autonomous vehicles (AVs). Existing methods focus on optimizing adversariality while preserving the naturalness of scenarios, aiming to achieve a balance through data-driven approaches. However, without an appropriate upper bound on adversariality, the scenarios might exhibit excessive adversariality, potentially leading to unavoidable collisions. In this paper, we introduce FREA, a novel safety-critical scenario generation method that incorporates the Largest Feasible Region (LFR) of the AV as guidance to ensure the reasonableness of the adversarial scenarios. Concretely, FREA first pre-computes the LFR of the AV from offline datasets. Subsequently, it learns a reasonable adversarial policy that controls critical background vehicles (CBVs) in the scene to generate adversarial yet AV-feasible scenarios by maximizing a novel feasibility-dependent objective function. Extensive experiments illustrate that FREA can effectively generate safety-critical scenarios, yielding a considerable number of near-miss events while ensuring the AV's feasibility. Generalization analysis also confirms the robustness of FREA in AV testing across various surrogate AV methods and traffic environments.
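The feasibility-dependent objective can be pictured with a minimal sketch. Everything below (the function name, the sign convention that a non-negative feasibility value means the AV is still inside its LFR, and the penalty constant) is an illustrative assumption, not the paper's actual implementation:

```python
def feasibility_dependent_reward(adv_reward: float, feasibility_value: float,
                                 penalty: float = 10.0) -> float:
    """Reward seen by the CBV policy in this sketch.

    feasibility_value >= 0 is read as "the AV is still inside its
    Largest Feasible Region (LFR)", i.e. an evasive action still
    exists. Adversarial pressure is rewarded only in that regime;
    pushing the AV outside its LFR is penalized instead.
    """
    if feasibility_value >= 0.0:
        return adv_reward   # adversarial yet AV-feasible: encouraged
    return -penalty         # unavoidable-collision regime: discouraged


# Near-miss pressure pays off; forcing infeasibility does not.
print(feasibility_dependent_reward(adv_reward=1.5, feasibility_value=0.2))   # 1.5
print(feasibility_dependent_reward(adv_reward=1.5, feasibility_value=-0.1))  # -10.0
```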
Related papers
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
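The detection step reduces to an embedding-similarity test between the input and its text-to-image reconstruction. A minimal sketch, assuming a cosine-similarity rule; the threshold, the stand-in embeddings, and the `mirrorcheck_flag` name are hypothetical:

```python
import numpy as np

def mirrorcheck_flag(image_emb: np.ndarray, regen_emb: np.ndarray,
                     threshold: float = 0.75) -> bool:
    """Flag the input as adversarial when the original image and its
    T2I reconstruction disagree in embedding space."""
    cos = float(image_emb @ regen_emb
                / (np.linalg.norm(image_emb) * np.linalg.norm(regen_emb)))
    return cos < threshold

# Stand-in embeddings; in practice both come from an image encoder
# applied to (a) the input image and (b) the T2I output generated
# from the target VLM's caption of that input.
rng = np.random.default_rng(0)
clean = rng.normal(size=512)
faithful = clean + 0.1 * rng.normal(size=512)         # reconstruction matches
print(mirrorcheck_flag(clean, faithful))              # False -> looks benign
print(mirrorcheck_flag(clean, rng.normal(size=512)))  # True -> flagged
```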
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
- Probabilistic Perspectives on Error Minimization in Adversarial Reinforcement Learning
Deep Reinforcement Learning (DRL) policies are critically vulnerable to adversarial noise in observations, posing severe risks in safety-critical scenarios.
Existing strategies to fortify RL algorithms against such adversarial perturbations generally fall into two categories.
We introduce a novel objective called Adversarial Counterfactual Error (ACoE), which naturally balances optimizing value and robustness against adversarial attacks.
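The summary gives only the objective's intent, so the following is a loose sketch of one plausible shape, not the paper's ACoE definition; the interpolation weight `beta` is an assumption:

```python
def acoe_style_objective(value_clean: float, value_under_attack: float,
                         beta: float = 0.5) -> float:
    """Interpolate between the value under clean observations and the
    counterfactual value under adversarially perturbed ones, so that
    neither nominal performance nor robustness dominates."""
    return (1.0 - beta) * value_clean + beta * value_under_attack


# beta = 0 recovers standard value optimization; beta = 1 is purely
# worst-case training against the observation attacker.
print(acoe_style_objective(value_clean=10.0, value_under_attack=4.0))  # 7.0
```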
arXiv Detail & Related papers (2024-06-07T08:14:24Z)
- A novel framework for adaptive stress testing of autonomous vehicles in highways [3.2112502548606825]
We propose a novel framework to explore corner cases that can result in safety concerns in a highway traffic scenario.
We develop a new reward function for DRL to guide the AST in identifying crash scenarios based on the collision probability estimate.
The proposed framework is further integrated with a new driving model enabling us to create more realistic traffic scenarios.
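A hedged sketch of such a shaped reward for the disturbance agent, assuming a terminal crash bonus plus a log-probability shaping term on the collision estimate; the constants and the `ast_step_reward` name are illustrative, not the paper's exact form:

```python
import math

def ast_step_reward(collision: bool, p_collision: float,
                    episode_over: bool) -> float:
    """Per-step reward for the disturbance agent in this sketch: a
    terminal bonus for producing a crash, a terminal penalty for
    failing to, and a shaping term that pays the agent for steering
    the scene toward higher estimated collision risk."""
    shaping = math.log(max(p_collision, 1e-6))
    if collision:
        return 100.0
    if episode_over:
        return -100.0 + shaping   # no crash found this episode
    return shaping


print(ast_step_reward(collision=False, p_collision=0.3, episode_over=False))
print(ast_step_reward(collision=True, p_collision=0.9, episode_over=True))
```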
arXiv Detail & Related papers (2024-02-19T04:02:40Z)
- SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a novel diffusion-based controllable closed-loop safety-critical simulation framework.
We develop a novel approach to simulate safety-critical scenarios through an adversarial term in the denoising process.
We validate our framework empirically using the NuScenes dataset, demonstrating improvements in both realism and controllability.
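An adversarial term in the denoising process can be read as guidance-style steering. The sketch below assumes a generic guided update that nudges the denoised trajectories along the gradient of an adversarial cost; the toy denoiser, cost, and `scale` are placeholders, not the paper's sampler:

```python
import torch

def adversarially_guided_denoise(x_t, denoiser, adv_cost, t, scale=0.5):
    """One guided step in this sketch: predict the clean trajectories,
    then nudge the prediction along the gradient that increases an
    adversarial cost (e.g., adversary-to-ego proximity)."""
    x_t = x_t.detach().requires_grad_(True)
    x0_pred = denoiser(x_t, t)                 # predicted clean sample
    grad = torch.autograd.grad(adv_cost(x0_pred), x_t)[0]
    return (x0_pred + scale * grad).detach()

# Stand-in components: a toy "denoiser" and a cost that pulls the
# adversary's waypoints toward the origin (the ego's position here).
denoiser = lambda x, t: 0.9 * x
adv_cost = lambda traj: -traj.pow(2).sum()     # higher when closer to ego
x = torch.randn(1, 8, 2)                       # (agents, waypoints, xy)
print(adversarially_guided_denoise(x, denoiser, adv_cost, t=10).shape)
```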
arXiv Detail & Related papers (2023-12-31T04:14:43Z)
- Continual Driving Policy Optimization with Closed-Loop Individualized Curricula [3.171483862183451]
We develop a continual driving policy optimization framework featuring Closed-Loop Individualized Curricula (CLIC).
CLIC frames AV evaluation as a collision prediction task, estimating the chance of AV failure in each scenario at every iteration.
We show that CLIC surpasses other curriculum-based training strategies, with substantial improvement in handling risky scenarios.
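One simple way to turn such failure estimates into a curriculum is softmax reweighting, sketched below under the assumption that a learned collision predictor supplies per-scenario failure probabilities; the temperature and function name are illustrative:

```python
import numpy as np

def curriculum_weights(fail_probs: np.ndarray, temperature: float = 0.25):
    """Softmax reweighting over scenarios: the higher the predicted
    failure probability under the current AV policy, the more often
    the scenario is replayed in the next training round."""
    logits = fail_probs / temperature
    w = np.exp(logits - logits.max())
    return w / w.sum()

# Failure estimates from a (hypothetical) learned collision predictor.
print(curriculum_weights(np.array([0.05, 0.4, 0.9])))
```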
arXiv Detail & Related papers (2023-09-25T15:14:54Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
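Given a simulator oracle, that minimum deviation can be located by bisection. A minimal sketch, assuming collision is monotone in the deviation magnitude; `collides` is a hypothetical simulator query:

```python
def counterfactual_safety_margin(collides, eps_max=5.0, tol=1e-3):
    """Smallest deviation magnitude from nominal behavior that leads
    to a collision, found by bisection over the deviation scale."""
    if not collides(eps_max):
        return float("inf")        # no collision within the search range
    lo, hi = 0.0, eps_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if collides(mid) else (mid, hi)
    return hi

# Toy usage: collision occurs once the deviation exceeds 1.2.
print(round(counterfactual_safety_margin(lambda e: e > 1.2), 3))  # ~1.2
```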
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Adversarial defense for automatic speaker verification by cascaded self-supervised learning models [101.42920161993455]
Increasingly, malicious attackers attempt to launch adversarial attacks against automatic speaker verification (ASV) systems.
We propose a standard and attack-agnostic method based on cascaded self-supervised learning models to purify the adversarial perturbations.
Experimental results demonstrate that the proposed method achieves effective defense performance and can successfully counter adversarial attacks.
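The cascade itself is just function composition over the waveform. A minimal sketch, with smoothing lambdas standing in for the actual self-supervised reconstruction models:

```python
import numpy as np

def cascaded_purify(waveform: np.ndarray, models) -> np.ndarray:
    """Feed the input through a chain of purification models so each
    stage removes more of the adversarial perturbation before the
    waveform reaches the speaker-verification system."""
    x = waveform
    for model in models:
        x = model(x)           # each stage re-synthesizes the signal
    return x

# Toy usage with stand-in stages (e.g., SSL reconstruction front-ends).
smooth = lambda x: np.convolve(x, np.ones(3) / 3, mode="same")
purified = cascaded_purify(np.random.randn(16000), [smooth, smooth])
print(purified.shape)  # (16000,)
```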
arXiv Detail & Related papers (2021-02-14T01:56:43Z)
- Conservative Safety Critics for Exploration [120.73241848565449]
We study the problem of safe exploration in reinforcement learning (RL).
We learn a conservative safety estimate of environment states through a critic.
We show that the proposed approach can achieve competitive task performance while incurring significantly lower catastrophic failure rates.
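The critic can gate exploration by filtering candidate actions. A minimal sketch, assuming the critic returns an estimated failure risk per state-action pair; the threshold and fallback rule are illustrative, not the paper's algorithm:

```python
import numpy as np

def safe_action(state, candidate_actions, q_safe, threshold=0.1):
    """Keep actions whose conservatively estimated failure risk is
    below a threshold and pick one; fall back to the lowest-risk
    action if none qualifies."""
    risks = np.array([q_safe(state, a) for a in candidate_actions])
    ok = np.flatnonzero(risks <= threshold)
    idx = ok[0] if ok.size else int(risks.argmin())
    return candidate_actions[idx]

# Toy usage with a stand-in critic that scores action magnitude as risk.
actions = [-1.0, 0.0, 1.0]
print(safe_action(0.0, actions, lambda s, a: abs(a), threshold=0.5))  # 0.0
```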
arXiv Detail & Related papers (2020-10-27T17:54:25Z)
- Multimodal Safety-Critical Scenarios Generation for Decision-Making Algorithms Evaluation [23.43175124406634]
Existing neural network-based autonomous systems are shown to be vulnerable to adversarial attacks.
We propose a flow-based multimodal safety-critical scenario generator for evaluating decision-making algorithms.
We evaluate six Reinforcement Learning algorithms with our generated traffic scenarios and provide empirical conclusions about their robustness.
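Sampling from such a generator means pushing base-distribution latents through an invertible map. A minimal sketch with a hand-picked affine transform standing in for a learned normalizing flow; the scenario parameters named in the comments are assumptions:

```python
import numpy as np

def sample_scenarios(n, base_dim=4, seed=0):
    """Draw latents from a simple base distribution and push them
    through an invertible map to get scenario parameters (e.g.,
    cut-in distance, relative speed). A real normalizing flow would
    learn this map from data."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(n, base_dim))           # base samples
    scale = np.array([5.0, 2.0, 1.0, 0.5])       # hand-picked affine "flow"
    shift = np.array([20.0, 10.0, 0.0, 1.0])
    return z * scale + shift                     # invertible transform

print(sample_scenarios(3))
```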
arXiv Detail & Related papers (2020-09-16T15:16:43Z)
- Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification [78.51092318750102]
This work proposes to defend ASV systems against adversarial attacks with a separate detection network.
A VGG-like binary classification detector is introduced and demonstrated to be effective on detecting adversarial samples.
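A VGG-like binary detector is small enough to sketch directly. The layer sizes below are illustrative, assuming log-mel spectrogram input; only the overall shape (stacked small convolutions feeding a two-way classifier) follows the summary:

```python
import torch
import torch.nn as nn

# Minimal VGG-style binary detector sketch: stacks of small conv
# layers over the spectrogram, then a two-way classifier
# (benign vs. adversarial).
detector = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.LazyLinear(2),                 # logits: [benign, adversarial]
)

spec = torch.randn(1, 1, 64, 100)     # (batch, channel, mel bins, frames)
print(detector(spec).shape)           # torch.Size([1, 2])
```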
arXiv Detail & Related papers (2020-06-11T04:31:56Z)