Adversarial Driving Behavior Generation Incorporating Human Risk
Cognition for Autonomous Vehicle Evaluation
- URL: http://arxiv.org/abs/2310.00029v2
- Date: Sat, 14 Oct 2023 14:56:33 GMT
- Title: Adversarial Driving Behavior Generation Incorporating Human Risk
Cognition for Autonomous Vehicle Evaluation
- Authors: Zhen Liu, Hang Gao, Hao Ma, Shuo Cai, Yunfeng Hu, Ting Qu, Hong Chen,
Xun Gong
- Abstract summary: This paper focuses on the development of a novel framework for generating adversarial driving behavior for background vehicles.
The adversarial behavior is learned by a reinforcement learning (RL) approach incorporating cumulative prospect theory (CPT).
A comparative case study regarding the cut-in scenario is conducted on a high fidelity Hardware-in-the-Loop (HiL) platform.
- Score: 23.476885023669524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous vehicle (AV) evaluation has been the subject of increased interest
in recent years both in industry and in academia. This paper focuses on the
development of a novel framework for generating adversarial driving behavior of a
background vehicle that interferes with the AV to expose effective and plausible
risky events. Specifically, the adversarial behavior is learned by a
reinforcement learning (RL) approach incorporated with the cumulative prospect
theory (CPT), which allows representation of human risk cognition. An extended
version of the deep deterministic policy gradient (DDPG) technique is then
proposed to train the adversarial policy while ensuring training stability, as
the CPT action-value function is leveraged. A comparative case study on the
cut-in scenario is conducted on a high-fidelity Hardware-in-the-Loop (HiL)
platform, and the results demonstrate the effectiveness of the adversarial
behavior in exposing weaknesses of the tested AV.
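To illustrate how CPT reshapes rewards and probabilities before they enter an action-value estimate, the following sketch implements the standard Tversky-Kahneman value and probability-weighting functions. The parameter values (alpha, beta, gamma, lambda) are the commonly cited empirical estimates, not necessarily the ones used in this paper, and the aggregation shown is a simplified stand-in for the full rank-dependent CPT functional.

```python
import math

def cpt_weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting: overweights small
    probabilities and underweights large ones."""
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    num = p ** gamma
    return num / ((num + (1.0 - p) ** gamma) ** (1.0 / gamma))

def cpt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """S-shaped value function: concave for gains, convex and
    steeper for losses (loss aversion, controlled by lam)."""
    if x >= 0.0:
        return x ** alpha
    return -lam * ((-x) ** beta)

def cpt_action_value(outcomes):
    """Simplified CPT-style evaluation of (reward, probability)
    outcomes; full CPT applies the weights to cumulative,
    rank-ordered probabilities rather than individual ones."""
    return sum(cpt_weight(p) * cpt_value(r) for r, p in outcomes)
```

Because losses loom larger than gains under CPT, a symmetric 50/50 gamble evaluates as negative, which is the kind of risk-averse distortion the adversarial policy can exploit.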
Related papers
- Confidence-Guided Human-AI Collaboration: Reinforcement Learning with Distributional Proxy Value Propagation for Autonomous Driving [1.4063588986150455]
This paper develops a confidence-guided human-AI collaboration (C-HAC) strategy to overcome these limitations. C-HAC achieves rapid and stable learning of human-guided policies with minimal human interaction. Experiments across diverse driving scenarios reveal that C-HAC significantly outperforms conventional methods in terms of safety, efficiency, and overall performance.
arXiv Detail & Related papers (2025-06-04T04:31:10Z) - On the Effectiveness of Adversarial Training on Malware Classifiers [14.069462668836328]
Adversarial Training (AT) has been widely applied to harden learning-based classifiers against adversarial evasive attacks.
Previous work seems to suggest robustness is a task-dependent property of AT.
We argue it is a more complex problem that requires exploring AT and the intertwined roles played by certain factors within data.
arXiv Detail & Related papers (2024-12-24T06:55:53Z) - VCAT: Vulnerability-aware and Curiosity-driven Adversarial Training for Enhancing Autonomous Vehicle Robustness [18.27802330689405]
Vulnerability-aware and Curiosity-driven Adversarial Training (VCAT) is a framework to train autonomous vehicles (AVs) against malicious attacks.
VCAT uses a surrogate network to fit the value function of the AV victim, providing dense information about the victim's inherent vulnerabilities.
In the victim defense training phase, the AV is trained in critical scenarios in which the pretrained attacker is positioned around the victim to generate attack behaviors.
arXiv Detail & Related papers (2024-09-19T14:53:02Z) - The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain high adversarial robustness to protect against potential adversarial attacks.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
arXiv Detail & Related papers (2024-05-14T18:05:19Z) - Autonomous and Human-Driven Vehicles Interacting in a Roundabout: A
Quantitative and Qualitative Evaluation [34.67306374722473]
We learn a policy to minimize traffic jams and to minimize pollution in a roundabout in Milan, Italy.
We qualitatively evaluate the learned policy using a cutting-edge cockpit to assess its performance in near-real-world conditions.
Our findings show that human-driven vehicles benefit from optimizing the AVs' dynamics.
arXiv Detail & Related papers (2023-09-15T09:02:16Z) - A Counterfactual Safety Margin Perspective on the Scoring of Autonomous
Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
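The idea of a counterfactual safety margin can be sketched as a search for the smallest deviation from nominal behavior that triggers a simulated collision. The collision model below is a toy longitudinal-gap surrogate with illustrative numbers, not the paper's data-driven framework.

```python
def collides(deviation, gap=30.0, rel_speed=5.0, horizon=3.0):
    """Toy surrogate: a longitudinal deviation (m) from nominal
    behavior eats into the following gap; a collision occurs if
    the gap is consumed within the horizon. Numbers are illustrative."""
    return gap - rel_speed * horizon - deviation <= 0.0

def counterfactual_safety_margin(collides_fn, max_dev=100.0, step=0.1):
    """Smallest deviation from nominal behavior that yields a
    collision; a larger margin indicates safer behavior."""
    dev = 0.0
    while dev <= max_dev:
        if collides_fn(dev):
            return dev
        dev += step
    return float("inf")
```

With the defaults above, the vehicle already loses 15 m of its 30 m gap over the horizon, so the margin comes out near 15 m: any extra deviation beyond that closes the gap.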
arXiv Detail & Related papers (2023-08-02T09:48:08Z) - Benchmarking Safe Deep Reinforcement Learning in Aquatic Navigation [78.17108227614928]
We propose a benchmark environment for Safe Reinforcement Learning focusing on aquatic navigation.
We consider value-based and policy-gradient Deep Reinforcement Learning (DRL) approaches.
We also propose a verification strategy that checks the behavior of the trained models over a set of desired properties.
arXiv Detail & Related papers (2021-12-16T16:53:56Z) - Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
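The smoothing step behind such certificates can be sketched as a Monte-Carlo majority vote of a discrete policy over Gaussian observation noise. This is only the randomized-smoothing mechanism, not the certified reward bound itself; the policy and parameters below are illustrative.

```python
import random
import statistics

def smoothed_action(policy, obs, sigma=0.1, n_samples=100, rng=None):
    """Monte-Carlo estimate of a smoothed policy: perturb the
    observation with Gaussian noise, query the base policy, and
    return the majority action over the samples."""
    rng = rng or random.Random(0)
    votes = [policy([x + rng.gauss(0.0, sigma) for x in obs])
             for _ in range(n_samples)]
    return statistics.mode(votes)
```

The appeal of smoothing is that the majority vote changes slowly as the observation shifts, which is what makes a norm-bounded robustness guarantee possible.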
arXiv Detail & Related papers (2021-06-21T21:42:08Z) - Robust Reinforcement Learning on State Observations with Learned Optimal
Adversary [86.0846119254031]
We study the robustness of reinforcement learning with adversarially perturbed state observations.
With a fixed agent policy, we demonstrate that an optimal adversary to perturb state observations can be found.
For DRL settings, this leads to a novel empirical adversarial attack on RL agents via a learned adversary that is much stronger than previous ones.
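A minimal, gradient-free stand-in for such a state-observation adversary can be sketched as a finite-difference sign attack that perturbs each observation dimension to push the agent's value estimate down. This is illustrative only; the paper learns its adversary with RL rather than using finite differences.

```python
def adversarial_obs(value_fn, obs, epsilon=0.05, delta=1e-4):
    """Perturb each observation dimension by epsilon in the
    direction that decreases value_fn, with the gradient sign
    estimated by a one-sided finite difference."""
    perturbed = list(obs)
    for i in range(len(obs)):
        bumped = list(obs)
        bumped[i] += delta
        grad = (value_fn(bumped) - value_fn(obs)) / delta
        sign = 1 if grad > 0 else (-1 if grad < 0 else 0)
        perturbed[i] -= epsilon * sign
    return perturbed
```

Against a smooth value estimate, the perturbed observation is guaranteed (for small epsilon) to look worse to the agent than the true one, which is exactly the lever a learned adversary optimizes far more aggressively.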
arXiv Detail & Related papers (2021-01-21T05:38:52Z) - Robust Deep Reinforcement Learning through Adversarial Loss [74.20501663956604]
Recent studies have shown that deep reinforcement learning agents are vulnerable to small adversarial perturbations on the agent's inputs.
We propose RADIAL-RL, a principled framework to train reinforcement learning agents with improved robustness against adversarial attacks.
arXiv Detail & Related papers (2020-08-05T07:49:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.