Optimal Actuator Attacks on Autonomous Vehicles Using Reinforcement Learning
- URL: http://arxiv.org/abs/2502.07839v1
- Date: Tue, 11 Feb 2025 03:01:05 GMT
- Title: Optimal Actuator Attacks on Autonomous Vehicles Using Reinforcement Learning
- Authors: Pengyu Wang, Jialu Li, Ling Shi
- Abstract summary: We propose a reinforcement learning (RL)-based approach for designing optimal stealthy integrity attacks on AV actuators. We also analyze the limitations of state-of-the-art RL-based secure controllers to counter such attacks.
- Score: 11.836584342902492
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing prevalence of autonomous vehicles (AVs), their vulnerability to various types of attacks has grown, presenting significant security challenges. In this paper, we propose a reinforcement learning (RL)-based approach for designing optimal stealthy integrity attacks on AV actuators. We also analyze the limitations of state-of-the-art RL-based secure controllers developed to counter such attacks. Through extensive simulation experiments, we demonstrate the effectiveness and efficiency of our proposed method.
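The abstract does not detail the attack formulation, but the general setup can be sketched: an RL attacker chooses a bounded bias to add to the actuator command and is rewarded for driving the vehicle off its reference while staying below a detection threshold. The sketch below is a hedged illustration under those assumptions; the `plant` interface, `reference()` method, and all parameter values are hypothetical, not the authors' code.

```python
import numpy as np

class ActuatorAttackEnv:
    """Hypothetical RL environment: the attacker's action is an additive
    actuator bias; reward trades off impact against stealthiness."""

    def __init__(self, plant, detect_threshold=0.5, stealth_weight=1.0):
        self.plant = plant                    # assumed: step(u) -> state
        self.detect_threshold = detect_threshold
        self.stealth_weight = stealth_weight

    def step(self, attack_bias, nominal_command):
        # Integrity attack: corrupt the command before it reaches the plant.
        state = self.plant.step(nominal_command + attack_bias)

        deviation = np.linalg.norm(state - self.plant.reference())
        residual = np.linalg.norm(attack_bias)   # proxy for detectability

        # Reward trajectory deviation; penalize biases large enough to
        # trip an anomaly detector (stealthiness constraint).
        penalty = max(0.0, residual - self.detect_threshold)
        reward = deviation - self.stealth_weight * penalty
        return state, reward
```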
Related papers
- AED: Automatic Discovery of Effective and Diverse Vulnerabilities for Autonomous Driving Policy with Large Language Models [7.923448458349885]
We propose a framework that uses large language models (LLMs) to automatically discover effective and diverse vulnerabilities in autonomous driving policies.
Experiments show that AED achieves a broader range of vulnerabilities and higher attack success rates compared with expert-designed rewards.
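The framework's internals are not given in this summary; as a rough sketch of the propose-evaluate-filter loop it describes, the snippet below keeps LLM-proposed scenarios that both defeat the policy and differ from earlier finds. `llm_propose`, `simulate`, and `distance` are hypothetical interfaces, not AED's API.

```python
def discover_vulnerabilities(llm_propose, simulate, distance, policy,
                             rounds=100, novelty_eps=0.2):
    """Keep scenarios that are effective (policy fails) and diverse."""
    found = []
    for _ in range(rounds):
        scenario = llm_propose(history=found)  # LLM suggests a new scenario
        if simulate(policy, scenario):         # True if the policy fails
            if all(distance(scenario, s) > novelty_eps for s in found):
                found.append(scenario)
    return found
```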
arXiv Detail & Related papers (2025-03-24T14:59:17Z) - Black-Box Adversarial Attack on Vision Language Models for Autonomous Driving [65.61999354218628]
We take the first step toward designing black-box adversarial attacks specifically targeting vision-language models (VLMs) in autonomous driving systems. We propose Cascading Adversarial Disruption (CAD), which targets low-level reasoning breakdown by generating and injecting deceptive semantics. We present Risky Scene Induction, which addresses dynamic adaptation by leveraging a surrogate VLM to understand and construct high-level risky scenarios.
arXiv Detail & Related papers (2025-01-23T11:10:02Z) - Less is More: A Stealthy and Efficient Adversarial Attack Method for DRL-based Autonomous Driving Policies [2.9965913883475137]
We present a stealthy and efficient adversarial attack method for DRL-based autonomous driving policies. We train the adversary to learn the optimal policy for attacking at critical moments without domain knowledge. Our method achieves more than a 90% collision rate within three attacks in most cases.
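A hedged sketch of the "attack only at critical moments" idea, assuming a Gym-style environment and a learned gate `should_attack`; all names and the budget of three attacks are illustrative.

```python
def run_episode(env, driving_policy, attack_policy, perturb, budget=3):
    """Spend a small attack budget only when the adversary deems the
    current state critical ("less is more")."""
    obs, attacks_left, done = env.reset(), budget, False
    while not done:
        if attacks_left > 0 and attack_policy.should_attack(obs):
            obs = perturb(obs)       # inject a perturbation at this step
            attacks_left -= 1
        obs, _, done, info = env.step(driving_policy(obs))
    return info.get("collision", False)
```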
arXiv Detail & Related papers (2024-12-04T06:11:09Z) - Intercepting Unauthorized Aerial Robots in Controlled Airspace Using Reinforcement Learning [2.519319150166215]
The proliferation of unmanned aerial vehicles (UAVs) in controlled airspace presents significant risks.
This work addresses the need for robust, adaptive systems capable of managing such threats through the use of Reinforcement Learning (RL).
We present a novel approach utilizing RL to train fixed-wing UAV pursuer agents for intercepting dynamic evader targets.
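The paper's reward design is not reproduced here; a minimal distance-closing shaping reward of the kind commonly used for pursuit-evasion is sketched below, with illustrative capture radius and bonus values.

```python
import numpy as np

def pursuit_reward(pursuer_pos, evader_pos, prev_distance,
                   capture_radius=10.0, capture_bonus=100.0):
    """Reward the pursuer for closing distance; bonus on interception."""
    distance = np.linalg.norm(pursuer_pos - evader_pos)
    reward = prev_distance - distance      # positive when closing in
    if distance < capture_radius:
        reward += capture_bonus            # evader intercepted
    return reward, distance
```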
arXiv Detail & Related papers (2024-07-09T14:45:47Z) - Adversarial Markov Games: On Adaptive Decision-Based Attacks and Defenses [21.759075171536388]
We show how both attacks and defenses can benefit from this game-theoretic formulation by learning from each other through interaction.
We demonstrate that active defenses, which control how the system responds, are a necessary complement to model hardening when facing decision-based attacks.
We lay out effective strategies for ensuring the robustness of ML-based systems deployed in the real world.
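One way to read the interaction idea is as alternating best-response training; the sketch below is a conceptual outline only, with `train` and `frozen` as hypothetical interfaces.

```python
def alternate_training(attacker, defender, env, generations=10):
    """Attacker and active defense adapt to each other in turn."""
    for _ in range(generations):
        attacker.train(env, opponent=defender.frozen())  # best-respond
        defender.train(env, opponent=attacker.frozen())  # then counter
    return attacker, defender
```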
arXiv Detail & Related papers (2023-12-20T21:24:52Z) - Embodied Laser Attack:Leveraging Scene Priors to Achieve Agent-based Robust Non-contact Attacks [13.726534285661717]
This paper introduces the Embodied Laser Attack (ELA), a novel framework that dynamically tailors non-contact laser attacks.
For the perception module, ELA develops a local perspective transformation network based on the intrinsic prior knowledge of traffic scenes.
For the decision and control module, ELA trains an attack agent with data-driven reinforcement learning instead of adopting time-consuming algorithms.
arXiv Detail & Related papers (2023-12-15T06:16:17Z) - Runtime Stealthy Perception Attacks against DNN-based Adaptive Cruise Control Systems [8.561553195784017]
This paper evaluates the security of deep neural network (DNN)-based ACC systems under runtime perception attacks. We present a context-aware strategy for selecting the most critical times to trigger the attacks. We evaluate the effectiveness of the proposed attack using an actual vehicle, a publicly available driving dataset, and a realistic simulation platform.
arXiv Detail & Related papers (2023-07-18T03:12:03Z) - Self-Improving Safety Performance of Reinforcement Learning Based Driving with Black-Box Verification Algorithms [0.0]
We propose a self-improving artificial intelligence system to enhance the safety performance of reinforcement learning (RL)-based autonomous driving (AD) agents.
Our approach efficiently discovers safety failures of action decisions in RL-based adaptive cruise control (ACC) applications.
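A minimal sketch of black-box falsification for an ACC agent, assuming a `simulate` oracle that returns the smallest headway observed; the scenario parameters, bounds, and safety rule are illustrative.

```python
import random

def falsify(acc_agent, simulate, trials=1000, min_headway=2.0):
    """Randomly search scenario space for safety failures."""
    failures = []
    for _ in range(trials):
        scenario = {
            "lead_speed": random.uniform(10.0, 30.0),  # m/s
            "lead_decel": random.uniform(1.0, 8.0),    # m/s^2
            "init_gap":   random.uniform(10.0, 80.0),  # m
        }
        if simulate(acc_agent, scenario) < min_headway:
            failures.append(scenario)  # counterexample for retraining
    return failures
```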
arXiv Detail & Related papers (2022-10-29T11:34:17Z) - Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
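The summary does not say exactly what the adversary MDP perturbs; a common instantiation, sketched below as an assumption, has the attacker apply bounded observation perturbations and receive the negative of the victim's reward (zero-sum).

```python
import numpy as np

class AdversaryMDP:
    """Hypothetical adversary MDP: perturb the victim's observations."""

    def __init__(self, env, victim_policy, eps=0.05):
        self.env, self.victim, self.eps = env, victim_policy, eps

    def reset(self):
        self.obs = self.env.reset()
        return self.obs

    def step(self, adv_action):
        # Bounded perturbation chosen by the learned attack policy.
        perturbed = self.obs + np.clip(adv_action, -self.eps, self.eps)
        self.obs, victim_reward, done, info = self.env.step(
            self.victim(perturbed))
        return self.obs, -victim_reward, done, info  # zero-sum reward
```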
arXiv Detail & Related papers (2021-10-18T00:50:34Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
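A hedged PyTorch sketch of the learned-optimizer idea: an LSTM maps per-step input gradients to perturbation updates inside a PGD-style loop. The sizes, the tanh head, and the projection step are assumptions, and the meta-training of the optimizer over sampled defenses is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedAttackOptimizer(nn.Module):
    """LSTM that turns loss gradients into update directions."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.cell = nn.LSTMCell(dim, hidden)
        self.head = nn.Linear(hidden, dim)

    def forward(self, grad, state=None):
        h, c = self.cell(grad, state)
        return torch.tanh(self.head(h)), (h, c)

def learned_attack(model, x, y, opt, steps=10, eps=8 / 255, lr=2 / 255):
    delta = torch.zeros_like(x, requires_grad=True)
    state = None
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            update, state = opt(grad.flatten(1), state)
            delta += lr * update.view_as(x)  # learned update direction
            delta.clamp_(-eps, eps)          # stay inside the eps-ball
    return (x + delta).detach()
```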
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - Adversarial defense for automatic speaker verification by cascaded self-supervised learning models [101.42920161993455]
Increasingly, malicious attackers attempt to launch adversarial attacks against automatic speaker verification (ASV) systems.
We propose a standard and attack-agnostic method based on cascaded self-supervised learning models to purify the adversarial perturbations.
Experimental results demonstrate that the proposed method achieves effective defense performance and can successfully counter adversarial attacks.
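A minimal sketch of the cascaded purification idea, assuming each self-supervised model exposes a `reconstruct` method that re-generates the waveform; the interface is hypothetical, not the paper's API.

```python
def purify(waveform, purifiers):
    """Pass the input through a cascade of self-supervised
    reconstruction models to wash out adversarial perturbations."""
    for model in purifiers:
        waveform = model.reconstruct(waveform)  # assumed interface
    return waveform

# Usage: score = asv.verify(enrolled, purify(test_waveform, purifiers))
```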
arXiv Detail & Related papers (2021-02-14T01:56:43Z) - Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [59.58128343334556]
We introduce a relaxation term to the standard loss that finds more suitable gradient directions, increases attack efficacy, and leads to more efficient adversarial training.
We propose Guided Adversarial Margin Attack (GAMA), which utilizes function mapping of the clean image to guide the generation of adversaries.
We also propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses.
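A hedged sketch of what a GAMA-style objective could look like, based only on this summary: a margin loss on the perturbed input plus a relaxation term that matches the clean image's softmax output, with the weight `lam` decayed over attack steps. The exact form in the paper may differ.

```python
import torch
import torch.nn.functional as F

def gama_style_loss(model, x_clean, x_adv, y, lam=10.0):
    """Objective to maximize w.r.t. x_adv; lam decays across steps."""
    p_adv = F.softmax(model(x_adv), dim=1)
    with torch.no_grad():
        p_clean = F.softmax(model(x_clean), dim=1)  # clean function mapping
    true_prob = p_adv.gather(1, y.unsqueeze(1)).squeeze(1)
    masked = p_adv.clone()
    masked.scatter_(1, y.unsqueeze(1), -1.0)        # exclude true class
    margin = masked.max(dim=1).values - true_prob   # > 0 => misclassified
    relax = ((p_adv - p_clean) ** 2).sum(dim=1)     # guiding relaxation
    return (margin + lam * relax).mean()
```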
arXiv Detail & Related papers (2020-11-30T16:39:39Z) - Boosting Adversarial Training with Hypersphere Embedding [53.75693100495097]
Adversarial training (AT) is one of the most effective defenses against adversarial attacks for deep learning models.
In this work, we advocate incorporating the hypersphere embedding mechanism into the AT procedure.
We validate our methods under a wide range of adversarial attacks on the CIFAR-10 and ImageNet datasets.
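A sketch of the hypersphere-embedding mechanism as commonly implemented (assumed form): both features and classifier weights are normalized, so logits become scaled cosine similarities, on top of which standard adversarial training proceeds.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HypersphereHead(nn.Module):
    """Classifier head with normalized features and weights."""

    def __init__(self, feat_dim, num_classes, scale=15.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale

    def forward(self, features):
        f = F.normalize(features, dim=1)     # unit-norm features
        w = F.normalize(self.weight, dim=1)  # unit-norm class weights
        return self.scale * f @ w.t()        # scaled cosine logits
```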
arXiv Detail & Related papers (2020-02-20T08:42:29Z)