Attack End-to-End Autonomous Driving through Module-Wise Noise
- URL: http://arxiv.org/abs/2409.07706v1
- Date: Thu, 12 Sep 2024 02:19:16 GMT
- Title: Attack End-to-End Autonomous Driving through Module-Wise Noise
- Authors: Lu Wang, Tianyuan Zhang, Yikai Han, Muyang Fang, Ting Jin, Jiaqi Kang,
- Abstract summary: In this paper, we conduct comprehensive adversarial security research on the modular end-to-end autonomous driving model.
We thoroughly consider the potential vulnerabilities in the model inference process and design a universal attack scheme through module-wise noise injection.
We conduct large-scale experiments on the full-stack autonomous driving model and demonstrate that our attack method outperforms previous attack methods.
- Score: 4.281151553151594
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With recent breakthroughs in deep neural networks, numerous tasks within autonomous driving have exhibited remarkable performance. However, deep learning models are susceptible to adversarial attacks, presenting significant security risks to autonomous driving systems. Presently, end-to-end architectures have emerged as the predominant solution for autonomous driving, owing to their collaborative nature across different tasks. Yet, the implications of adversarial attacks on such models remain relatively unexplored. In this paper, we conduct comprehensive adversarial security research on the modular end-to-end autonomous driving model for the first time. We thoroughly consider the potential vulnerabilities in the model inference process and design a universal attack scheme through module-wise noise injection. We conduct large-scale experiments on the full-stack autonomous driving model and demonstrate that our attack method outperforms previous attack methods. We trust that our research will offer fresh insights into ensuring the safety and reliability of autonomous driving systems.
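To make the attack idea concrete, below is a minimal PyTorch sketch of module-wise noise injection, assuming a modular end-to-end model exposed as a sequence of callable stages whose final output is a planned trajectory. The interface, noise budget, and loss are illustrative assumptions, not the paper's released code.

```python
# Minimal sketch of module-wise noise injection (illustrative names and
# interfaces; not the paper's released code). `modules` is assumed to be a
# list of nn.Module stages, e.g. [perception, prediction, planning],
# composed sequentially and already in eval mode.
import torch

def module_wise_attack(modules, x, epsilon=0.05, steps=10, lr=0.01):
    # Clean forward pass: record each module's unperturbed output.
    with torch.no_grad():
        clean, h = [], x
        for m in modules:
            h = m(h)
            clean.append(h)

    # One learnable, bounded noise tensor per inter-module feature.
    noises = [torch.zeros_like(f, requires_grad=True) for f in clean[:-1]]
    opt = torch.optim.Adam(noises, lr=lr)

    for _ in range(steps):
        h = x
        for i, m in enumerate(modules):
            h = m(h)
            if i < len(noises):
                h = h + noises[i].clamp(-epsilon, epsilon)  # inject at module boundary
        # Maximize deviation of the final (planning) output from its clean value.
        loss = -torch.nn.functional.mse_loss(h, clean[-1])
        opt.zero_grad()
        loss.backward()
        opt.step()

    return [n.detach().clamp(-epsilon, epsilon) for n in noises]
```

For the universal variant described in the abstract, the same noise tensors would be optimized over a batch of many inputs rather than a single `x`.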
Related papers
- Exploring the Causality of End-to-End Autonomous Driving [57.631400236930375]
We propose a comprehensive approach to explore and analyze the causality of end-to-end autonomous driving.
Our work is the first to unveil the mystery of end-to-end autonomous driving and turn the black box into a white one.
arXiv Detail & Related papers (2024-07-09T04:56:11Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- Dynamic Adversarial Attacks on Autonomous Driving Systems [16.657485186920102]
This paper introduces an attacking mechanism to challenge the resilience of autonomous driving systems.
We manipulate the decision-making processes of an autonomous vehicle by dynamically displaying adversarial patches on a screen mounted on another moving vehicle.
Our experiments demonstrate the first successful implementation of such dynamic adversarial attacks in real-world autonomous driving scenarios.
arXiv Detail & Related papers (2023-12-10T04:14:56Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Model-Based Reinforcement Learning with Isolated Imaginations [61.67183143982074]
We propose Iso-Dream++, a model-based reinforcement learning approach.
We perform policy optimization based on the decoupled latent imaginations.
This enables long-horizon visuomotor control tasks to benefit from isolating mixed dynamics sources in the wild.
arXiv Detail & Related papers (2023-03-27T02:55:56Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner that trains a neural network to predict acceleration and steering angle.
To deploy the system on board a real self-driving car, we also develop a module implemented as a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Isolating and Leveraging Controllable and Noncontrollable Visual Dynamics in World Models [65.97707691164558]
We present Iso-Dream, which improves the Dream-to-Control framework in two aspects.
First, by optimizing inverse dynamics, we encourage the world model to learn controllable and noncontrollable sources.
Second, we optimize the behavior of the agent on the decoupled latent imaginations of the world model.
arXiv Detail & Related papers (2022-05-27T08:07:39Z)
- Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles [21.894836150974093]
In recent years, many deep learning models have been adopted in autonomous driving.
Recent studies have demonstrated that adversarial attacks can cause a significant decline in detection precision of deep learning-based 3D object detection models.
arXiv Detail & Related papers (2021-08-06T04:52:09Z)
- Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and Defenses [13.161104978510943]
This survey provides a thorough analysis of different attacks that may jeopardize autonomous driving systems.
It covers adversarial attacks for various deep learning models and attacks in both physical and cyber context.
Some promising research directions are suggested in order to improve deep learning-based autonomous driving safety.
arXiv Detail & Related papers (2021-04-05T06:31:47Z)
- End-to-end Uncertainty-based Mitigation of Adversarial Attacks to Automated Lane Centering [12.11406399284803]
We propose an end-to-end approach that addresses the impact of adversarial attacks throughout perception, planning, and control modules.
Our approach can effectively mitigate the impact of adversarial attacks, achieving a 55% to 90% improvement over the original OpenPilot; a minimal sketch of the uncertainty-gating idea follows this list.
arXiv Detail & Related papers (2021-02-27T22:36:32Z)
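To illustrate the uncertainty-gating idea behind the lane-centering entry above, here is a minimal sketch assuming an MC-dropout steering model; the estimator, threshold, and fallback rule are illustrative stand-ins, not the paper's actual OpenPilot pipeline.

```python
# Hypothetical sketch of uncertainty-gated mitigation for lane centering.
# Assumes `model` maps a camera frame to a steering command and contains
# dropout layers, so repeated stochastic passes approximate uncertainty.
import torch

@torch.no_grad()
def mitigated_steering(model, frame, prev_cmd, n_samples=8, var_threshold=0.02):
    model.train()  # keep dropout active for Monte Carlo sampling
    preds = torch.stack([model(frame) for _ in range(n_samples)])
    mean, var = preds.mean(dim=0), preds.var(dim=0)
    model.eval()
    # Gate on uncertainty: an unusually high variance may indicate an
    # adversarial or out-of-distribution frame, so fall back to the last
    # trusted command instead of the new prediction.
    if var.max().item() > var_threshold:
        return prev_cmd
    return mean
```

The design choice is deliberately conservative: when predictive variance spikes, the controller trusts its recent history over the current, possibly attacked, frame.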