Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and
Defenses
- URL: http://arxiv.org/abs/2104.01789v1
- Date: Mon, 5 Apr 2021 06:31:47 GMT
- Title: Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and
Defenses
- Authors: Yao Deng, Tiehua Zhang, Guannan Lou, Xi Zheng, Jiong Jin, Qing-Long
Han
- Abstract summary: This survey provides a thorough analysis of different attacks that may jeopardize autonomous driving systems.
It covers adversarial attacks for various deep learning models and attacks in both physical and cyber context.
Some promising research directions are suggested in order to improve deep learning-based autonomous driving safety.
- Score: 13.161104978510943
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The rapid development of artificial intelligence, especially deep learning
technology, has advanced autonomous driving systems (ADSs) by providing precise
control decisions that can handle almost any driving event, spanning from
anti-fatigue safe driving to intelligent route planning. However, ADSs are
still plagued by increasing threats from different attacks, which can be
categorized into physical attacks, cyberattacks, and learning-based adversarial
attacks. Inevitably, these attacks severely challenge the safety and security
of deep learning-based autonomous driving, and the corresponding
countermeasures should be analyzed and studied comprehensively to mitigate all
potential risks. This survey provides a thorough analysis of the different attacks
that may jeopardize ADSs, as well as the corresponding state-of-the-art defense
mechanisms. The analysis unfolds through an in-depth overview of each
step in the ADS workflow, covering adversarial attacks on various deep
learning models and attacks in both physical and cyber contexts. Furthermore,
some promising research directions are suggested to improve deep
learning-based autonomous driving safety, including model robustness training,
model testing and verification, and anomaly detection based on cloud/edge
servers.
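The learning-based adversarial attacks this survey covers are typified by the Fast Gradient Sign Method (FGSM). The sketch below (plain NumPy, with made-up weights and inputs; it is an illustration of the general technique, not code from the survey) shows how a single sign-of-gradient perturbation can flip a toy classifier's decision:

```python
# Minimal FGSM sketch on a toy logistic-regression classifier.
# Weights, bias, and the input are hypothetical values for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Shift x by eps in the direction that increases the loss.

    For a logistic model p = sigmoid(w.x + b) with binary cross-entropy
    loss, the gradient of the loss w.r.t. the input x is (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.5, -2.0, 0.5])   # hypothetical trained weights
b = 0.1
x = np.array([0.2, -0.1, 0.4])   # a "clean" input
y = 1.0                          # its true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.3)
p_clean = sigmoid(w @ x + b)     # model confidence on the clean input
p_adv = sigmoid(w @ x_adv + b)   # confidence after the perturbation
```

Here a perturbation bounded by eps = 0.3 per feature drops the model's confidence in the true class below the 0.5 decision threshold; the same principle, applied to image pixels, underlies the attacks on perception models discussed in the survey.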
Related papers
- Black-Box Adversarial Attack on Vision Language Models for Autonomous Driving [65.61999354218628]
We take the first step toward designing black-box adversarial attacks specifically targeting vision-language models (VLMs) in autonomous driving systems.
We propose Cascading Adversarial Disruption (CAD), which targets low-level reasoning breakdown by generating and injecting semantics.
We present Risky Scene Induction, which addresses dynamic adaptation by leveraging a surrogate VLM to understand and construct high-level risky scenarios.
arXiv Detail & Related papers (2025-01-23T11:10:02Z)
- Is Your Autonomous Vehicle Safe? Understanding the Threat of Electromagnetic Signal Injection Attacks on Traffic Scene Perception [3.8225514249914734]
Electromagnetic Signal Injection Attacks (ESIA) can distort the images captured by autonomous vehicles.
Our research analyzes the performance of different models under ESIA, revealing their vulnerabilities to the attacks.
Our research provides a comprehensive simulation and evaluation framework, aiming to enhance the development of more robust AI models.
arXiv Detail & Related papers (2025-01-09T13:44:42Z)
- A Comprehensive Review of Adversarial Attacks on Machine Learning [0.5104264623877593]
This research provides a comprehensive overview of adversarial attacks on AI and ML models, exploring various attack types, techniques, and their potential harms.
To gain practical insights, we employ the Adversarial Robustness Toolbox (ART) library to simulate these attacks on real-world use cases, such as self-driving cars.
arXiv Detail & Related papers (2024-12-16T02:27:54Z)
- A Survey on Adversarial Robustness of LiDAR-based Machine Learning Perception in Autonomous Vehicles [0.0]
This survey focuses on the intersection of Adversarial Machine Learning (AML) and autonomous systems.
We comprehensively explore the threat landscape, encompassing cyber-attacks on sensors and adversarial perturbations.
This paper endeavors to present a concise overview of the challenges and advances in securing autonomous driving systems against adversarial threats.
arXiv Detail & Related papers (2024-11-21T01:26:52Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- CANEDERLI: On The Impact of Adversarial Training and Transferability on CAN Intrusion Detection Systems [17.351539765989433]
A growing integration of vehicles with external networks has led to a surge in attacks targeting their Controller Area Network (CAN) internal bus.
As a countermeasure, various Intrusion Detection Systems (IDSs) have been suggested in the literature to prevent and mitigate these threats.
Most of these systems rely on data-driven approaches such as Machine Learning (ML) and Deep Learning (DL) models.
In this paper, we present CANEDERLI, a novel framework for securing CAN-based IDSs.
arXiv Detail & Related papers (2024-04-06T14:54:11Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles [21.894836150974093]
In recent years, many deep learning models have been adopted in autonomous driving.
Recent studies have demonstrated that adversarial attacks can cause a significant decline in detection precision of deep learning-based 3D object detection models.
arXiv Detail & Related papers (2021-08-06T04:52:09Z)
- Adversarial defense for automatic speaker verification by cascaded self-supervised learning models [101.42920161993455]
More and more malicious attackers attempt to launch adversarial attacks at automatic speaker verification (ASV) systems.
We propose a standard and attack-agnostic method based on cascaded self-supervised learning models to purify the adversarial perturbations.
Experimental results demonstrate that the proposed method achieves effective defense performance and can successfully counter adversarial attacks.
arXiv Detail & Related papers (2021-02-14T01:56:43Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
- Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning [91.13113161754022]
We introduce timing-based adversarial strategies against a DRL-based navigation system by jamming in physical noise patterns on the selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.