Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and
Defenses
- URL: http://arxiv.org/abs/2104.01789v1
- Date: Mon, 5 Apr 2021 06:31:47 GMT
- Title: Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and
Defenses
- Authors: Yao Deng, Tiehua Zhang, Guannan Lou, Xi Zheng, Jiong Jin, Qing-Long
Han
- Abstract summary: This survey provides a thorough analysis of different attacks that may jeopardize autonomous driving systems.
It covers adversarial attacks for various deep learning models and attacks in both physical and cyber contexts.
Some promising research directions are suggested in order to improve deep learning-based autonomous driving safety.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The rapid development of artificial intelligence, especially deep learning
technology, has advanced autonomous driving systems (ADSs) by providing precise
control decisions to counter almost any driving event, spanning from
anti-fatigue safe driving to intelligent route planning. However, ADSs are
still plagued by increasing threats from different attacks, which could be
categorized into physical attacks, cyberattacks and learning-based adversarial
attacks. Inevitably, the safety and security of deep learning-based autonomous
driving are severely challenged by these attacks, against which
countermeasures should be analyzed and studied comprehensively to mitigate all
potential risks. This survey provides a thorough analysis of different attacks
that may jeopardize ADSs, as well as the corresponding state-of-the-art defense
mechanisms. The analysis is unrolled by taking an in-depth overview of each
step in the ADS workflow, covering adversarial attacks for various deep
learning models and attacks in both physical and cyber contexts. Furthermore,
some promising research directions are suggested in order to improve deep
learning-based autonomous driving safety, including model robustness training,
model testing and verification, and anomaly detection based on cloud/edge
servers.
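The learning-based adversarial attacks the survey covers typically add small, deliberately crafted perturbations to model inputs. As an illustration only (not a method from the survey itself), here is a minimal sketch of the canonical fast gradient sign method (FGSM) applied to a toy linear classifier; the weights and input are made up for the example:

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM: step the input in the sign of the loss gradient, bounded by eps."""
    return x + eps * np.sign(grad)

# Toy linear "perception model": score = w . x; predict class 1 if score > 0.
w = np.array([0.5, -1.0, 2.0])       # illustrative weights
x = np.array([1.0, 1.0, 0.2])        # clean input; w @ x = -0.1 -> class 0

# For a linear score, the gradient of the score w.r.t. the input is w itself,
# so an attacker wanting to flip class 0 -> 1 steps along sign(w).
x_adv = fgsm_perturb(x, w, eps=0.2)

print(float(w @ x))                  # clean score (negative)
print(float(w @ x_adv))              # adversarial score: pushed across the boundary
```

The same one-step idea, applied to the gradient of a deep network's loss, is what makes imperceptible pixel-level perturbations effective against camera-based perception models.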
Related papers
- A Survey on Adversarial Robustness of LiDAR-based Machine Learning Perception in Autonomous Vehicles
This survey focuses on the intersection of Adversarial Machine Learning (AML) and autonomous systems.
We comprehensively explore the threat landscape, encompassing cyber-attacks on sensors and adversarial perturbations.
This paper endeavors to present a concise overview of the challenges and advances in securing autonomous driving systems against adversarial threats.
arXiv Detail & Related papers (2024-11-21T01:26:52Z) - Navigating Threats: A Survey of Physical Adversarial Attacks on LiDAR Perception Systems in Autonomous Vehicles
LiDAR systems are vulnerable to adversarial attacks, which pose significant challenges to the safety and robustness of autonomous vehicles.
This survey presents a review of the current research landscape on physical adversarial attacks targeting LiDAR-based perception systems.
We identify critical challenges and highlight gaps in existing attacks for LiDAR-based systems.
arXiv Detail & Related papers (2024-09-30T15:50:36Z) - Attack End-to-End Autonomous Driving through Module-Wise Noise [4.281151553151594]
In this paper, we conduct comprehensive adversarial security research on the modular end-to-end autonomous driving model.
We thoroughly consider the potential vulnerabilities in the model inference process and design a universal attack scheme through module-wise noise injection.
We conduct large-scale experiments on the full-stack autonomous driving model and demonstrate that our attack method outperforms previous attack methods.
arXiv Detail & Related papers (2024-09-12T02:19:16Z) - Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers?
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z) - CANEDERLI: On The Impact of Adversarial Training and Transferability on CAN Intrusion Detection Systems
The growing integration of vehicles with external networks has led to a surge in attacks targeting their internal Controller Area Network (CAN) bus.
As a countermeasure, various Intrusion Detection Systems (IDSs) have been suggested in the literature to prevent and mitigate these threats.
Most of these systems rely on data-driven approaches such as Machine Learning (ML) and Deep Learning (DL) models.
In this paper, we present CANEDERLI, a novel framework for securing CAN-based IDSs.
arXiv Detail & Related papers (2024-04-06T14:54:11Z) - Automating Privilege Escalation with Deep Reinforcement Learning
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent can be used to generate realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z) - Evaluating Adversarial Attacks on Driving Safety in Vision-Based
Autonomous Vehicles
In recent years, many deep learning models have been adopted in autonomous driving.
Recent studies have demonstrated that adversarial attacks can cause a significant decline in detection precision of deep learning-based 3D object detection models.
arXiv Detail & Related papers (2021-08-06T04:52:09Z) - The Feasibility and Inevitability of Stealth Attacks
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Adversarial defense for automatic speaker verification by cascaded
self-supervised learning models
Increasingly, malicious attackers attempt to launch adversarial attacks against automatic speaker verification (ASV) systems.
We propose a standard and attack-agnostic method based on cascaded self-supervised learning models to purify the adversarial perturbations.
Experimental results demonstrate that the proposed method achieves effective defense performance and can successfully counter adversarial attacks.
arXiv Detail & Related papers (2021-02-14T01:56:43Z) - Adversarial Machine Learning Attacks and Defense Methods in the Cyber
Security Domain
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z) - Enhanced Adversarial Strategically-Timed Attacks against Deep
Reinforcement Learning
We introduce timing-based adversarial strategies against a DRL-based navigation system by injecting physical noise patterns at selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
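Several of the entries above (e.g. CANEDERLI) concern data-driven intrusion detection on the CAN bus. As a hypothetical illustration of the general idea — not the method of any listed paper — a minimal detector can profile the inter-arrival times of a periodic CAN message ID and flag frames whose timing deviates sharply, since injection floods compress the gaps between frames; all timestamps below are made up for the example:

```python
import statistics

def fit_interval_profile(timestamps):
    """Learn the mean/stdev of inter-arrival times for one CAN message ID."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.mean(gaps), statistics.stdev(gaps)

def is_anomalous(gap, mean, stdev, k=3.0):
    """Flag a gap more than k standard deviations from the learned mean
    (message-injection floods shrink inter-arrival gaps sharply)."""
    return abs(gap - mean) > k * stdev

# Train on benign traffic for a periodic message (~10 ms cycle with jitter).
benign = [0.0, 0.010, 0.021, 0.030, 0.041, 0.050, 0.061, 0.070]
mean, stdev = fit_interval_profile(benign)

print(is_anomalous(0.010, mean, stdev))   # normal gap
print(is_anomalous(0.001, mean, stdev))   # injected frame
```

Real CAN IDSs in the surveyed literature replace this threshold rule with ML/DL models over richer features, which is precisely what exposes them to the adversarial-training and transferability issues CANEDERLI studies.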
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.