Evaluating Adversarial Attacks on Driving Safety in Vision-Based
Autonomous Vehicles
- URL: http://arxiv.org/abs/2108.02940v1
- Date: Fri, 6 Aug 2021 04:52:09 GMT
- Title: Evaluating Adversarial Attacks on Driving Safety in Vision-Based
Autonomous Vehicles
- Authors: Jindi Zhang, Yang Lou, Jianping Wang, Kui Wu, Kejie Lu, Xiaohua Jia
- Abstract summary: In recent years, many deep learning models have been adopted in autonomous driving.
Recent studies have demonstrated that adversarial attacks can cause a significant decline in detection precision of deep learning-based 3D object detection models.
- Score: 21.894836150974093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, many deep learning models have been adopted in autonomous
driving. At the same time, these models introduce new vulnerabilities that may
compromise the safety of autonomous vehicles. Specifically, recent studies have
demonstrated that adversarial attacks can cause a significant decline in
detection precision of deep learning-based 3D object detection models. Although
driving safety is the ultimate concern for autonomous driving, there is no
comprehensive study on the linkage between the performance of deep learning
models and the driving safety of autonomous vehicles under adversarial attacks.
In this paper, we investigate the impact of two primary types of adversarial
attacks, perturbation attacks and patch attacks, on the driving safety of
vision-based autonomous vehicles rather than the detection precision of deep
learning models. In particular, we consider two state-of-the-art models in
vision-based 3D object detection, Stereo R-CNN and DSGN. To evaluate driving
safety, we propose an end-to-end evaluation framework with a set of driving
safety performance metrics. By analyzing the results of our extensive
evaluation experiments, we find that (1) the attack's impact on the driving
safety of autonomous vehicles and the attack's impact on the precision of 3D
object detectors are decoupled, and (2) the DSGN model demonstrates stronger
robustness to adversarial attacks than the Stereo R-CNN model. In addition, we
further investigate the causes behind the two findings with an ablation study.
The findings of this paper provide a new perspective to evaluate adversarial
attacks and guide the selection of deep learning models in autonomous driving.
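
The abstract distinguishes perturbation attacks (small, image-wide noise) from patch attacks (a localized, visible pattern). As a rough illustration of the former, the sketch below applies a single-step FGSM-style perturbation to an input image. This is only a minimal sketch: the paper's actual attack formulations, loss functions, and stereo 3D detectors (Stereo R-CNN, DSGN) are not reproduced here, and `model`, `loss_fn`, and `targets` are hypothetical placeholders.

```python
import torch

def fgsm_perturbation(model, images, targets, loss_fn, epsilon=0.01):
    """Single-step FGSM-style perturbation (illustrative sketch).

    Nudges every pixel in the direction that increases the detector's
    loss, bounded by an L-infinity budget `epsilon`.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model(images), targets)  # placeholder detector and loss
    loss.backward()
    # The gradient sign gives the per-pixel worst-case direction.
    perturbed = images + epsilon * images.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

A patch attack, by contrast, would iteratively optimize only a small, localized region of the image (or a printable pattern placed in the scene) rather than bounding a global, near-imperceptible change.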
Related papers
- Attack End-to-End Autonomous Driving through Module-Wise Noise [4.281151553151594]
In this paper, we conduct comprehensive adversarial security research on the modular end-to-end autonomous driving model.
We thoroughly consider the potential vulnerabilities in the model inference process and design a universal attack scheme through module-wise noise injection.
We conduct large-scale experiments on the full-stack autonomous driving model and demonstrate that our attack method outperforms previous attack methods.
arXiv Detail & Related papers (2024-09-12T02:19:16Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- DRUformer: Enhancing the driving scene Important object detection with driving relationship self-understanding [50.81809690183755]
Traffic accidents frequently lead to fatal injuries, contributing to more than 50 million deaths by 2023.
Previous research primarily assessed the importance of individual participants, treating them as independent entities.
We introduce Driving scene Relationship self-Understanding transformer (DRUformer) to enhance the important object detection task.
arXiv Detail & Related papers (2023-11-11T07:26:47Z)
- A Survey of Robustness and Safety of 2D and 3D Deep Learning Models Against Adversarial Attacks [22.054275309336]
Deep learning models are not trustworthy enough because of their limited robustness against adversarial attacks.
We first construct a general threat model from different perspectives and then comprehensively review the latest progress of both 2D and 3D adversarial attacks.
We are the first to systematically investigate adversarial attacks on 3D models, a flourishing field with many real-world applications.
arXiv Detail & Related papers (2023-10-01T10:16:33Z)
- ReMAV: Reward Modeling of Autonomous Vehicles for Finding Likely Failure Events [1.84926694477846]
We propose a black-box testing framework that first uses offline trajectories to analyze the existing behavior of autonomous vehicles.
Our experiments show increases of 35%, 23%, 48%, and 50% in the occurrences of vehicle collision, road object collision, pedestrian collision, and offroad steering events, respectively.
arXiv Detail & Related papers (2023-08-28T13:09:00Z)
- AdvDO: Realistic Adversarial Attacks for Trajectory Prediction [87.96767885419423]
Trajectory prediction is essential for autonomous vehicles to plan correct and safe driving behaviors.
We devise an optimization-based adversarial attack framework to generate realistic adversarial trajectories.
Our attack can lead an AV to drive off the road or collide with other vehicles in simulation.
arXiv Detail & Related papers (2022-09-19T03:34:59Z)
- Targeted Attack on Deep RL-based Autonomous Driving with Learned Visual Patterns [18.694795507945603]
Recent studies demonstrated the vulnerability of control policies learned through deep reinforcement learning against adversarial attacks.
This paper investigates the feasibility of targeted attacks through visually learned patterns placed on physical objects in the environment.
arXiv Detail & Related papers (2021-09-16T04:59:06Z)
- Safety-aware Motion Prediction with Unseen Vehicles for Autonomous Driving [104.32241082170044]
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim to predict an occupancy map.
Our approach is the first one that can predict the existence of unseen vehicles in most cases.
arXiv Detail & Related papers (2021-09-03T13:33:33Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and Defenses [13.161104978510943]
This survey provides a thorough analysis of different attacks that may jeopardize autonomous driving systems.
It covers adversarial attacks on various deep learning models and attacks in both physical and cyber contexts.
Some promising research directions are suggested in order to improve deep learning-based autonomous driving safety.
arXiv Detail & Related papers (2021-04-05T06:31:47Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle can hide the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer to safer self-driving under unseen conditions with limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)