Dirty Road Can Attack: Security of Deep Learning based Automated Lane
Centering under Physical-World Attack
- URL: http://arxiv.org/abs/2009.06701v2
- Date: Sun, 13 Jun 2021 22:38:38 GMT
- Title: Dirty Road Can Attack: Security of Deep Learning based Automated Lane
Centering under Physical-World Attack
- Authors: Takami Sato, Junjie Shen, Ningfei Wang, Yunhan Jack Jia, Xue Lin, Qi
Alfred Chen
- Abstract summary: We study the security of state-of-the-art deep learning based ALC systems under physical-world adversarial attacks.
We formulate the problem with a safety-critical attack goal, and a novel and domain-specific attack vector: dirty road patches.
We evaluate our attack on a production ALC using 80 scenarios from real-world driving traces.
- Score: 38.3805893581568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated Lane Centering (ALC) systems are convenient and widely deployed
today, but also highly security and safety critical. In this work, we are the
first to systematically study the security of state-of-the-art deep learning
based ALC systems in their designed operational domains under physical-world
adversarial attacks. We formulate the problem with a safety-critical attack
goal, and a novel and domain-specific attack vector: dirty road patches. To
systematically generate the attack, we adopt an optimization-based approach and
overcome domain-specific design challenges such as camera frame
inter-dependencies due to attack-influenced vehicle control, and the lack of
objective function design for lane detection models.
We evaluate our attack on a production ALC using 80 scenarios from real-world
driving traces. The results show that our attack is highly effective with over
97.5% success rates and less than 0.903 sec average success time, which is
substantially lower than the average driver reaction time. This attack is also
found (1) robust to various real-world factors such as lighting conditions and
view angles, (2) general to different model designs, and (3) stealthy from the
driver's view. To understand the safety impacts, we conduct experiments using
software-in-the-loop simulation and attack trace injection in a real vehicle.
The results show that our attack can cause a 100% collision rate in different
scenarios, including when tested with common safety features such as automatic
emergency braking. We also evaluate and discuss defenses.
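To make the optimization-based approach described above more concrete, below is a minimal sketch in PyTorch of how a dirty road patch could be optimized against a lane detection model while threading attack-influenced vehicle control through consecutive camera frames. This is not the authors' released pipeline; DummyLaneNet, render_patch, and simulate_control are hypothetical toy stand-ins for the production ALC components evaluated in the paper.
```python
# Minimal sketch (assumption-laden): optimization-based dirty road patch generation.
# DummyLaneNet, render_patch, and simulate_control are toy stand-ins, not the paper's code.
import torch
import torch.nn as nn

class DummyLaneNet(nn.Module):
    """Stand-in for a DNN lane detection model; outputs a predicted lateral offset."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

    def forward(self, frame):
        return self.net(frame)  # lateral offset the controller would steer toward

def render_patch(frame, patch, vehicle_state):
    """Blend the patch into a fixed road region of the frame.
    A real attack would apply a pose-dependent perspective transform;
    this toy version ignores vehicle_state."""
    h, w = patch.shape[-2:]
    out = frame.clone()
    out[..., -h:, :w] = 0.5 * out[..., -h:, :w] + 0.5 * patch
    return out

def simulate_control(vehicle_state, steering):
    """Toy lateral dynamics: attack-influenced steering shifts the vehicle state
    that shapes the next camera frame (the frame inter-dependency challenge)."""
    return vehicle_state + 0.05 * steering.mean()

lane_net = DummyLaneNet()
frames = [torch.rand(1, 3, 96, 160) for _ in range(10)]  # placeholder driving trace
patch = torch.rand(1, 3, 32, 64, requires_grad=True)     # dirty road patch texture
opt = torch.optim.Adam([patch], lr=1e-2)

for step in range(100):
    opt.zero_grad()
    vehicle_state = torch.zeros(1)
    loss = 0.0
    for frame in frames:  # frames are coupled through the (attacked) control loop
        attacked = render_patch(frame, patch.clamp(0, 1), vehicle_state)
        offset = lane_net(attacked)
        loss = loss - offset.mean()  # surrogate objective: push the lane estimate to one side
        vehicle_state = simulate_control(vehicle_state, offset.detach())
    loss.backward()
    opt.step()
```
In the actual attack, the objective is designed specifically for lane detection outputs (the abstract notes the lack of a natural objective function for such models), and physical-world constraints such as printability and stealthiness would also enter the optimization; those details are omitted from this sketch.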
Related papers
- Mitigation of Camouflaged Adversarial Attacks in Autonomous Vehicles--A Case Study Using CARLA Simulator [3.1006820631993515]
We develop camera-camouflaged adversarial attacks targeting traffic sign recognition in AVs.
The results show that such an attack can delay the auto-braking response to the stop sign, resulting in potential safety issues.
The proposed attack and defense methods are applicable to other end-to-end trained autonomous cyber-physical systems.
arXiv Detail & Related papers (2025-02-03T17:30:43Z)
- Black-Box Adversarial Attack on Vision Language Models for Autonomous Driving [65.61999354218628]
We take the first step toward designing black-box adversarial attacks specifically targeting vision-language models (VLMs) in autonomous driving systems.
We propose Cascading Adversarial Disruption (CAD), which targets low-level reasoning breakdown by generating and injecting deceptive semantics.
We present Risky Scene Induction, which addresses dynamic adaptation by leveraging a surrogate VLM to understand and construct high-level risky scenarios.
arXiv Detail & Related papers (2025-01-23T11:10:02Z)
- Less is More: A Stealthy and Efficient Adversarial Attack Method for DRL-based Autonomous Driving Policies [2.9965913883475137]
We present a stealthy and efficient adversarial attack method for DRL-based autonomous driving policies.
We train the adversary to learn the optimal policy for attacking at critical moments without domain knowledge.
Our method achieves more than 90% collision rate within three attacks in most cases.
arXiv Detail & Related papers (2024-12-04T06:11:09Z)
- Runtime Stealthy Perception Attacks against DNN-based Adaptive Cruise Control Systems [8.561553195784017]
This paper evaluates the security of deep neural network based ACC systems under runtime perception attacks.
We present a context-aware strategy for the selection of the most critical times for triggering the attacks.
We evaluate the effectiveness of the proposed attack using an actual vehicle, a publicly available driving dataset, and a realistic simulation platform.
arXiv Detail & Related papers (2023-07-18T03:12:03Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable, adversarial 3D-printed object that misleads an AD system into failing to detect it and thus crashing into it.
Our results show that the attack achieves over 90% success rate across different object types and MSF algorithms.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- Targeted Physical-World Attention Attack on Deep Learning Models in Road Sign Recognition [79.50450766097686]
This paper proposes the targeted attention attack (TAA) method for real-world road sign attacks.
Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method.
arXiv Detail & Related papers (2020-10-09T02:31:34Z)
- Stop-and-Go: Exploring Backdoor Attacks on Deep Reinforcement Learning-based Traffic Congestion Control Systems [16.01681914880077]
We explore the backdooring/trojanning of DRL-based AV controllers.
Malicious actions include vehicle deceleration and acceleration to cause stop-and-go traffic waves to emerge.
Experiments show that the backdoored model does not compromise normal operation performance.
arXiv Detail & Related papers (2020-03-17T08:20:43Z)
- Security of Deep Learning based Lane Keeping System under Physical-World Adversarial Attack [38.3805893581568]
Lane-Keeping Assistance System (LKAS) is convenient and widely available today, but also extremely security and safety critical.
In this work, we design and implement the first systematic approach to attack real-world DNN-based LKASes.
We identify dirty road patches as a novel and domain-specific threat model for practicality and stealthiness.
arXiv Detail & Related papers (2020-03-03T20:35:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.