Dirty Road Can Attack: Security of Deep Learning based Automated Lane
Centering under Physical-World Attack
- URL: http://arxiv.org/abs/2009.06701v2
- Date: Sun, 13 Jun 2021 22:38:38 GMT
- Title: Dirty Road Can Attack: Security of Deep Learning based Automated Lane
Centering under Physical-World Attack
- Authors: Takami Sato, Junjie Shen, Ningfei Wang, Yunhan Jack Jia, Xue Lin, Qi
Alfred Chen
- Abstract summary: We study the security of state-of-the-art deep learning based ALC systems under physical-world adversarial attacks.
We formulate the problem with a safety-critical attack goal, and a novel and domain-specific attack vector: dirty road patches.
We evaluate our attack on a production ALC using 80 scenarios from real-world driving traces.
- Score: 38.3805893581568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated Lane Centering (ALC) systems are convenient and widely deployed
today, but also highly security- and safety-critical. In this work, we are the
first to systematically study the security of state-of-the-art deep learning
based ALC systems in their designed operational domains under physical-world
adversarial attacks. We formulate the problem with a safety-critical attack
goal, and a novel and domain-specific attack vector: dirty road patches. To
systematically generate the attack, we adopt an optimization-based approach and
overcome domain-specific design challenges such as camera frame
inter-dependencies due to attack-influenced vehicle control, and the lack of
objective function design for lane detection models.
We evaluate our attack on a production ALC using 80 scenarios from real-world
driving traces. The results show that our attack is highly effective with over
97.5% success rates and less than 0.903 sec average success time, which is
substantially lower than the average driver reaction time. This attack is also
found (1) robust to various real-world factors such as lighting conditions and
view angles, (2) general to different model designs, and (3) stealthy from the
driver's view. To understand the safety impacts, we conduct experiments using
software-in-the-loop simulation and attack trace injection in a real vehicle.
The results show that our attack can cause a 100% collision rate in different
scenarios, including when tested with common safety features such as automatic
emergency braking. We also evaluate and discuss defenses.
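To make the optimization-based formulation more concrete, below is a minimal, illustrative sketch of how such a dirty road patch could be generated against a lane-detection model. It is not the paper's released code: `lane_model` (a differentiable lane detector that outputs a predicted lateral lane-center offset) and `apply_patch` (a differentiable renderer that composites the patch into a camera frame for a given vehicle pose) are hypothetical placeholders, and the sketch optimizes over a fixed set of recorded frames and poses, so it does not model the camera-frame inter-dependencies caused by attack-influenced vehicle control that the paper explicitly addresses.

```python
# Minimal sketch (under the assumptions stated above) of an
# optimization-based dirty road patch attack on a lane-detection model.
import torch

def optimize_patch(frames, poses, lane_model, apply_patch,
                   patch_shape=(3, 64, 256), steps=200, lr=0.01,
                   stealth_weight=0.1, target_offset=1.0):
    """Optimize a road-patch perturbation that drags the predicted lane
    center toward `target_offset` meters so the controller steers off-lane."""
    patch = torch.zeros(patch_shape, requires_grad=True)  # dirt-like perturbation
    opt = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        attack_loss = 0.0
        for frame, pose in zip(frames, poses):
            # Composite the patch into the frame as seen from this vehicle pose.
            attacked = apply_patch(frame, patch, pose)
            # Predicted lateral position of the lane center (meters).
            pred_center = lane_model(attacked)
            attack_loss = attack_loss + (pred_center - target_offset).pow(2).mean()
        # Penalize conspicuous perturbations to keep the patch subtle.
        loss = attack_loss + stealth_weight * patch.abs().mean()

        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(-0.3, 0.3)  # keep within a printable, road-like range
    return patch.detach()
```

The stealthiness term and clamping stand in for the paper's broader requirement that the patch look like ordinary road dirt from the driver's view; a full implementation would also replay the attacked frames through the vehicle-control loop to capture how steering changes the camera viewpoint over time.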
Related papers
- Discovering New Shadow Patterns for Black-Box Attacks on Lane Detection of Autonomous Vehicles [2.5539742994571037]
This paper introduces a novel approach to generating physical-world adversarial examples (AEs) using negative shadows: deceptive patterns of light on the road, created by strategically blocking sunlight, that cast artificial lane-like patterns.
A 20-meter negative shadow can direct a vehicle off-road with a 100% violation rate at speeds over 10 mph.
Other attack scenarios, such as causing collisions, can be performed with at least 30 meters of negative shadow, achieving a 60-100% success rate.
arXiv Detail & Related papers (2024-09-26T19:43:52Z)
- Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models [53.701148276912406]
Vision-Large-Language Models (VLMs) have great application prospects in autonomous driving.
BadVLMDriver is the first backdoor attack against VLMs for autonomous driving that can be launched in practice using physical objects.
BadVLMDriver achieves a 92% attack success rate in inducing a sudden acceleration when coming across a pedestrian holding a red balloon.
arXiv Detail & Related papers (2024-04-19T14:40:38Z)
- Runtime Stealthy Perception Attacks against DNN-based Adaptive Cruise Control Systems [8.561553195784017]
This paper evaluates the security of the deep neural network based ACC systems under runtime perception attacks.
We present a context-aware strategy for the selection of the most critical times for triggering the attacks.
We evaluate the effectiveness of the proposed attack using an actual vehicle, a publicly available driving dataset, and a realistic simulation platform.
arXiv Detail & Related papers (2023-07-18T03:12:03Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable, adversarial 3D-printed object that misleads an AD system to fail in detecting it and thus crash into it.
Our results show that the attack achieves over 90% success rate across different object types and MSF.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- End-to-end Uncertainty-based Mitigation of Adversarial Attacks to Automated Lane Centering [12.11406399284803]
We propose an end-to-end approach that addresses the impact of adversarial attacks throughout perception, planning, and control modules.
Our approach can effectively mitigate the impact of adversarial attacks and can achieve 55% to 90% improvement over the original OpenPilot.
arXiv Detail & Related papers (2021-02-27T22:36:32Z)
- Targeted Physical-World Attention Attack on Deep Learning Models in Road Sign Recognition [79.50450766097686]
This paper proposes the targeted attention attack (TAA) method for real world road sign attack.
Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method.
arXiv Detail & Related papers (2020-10-09T02:31:34Z)
- Stop-and-Go: Exploring Backdoor Attacks on Deep Reinforcement Learning-based Traffic Congestion Control Systems [16.01681914880077]
We explore the backdooring/trojanning of DRL-based AV controllers.
Malicious actions include vehicle deceleration and acceleration to cause stop-and-go traffic waves to emerge.
Experiments show that the backdoored model does not compromise normal operation performance.
arXiv Detail & Related papers (2020-03-17T08:20:43Z)
- Security of Deep Learning based Lane Keeping System under Physical-World Adversarial Attack [38.3805893581568]
Lane-Keeping Assistance System (LKAS) is convenient and widely available today, but also extremely security- and safety-critical.
In this work, we design and implement the first systematic approach to attack real-world DNN-based LKASes.
We identify dirty road patches as a novel and domain-specific threat model for practicality and stealthiness.
arXiv Detail & Related papers (2020-03-03T20:35:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.