Discovering New Shadow Patterns for Black-Box Attacks on Lane Detection of Autonomous Vehicles
- URL: http://arxiv.org/abs/2409.18248v2
- Date: Sat, 14 Jun 2025 03:08:28 GMT
- Title: Discovering New Shadow Patterns for Black-Box Attacks on Lane Detection of Autonomous Vehicles
- Authors: Pedram MohajerAnsari, Amir Salarpour, Jan de Voor, Alkim Domeke, Arkajyoti Mitra, Grace Johnson, Habeeb Olufowobi, Mohammad Hamad, Mert D. Pese,
- Abstract summary: We present a novel physical-world attack on autonomous vehicle lane detection systems. Our method is entirely passive, power-free, and inconspicuous to human observers. A user study confirms the attack's stealthiness, with 83.6% of participants failing to detect it during driving tasks.
- Score: 2.360491397983081
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel physical-world attack on autonomous vehicle (AV) lane detection systems that leverages negative shadows -- bright, lane-like patterns projected by passively redirecting sunlight through occluders. These patterns exploit intensity-based heuristics in modern lane detection (LD) algorithms, causing AVs to misclassify them as genuine lane markings. Unlike prior attacks, our method is entirely passive, power-free, and inconspicuous to human observers, enabling legal and stealthy deployment in public environments. Through simulation, physical testbed, and controlled field evaluations, we demonstrate that negative shadows can cause up to 100% off-road deviation or collision rates in specific scenarios; for example, a 20-meter shadow leads to complete off-road exits at speeds above 10 mph, while 30-meter shadows trigger consistent lane confusion and collisions. A user study confirms the attack's stealthiness, with 83.6% of participants failing to detect it during driving tasks. To mitigate this threat, we propose Luminosity Filter Pre-processing, a lightweight defense that reduces attack success by 87% through brightness normalization and selective filtering. Our findings expose a critical vulnerability in current LD systems and underscore the need for robust perception defenses against passive, real-world attacks.
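The proposed defense, Luminosity Filter Pre-processing, is described in the abstract only at a high level (brightness normalization plus selective filtering of over-bright, lane-like regions). The sketch below is an assumed OpenCV implementation of that idea, not the authors' code; the function name `luminosity_filter` and its parameter values are hypothetical.

```python
# Minimal sketch of a luminosity-filter pre-processing step (assumed design):
# (1) normalize brightness in the L channel, (2) pull pixels that are far
# brighter than the surrounding road surface back toward it, so sun-bright
# "negative shadow" stripes stop resembling painted lane markings.
import cv2
import numpy as np

def luminosity_filter(frame_bgr: np.ndarray,
                      clip_limit: float = 2.0,
                      bright_margin: float = 60.0) -> np.ndarray:
    """Return a brightness-normalized frame with over-bright regions damped.

    `clip_limit` and `bright_margin` are illustrative values, not parameters
    reported in the paper.
    """
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)

    # Brightness normalization: contrast-limited adaptive histogram
    # equalization flattens glare while keeping genuine lane paint visible.
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    l_norm = clahe.apply(l)

    # Selective filtering: pixels far brighter than the median road
    # luminosity are clamped back to it.
    road_level = np.median(l_norm)
    mask = l_norm.astype(np.float32) > road_level + bright_margin
    l_filtered = l_norm.copy()
    l_filtered[mask] = np.uint8(road_level)

    filtered_lab = cv2.merge((l_filtered, a, b))
    return cv2.cvtColor(filtered_lab, cv2.COLOR_LAB2BGR)

# Usage: apply to each camera frame before it reaches the lane detector, e.g.
# clean = luminosity_filter(cv2.imread("road_frame.png"))
```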
Related papers
- ControlLoc: Physical-World Hijacking Attack on Visual Perception in Autonomous Driving [30.286501966393388]
A digital hijacking attack has been proposed to cause dangerous driving scenarios.
We introduce a novel physical-world adversarial patch attack, ControlLoc, designed to exploit hijacking vulnerabilities in entire Autonomous Driving (AD) visual perception.
arXiv Detail & Related papers (2024-06-09T14:53:50Z)
- LanEvil: Benchmarking the Robustness of Lane Detection to Environmental Illusions [61.87108000328186]
Lane detection (LD) is an essential component of autonomous driving systems, providing fundamental functionalities like adaptive cruise control and automated lane centering.
Existing LD benchmarks primarily focus on evaluating common cases, neglecting the robustness of LD models against environmental illusions.
This paper studies the potential threats caused by these environmental illusions to LD and establishes the first comprehensive benchmark LanEvil.
arXiv Detail & Related papers (2024-06-03T02:12:27Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- Invisible Reflections: Leveraging Infrared Laser Reflections to Target Traffic Sign Perception [25.566091959509986]
Road signs indicate locally active rules, such as speed limits and requirements to yield or stop.
Recent research has demonstrated attacks, such as adding stickers or projected colored patches to signs, that cause CAV misinterpretation.
We have developed an effective physical-world attack that leverages the sensitivity of filterless image sensors.
arXiv Detail & Related papers (2024-01-07T21:22:42Z)
- TPatch: A Triggered Physical Adversarial Patch [19.768494127237393]
We propose TPatch, a physical adversarial patch triggered by acoustic signals.
To avoid the suspicion of human drivers, we propose a content-based camouflage method and an attack enhancement method to strengthen it.
arXiv Detail & Related papers (2023-12-30T06:06:01Z)
- Shadows Aren't So Dangerous After All: A Fast and Robust Defense Against Shadow-Based Adversarial Attacks [2.4254101826561842]
We propose a robust, fast, and generalizable method to defend against shadow attacks in the context of road sign recognition.
We empirically show its robustness against shadow attacks, and reformulate the problem to show its similarity to $\varepsilon$-based attacks.
arXiv Detail & Related papers (2022-08-18T00:19:01Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches [18.58673451901394]
We develop an attack against learning-based Monocular Depth Estimation (MDE).
We balance the stealth and effectiveness of our attack with object-oriented adversarial design, sensitive region localization, and natural style camouflage.
Experimental results show that our method can generate stealthy, effective, and robust adversarial patches for different target objects and models.
arXiv Detail & Related papers (2022-07-11T08:59:09Z)
- A Hybrid Defense Method against Adversarial Attacks on Traffic Sign Classifiers in Autonomous Vehicles [4.585587646404074]
Adversarial attacks can make deep neural network (DNN) models predict incorrect output labels for autonomous vehicles (AVs).
This study develops a resilient traffic sign classifier for AVs that uses a hybrid defense method.
We find that our hybrid defense method achieves 99% average traffic sign classification accuracy for the no attack scenario and 88% average traffic sign classification accuracy for all attack scenarios.
arXiv Detail & Related papers (2022-04-25T02:13:31Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z) - Evaluating the Robustness of Semantic Segmentation for Autonomous
Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z) - Model-Agnostic Defense for Lane Detection against Adversarial Attack [0.0]
Recent work on adversarial road patches has successfully induced perception of lane lines with arbitrary form.
We propose a modular lane verification system that can catch such threats before the autonomous driving system is misled.
Our experiments show that implementing the system with a simple convolutional neural network (CNN) can defend against a wide gamut of attacks on lane detection models.
arXiv Detail & Related papers (2021-03-01T00:05:50Z) - Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial
Attacks for Online Visual Object Trackers [81.90113217334424]
We propose a framework to generate a single temporally transferable adversarial perturbation from the object template image only.
This perturbation can then be added to every search image, which comes at virtually no cost, and still, successfully fool the tracker.
arXiv Detail & Related papers (2020-12-30T15:05:53Z) - Dirty Road Can Attack: Security of Deep Learning based Automated Lane
Centering under Physical-World Attack [38.3805893581568]
We study the security of state-of-the-art deep learning based ALC systems under physical-world adversarial attacks.
We formulate the problem with a safety-critical attack goal, and a novel and domain-specific attack vector: dirty road patches.
We evaluate our attack on a production ALC using 80 scenarios from real-world driving traces.
arXiv Detail & Related papers (2020-09-14T19:22:39Z) - Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z) - Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises [87.53808756910452]
A cooling-shrinking attack method is proposed to deceive state-of-the-art SiameseRPN-based trackers.
Our method has good transferability and is able to deceive other top-performance trackers such as DaSiamRPN, DaSiamRPN-UpdateNet, and DiMP.
arXiv Detail & Related papers (2020-03-21T07:13:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.