Towards Robust Physical-world Backdoor Attacks on Lane Detection
- URL: http://arxiv.org/abs/2405.05553v3
- Date: Mon, 1 Jul 2024 10:27:42 GMT
- Title: Towards Robust Physical-world Backdoor Attacks on Lane Detection
- Authors: Xinwei Zhang, Aishan Liu, Tianyuan Zhang, Siyuan Liang, Xianglong Liu,
- Abstract summary: Deep learning-based lane detection (LD) plays a critical role in autonomous driving systems, such as adaptive cruise control.
Existing backdoor attack methods on LD exhibit limited effectiveness in dynamic real-world scenarios.
This paper introduces BadLANE, a dynamic scene adaptation backdoor attack for LD designed to withstand changes in real-world dynamic scene factors.
- Score: 31.818933392393717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based lane detection (LD) plays a critical role in autonomous driving systems, such as adaptive cruise control. However, it is vulnerable to backdoor attacks. Existing backdoor attack methods on LD exhibit limited effectiveness in dynamic real-world scenarios, primarily because they fail to consider dynamic scene factors, including changes in driving perspectives (e.g., viewpoint transformations) and environmental conditions (e.g., weather or lighting changes). To tackle this issue, this paper introduces BadLANE, a dynamic scene adaptation backdoor attack for LD designed to withstand changes in real-world dynamic scene factors. To address the challenges posed by changing driving perspectives, we propose an amorphous trigger pattern composed of shapeless pixels. This trigger design allows the backdoor to be activated by various forms or shapes of mud spots or pollution on the road or lens, enabling adaptation to changes in vehicle observation viewpoints during driving. To mitigate the effects of environmental changes, we design a meta-learning framework to train meta-generators tailored to different environmental conditions. These generators produce meta-triggers that incorporate diverse environmental information, such as weather or lighting conditions, as the initialization of the trigger patterns for backdoor implantation, thus enabling adaptation to dynamic environments. Extensive experiments on various commonly used LD models in both digital and physical domains validate the effectiveness of our attacks, outperforming other baselines significantly (+25.15% on average in Attack Success Rate). Our code will be made available upon publication.
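To make the trigger design concrete, here is a minimal illustrative sketch (in Python/NumPy) of how an amorphous, shapeless-pixel trigger of the kind described above might be stamped onto a road image. The function name, pixel budget, colour, and cluster statistics are assumptions chosen for illustration; the authors' implementation has not been released.

```python
import numpy as np

def apply_amorphous_trigger(image, num_pixels=400, n_clusters=8,
                            color=(128, 96, 64), band=(0.5, 1.0), rng=None):
    """Stamp an amorphous (shapeless-pixel) trigger onto a road image.

    Only loose statistics are fixed -- a pixel budget, a rough colour,
    and a vertical band of the image (the road surface) -- so every
    poisoned sample carries a differently shaped pattern, mimicking
    mud spots or lens pollution seen from varying viewpoints.
    """
    rng = rng if rng is not None else np.random.default_rng()
    h, w, _ = image.shape
    y0, y1 = int(band[0] * h), int(band[1] * h)
    poisoned = image.copy()
    per_cluster = num_pixels // n_clusters
    # Scatter irregular pixel clusters instead of a fixed-shape patch.
    for _ in range(n_clusters):
        cy = rng.integers(y0, y1)
        cx = rng.integers(0, w)
        ys = np.clip(cy + rng.normal(0, 5, per_cluster).astype(int), 0, h - 1)
        xs = np.clip(cx + rng.normal(0, 5, per_cluster).astype(int), 0, w - 1)
        poisoned[ys, xs] = color
    return poisoned
```

During poisoning, each stamped image would be paired with attacker-chosen lane annotations; at test time, because only coarse pixel statistics define the trigger, real mud spots of varying shape can activate the backdoor. In the full BadLANE pipeline, the initial trigger pattern would additionally come from an environment-conditioned meta-generator (trained via meta-learning over weather and lighting conditions) rather than a fixed colour, as the abstract describes.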
Related papers
- Transient Adversarial 3D Projection Attacks on Object Detection in Autonomous Driving [15.516055760190884]
We introduce an adversarial 3D projection attack specifically targeting object detection in autonomous driving scenarios.
Our results demonstrate the effectiveness of the proposed attack in deceiving YOLOv3 and Mask R-CNN in physical settings.
arXiv Detail & Related papers (2024-09-25T22:27:11Z)
- LanEvil: Benchmarking the Robustness of Lane Detection to Environmental Illusions [61.87108000328186]
Lane detection (LD) is an essential component of autonomous driving systems, providing fundamental functionalities like adaptive cruise control and automated lane centering.
Existing LD benchmarks primarily focus on evaluating common cases, neglecting the robustness of LD models against environmental illusions.
This paper studies the potential threats caused by these environmental illusions to LD and establishes the first comprehensive benchmark LanEvil.
arXiv Detail & Related papers (2024-06-03T02:12:27Z)
- HAZARD Challenge: Embodied Decision Making in Dynamically Changing Environments [93.94020724735199]
HAZARD consists of three unexpected disaster scenarios: fire, flood, and wind.
This benchmark enables us to evaluate autonomous agents' decision-making capabilities across various pipelines.
arXiv Detail & Related papers (2024-01-23T18:59:43Z)
- Embodied Laser Attack: Leveraging Scene Priors to Achieve Agent-based Robust Non-contact Attacks [13.726534285661717]
This paper introduces the Embodied Laser Attack (ELA), a novel framework that dynamically tailors non-contact laser attacks.
For the perception module, ELA develops a local perspective transformation network based on intrinsic prior knowledge of traffic scenes.
For the decision and control module, ELA trains an attack agent with data-driven reinforcement learning instead of adopting time-consuming algorithms.
arXiv Detail & Related papers (2023-12-15T06:16:17Z)
- Dynamic Adversarial Attacks on Autonomous Driving Systems [16.657485186920102]
This paper introduces an attacking mechanism to challenge the resilience of autonomous driving systems.
We manipulate the decision-making processes of an autonomous vehicle by dynamically displaying adversarial patches on a screen mounted on another moving vehicle.
Our experiments demonstrate the first successful implementation of such dynamic adversarial attacks in real-world autonomous driving scenarios.
arXiv Detail & Related papers (2023-12-10T04:14:56Z)
- DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the importance of safety in driving systems, no solution to adapting MOT to domain shift under test-time conditions had previously been proposed.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z)
- Why Don't You Clean Your Glasses? Perception Attacks with Dynamic Optical Perturbations [17.761200546223442]
Adapting adversarial attacks to the physical world is desirable for the attacker, as this removes the need to compromise digital systems.
We present EvilEye, a man-in-the-middle perception attack that leverages transparent displays to generate dynamic physical adversarial examples.
arXiv Detail & Related papers (2023-07-24T21:16:38Z)
- OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation [50.59541802645156]
Operational Space Control (OSC) has been used as an effective task-space controller for manipulation.
We propose OSC for Adaptation and Robustness (OSCAR), a data-driven variant of OSC that compensates for modeling errors.
We evaluate our method on a variety of simulated manipulation problems, and find substantial improvements over an array of controller baselines.
arXiv Detail & Related papers (2021-10-02T01:21:38Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.