The Fluorescent Veil: A Stealthy and Effective Physical Adversarial Patch Against Traffic Sign Recognition
- URL: http://arxiv.org/abs/2409.12394v2
- Date: Thu, 16 Oct 2025 09:07:58 GMT
- Title: The Fluorescent Veil: A Stealthy and Effective Physical Adversarial Patch Against Traffic Sign Recognition
- Authors: Shuai Yuan, Xingshuo Han, Hongwei Li, Guowen Xu, Wenbo Jiang, Tao Ni, Qingchuan Zhao, Yuguang Fang
- Abstract summary: Traffic sign recognition systems have become a prominent target for physical adversarial attacks. We introduce a novel attack medium, i.e., fluorescent ink, to design a stealthy and effective physical adversarial patch, namely FIPatch. Our attack successfully bypasses five popular defenses and achieves a success rate of 96.72%.
- Score: 35.11840718733266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, traffic sign recognition (TSR) systems have become a prominent target for physical adversarial attacks. These attacks typically rely on conspicuous stickers and projections, or on invisible light and acoustic signals that can be easily blocked. In this paper, we introduce a novel attack medium, i.e., fluorescent ink, to design a stealthy and effective physical adversarial patch, namely FIPatch, to advance the state-of-the-art. Specifically, we first model the fluorescence effect in the digital domain to identify the optimal attack settings, which guide the real-world fluorescence parameters. By applying a carefully designed fluorescence perturbation to the target sign, the attacker can later trigger a fluorescent effect using invisible ultraviolet light, causing the TSR system to misclassify the sign and potentially leading to traffic accidents. We conducted a comprehensive evaluation to investigate the effectiveness of FIPatch, which shows a success rate of 98.31% in low-light conditions. Furthermore, our attack successfully bypasses five popular defenses and achieves a success rate of 96.72%.
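As a concrete illustration of the digital-domain modeling step, the sketch below blends a parameterized fluorescent glow into a sign image and grid-searches for emission settings that flip a classifier's prediction; the settings found this way would then guide the physical ink deployment. The additive-glow model, the helper names, and the parameter grid are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch of modeling a UV-triggered fluorescent glow in the
# digital domain, assuming a generic PyTorch traffic-sign classifier.
# The additive-glow model and parameter grid are assumptions for
# illustration, not FIPatch's actual optimization.
import torch

def apply_fluorescence(sign, mask, color, intensity):
    """Blend a fluorescent glow into a sign image.
    sign: (3,H,W) tensor in [0,1]; mask: (1,H,W) patch region;
    color: (3,) emission color; intensity: scalar glow strength."""
    glow = intensity * mask * color.view(3, 1, 1)
    return (sign + glow).clamp(0.0, 1.0)  # fluorescence adds light

def search_attack_settings(model, sign, mask, true_label, colors, intensities):
    """Grid-search fluorescence parameters that cause misclassification;
    the winning (color, intensity) pair guides real-world parameters."""
    model.eval()
    with torch.no_grad():
        for color in colors:
            for k in intensities:
                perturbed = apply_fluorescence(sign, mask, color, k)
                pred = model(perturbed.unsqueeze(0)).argmax(dim=1).item()
                if pred != true_label:
                    return color, k, pred  # candidate physical settings
    return None
```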
Related papers
- T2I-Based Physical-World Appearance Attack against Traffic Sign Recognition Systems in Autonomous Driving [40.067678927952336]
Traffic Sign Recognition (TSR) systems play a critical role in Autonomous Driving (AD) systems. Recent research has exposed their vulnerability to physical-world adversarial appearance attacks. We present DiffSign, a novel T2I-based appearance attack framework.
arXiv Detail & Related papers (2025-11-17T04:29:55Z) - Trapped by Their Own Light: Deployable and Stealth Retroreflective Patch Attacks on Traffic Sign Recognition Systems [16.822818577387906]
We introduce the Adversarial Retroreflective Patch (ARP), a novel attack vector that combines the high deployability of patch attacks with the stealthiness of laser projections. A user study demonstrates that ARP attacks maintain near-identical stealthiness to benign signs while achieving ≥1.9% higher stealthiness scores than previous patch attacks. We propose the DPR Shield defense, employing strategically placed polarized filters, which achieves ≥75% defense success rates for stop signs and speed limit signs.
arXiv Detail & Related papers (2025-11-13T07:48:30Z) - Discovering New Shadow Patterns for Black-Box Attacks on Lane Detection of Autonomous Vehicles [2.360491397983081]
We present a novel physical-world attack on autonomous vehicle lane detection systems. Our method is entirely passive, power-free, and inconspicuous to human observers. A user study confirms the attack's stealthiness, with 83.6% of participants failing to detect it during driving tasks.
arXiv Detail & Related papers (2024-09-26T19:43:52Z) - Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning [57.50274256088251]
We show that parameter-efficient fine-tuning (PEFT) is more susceptible to weight-poisoning backdoor attacks.
We develop a Poisoned Sample Identification Module (PSIM) leveraging PEFT, which identifies poisoned samples based on prediction confidence.
We conduct experiments on text classification tasks, five fine-tuning strategies, and three weight-poisoning backdoor attack methods.
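The confidence-based identification step might look roughly like the sketch below, which flags samples that elicit near-certain predictions; the threshold, the indexing scheme, and the classifier interface are illustrative assumptions, not the paper's PSIM.

```python
# A minimal sketch of confidence-based poisoned-sample flagging,
# loosely inspired by the PSIM idea described above; the 0.99
# threshold and the PyTorch interfaces are assumptions.
import torch
import torch.nn.functional as F

def flag_suspicious(model, loader, threshold=0.99):
    """Return global indices (assuming an unshuffled DataLoader) of
    samples whose top-class confidence exceeds the threshold;
    backdoored inputs often yield near-certain predictions."""
    model.eval()
    flagged = []
    with torch.no_grad():
        for batch_idx, (x, _) in enumerate(loader):
            probs = F.softmax(model(x), dim=1)
            conf = probs.max(dim=1).values
            base = batch_idx * loader.batch_size
            flagged.extend(base + i for i, c in enumerate(conf) if c > threshold)
    return flagged
```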
arXiv Detail & Related papers (2024-02-19T14:22:54Z) - Invisible Reflections: Leveraging Infrared Laser Reflections to Target Traffic Sign Perception [25.566091959509986]
Road signs indicate locally active rules, such as speed limits and requirements to yield or stop.
Recent research has demonstrated attacks, such as adding stickers or projected colored patches to signs, that cause CAV misinterpretation.
We have developed an effective physical-world attack that leverages the sensitivity of filterless image sensors.
arXiv Detail & Related papers (2024-01-07T21:22:42Z) - Brightness-Restricted Adversarial Attack Patch [0.0]
We introduce a brightness-restricted patch (BrPatch) that uses optical characteristics to reduce conspicuousness while preserving image independence.
Our experiments show that attack patches exhibit strong redundancy to brightness and are resistant to color transfer and noise.
arXiv Detail & Related papers (2023-07-01T20:08:55Z) - Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step-size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
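In spirit, a guided PGD step adds a second gradient signal to steer the optimization out of flat or masked loss regions, as in the sketch below; the surrogate-guidance term shown here is a simplified stand-in under assumed PyTorch interfaces, not the authors' exact G-PGA formulation.

```python
# A minimal sketch of a surrogate-guided PGD attack; `model` is the
# target classifier and `surrogate` a smoother guide model. This is
# an assumed simplification, not the exact G-PGA algorithm.
import torch
import torch.nn.functional as F

def guided_pgd(model, surrogate, x, y, eps=8/255, alpha=2/255, steps=40):
    """L-infinity PGD whose loss mixes in a surrogate's gradient to
    avoid local minima, removing the need for random restarts."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss = loss + F.cross_entropy(surrogate(x_adv), y)  # guidance term
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # valid image range
    return x_adv.detach()
```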
arXiv Detail & Related papers (2022-12-30T18:45:23Z) - Benchmarking Adversarial Patch Against Aerial Detection [11.591143898488312]
A novel adaptive-patch-based physical attack (AP-PA) framework is proposed.
AP-PA generates adversarial patches that are adaptive in both physical dynamics and varying scales.
We establish one of the first comprehensive, coherent, and rigorous benchmarks to evaluate the attack efficacy of adversarial patches on aerial detection tasks.
arXiv Detail & Related papers (2022-10-30T07:55:59Z) - Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves state-of-the-art attack success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z) - Adversarial Color Projection: A Projector-based Physical Attack to DNNs [3.9477796725601872]
We propose a black-box projector-based physical attack, referred to as adversarial color projection (AdvCP).
We achieve an attack success rate of 97.60% on a subset of ImageNet, while in the physical environment, we attain an attack success rate of 100%.
When attacking advanced DNNs, experimental results show that our method achieves an attack success rate of more than 85%.
arXiv Detail & Related papers (2022-09-19T12:27:32Z) - On Trace of PGD-Like Adversarial Attacks [77.75152218980605]
Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, lightweight, non-intrusive, and data-undemanding.
arXiv Detail & Related papers (2022-05-19T14:26:50Z) - Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z) - Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z) - SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers [82.19722134082645]
A stealthy projector-based adversarial attack is proposed in this paper.
We approximate the real project-and-capture operation using a deep neural network named PCNet.
Our experiments show that the proposed SPAA clearly outperforms other methods by achieving higher attack success rates.
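Approximating the project-and-capture path with a differentiable network enables end-to-end optimization of the projection, roughly as sketched below; the `pcnet(projection, scene)` interface and the loss weighting are assumptions for illustration, not the released SPAA implementation.

```python
# A minimal sketch of optimizing an adversarial projection through a
# differentiable project-and-capture surrogate, in the spirit of SPAA;
# the surrogate's interface and loss weights are assumptions.
import torch
import torch.nn.functional as F

def optimize_projection(pcnet, classifier, scene, target, steps=200, lr=0.01):
    """Gradient-descend a projector image so the *simulated* camera
    capture fools the classifier while staying dim (stealthy).
    scene/target: batched tensors, e.g. (1,3,H,W) and (1,)."""
    projection = torch.zeros_like(scene, requires_grad=True)
    opt = torch.optim.Adam([projection], lr=lr)
    for _ in range(steps):
        capture = pcnet(projection.clamp(0, 1), scene)  # simulated capture
        attack_loss = F.cross_entropy(classifier(capture), target)
        stealth_loss = projection.clamp(0, 1).mean()    # penalize bright light
        loss = attack_loss + 0.1 * stealth_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return projection.detach().clamp(0, 1)
```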
arXiv Detail & Related papers (2020-12-10T18:14:03Z) - Targeted Physical-World Attention Attack on Deep Learning Models in Road Sign Recognition [79.50450766097686]
This paper proposes the targeted attention attack (TAA) method for real-world road sign attacks.
Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method.
arXiv Detail & Related papers (2020-10-09T02:31:34Z) - SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations [19.14079118174123]
Short-Lived Adversarial Perturbations (SLAP) is a novel technique that allows adversaries to realize physically robust real-world adversarial examples (AEs) by using a light projector.
SLAP allows the adversary greater control over the attack compared to adversarial patches.
We study the feasibility of SLAP in the self-driving scenario, targeting both object detector and traffic sign recognition tasks.
arXiv Detail & Related papers (2020-07-08T14:11:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences.