T2I-Based Physical-World Appearance Attack against Traffic Sign Recognition Systems in Autonomous Driving
- URL: http://arxiv.org/abs/2511.12956v1
- Date: Mon, 17 Nov 2025 04:29:55 GMT
- Title: T2I-Based Physical-World Appearance Attack against Traffic Sign Recognition Systems in Autonomous Driving
- Authors: Chen Ma, Ningfei Wang, Junhao Zheng, Qing Guo, Qian Wang, Qi Alfred Chen, Chao Shen
- Abstract summary: Traffic Sign Recognition (TSR) systems play a critical role in Autonomous Driving (AD) systems. Recent research has exposed their vulnerability to physical-world adversarial appearance attacks. We present DiffSign, a novel T2I-based appearance attack framework.
- Score: 40.067678927952336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traffic Sign Recognition (TSR) systems play a critical role in Autonomous Driving (AD) systems, enabling real-time detection of road signs, such as STOP and speed limit signs. While these systems are increasingly integrated into commercial vehicles, recent research has exposed their vulnerability to physical-world adversarial appearance attacks. In such attacks, carefully crafted visual patterns are misinterpreted by TSR models as legitimate traffic signs, while remaining inconspicuous or benign to human observers. However, existing adversarial appearance attacks suffer from notable limitations. Pixel-level perturbation-based methods often lack stealthiness and tend to overfit to specific surrogate models, resulting in poor transferability to real-world TSR systems. On the other hand, text-to-image (T2I) diffusion model-based approaches demonstrate limited effectiveness and poor generalization to out-of-distribution sign types. In this paper, we present DiffSign, a novel T2I-based appearance attack framework designed to generate physically robust, highly effective, transferable, practical, and stealthy appearance attacks against TSR systems. To overcome the limitations of prior approaches, we propose a carefully designed attack pipeline that integrates a CLIP-based loss and masked prompts to improve attack focus and controllability. We also propose two novel style customization methods to guide visual appearance and improve out-of-domain traffic sign attack generalization and attack stealthiness. We conduct extensive evaluations of DiffSign under varied real-world conditions, including different distances, angles, lighting conditions, and sign categories. Our method achieves an average physical-world attack success rate of 83.3%, reflecting DiffSign's strong attack transferability.
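The pipeline's named ingredients, a CLIP-based loss and masked prompts, suggest a straightforward optimization signal. A minimal sketch of such a CLIP similarity loss follows; the checkpoint, prompt, and sign convention are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of a CLIP-based attack loss of the kind the pipeline describes.
# The checkpoint and prompt are illustrative; they are not the paper's values.
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_attack_loss(image: torch.Tensor, target_prompt: str) -> torch.Tensor:
    """Negative CLIP similarity between a rendered sign and a target prompt.

    `image` is a preprocessed pixel tensor of shape (1, 3, 224, 224).
    Minimizing this loss pulls the generated appearance toward the target
    class description (e.g., "a photo of a stop sign").
    """
    text = processor(text=[target_prompt], return_tensors="pt", padding=True)
    image_embeds = model.get_image_features(pixel_values=image)
    text_embeds = model.get_text_features(**text)
    return -F.cosine_similarity(image_embeds, text_embeds).mean()
```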
Related papers
- The Outline of Deception: Physical Adversarial Attacks on Traffic Signs Using Edge Patches [6.836569632189732]
This study proposes TESP-Attack, a novel stealth-aware adversarial patch method for traffic sign classification. Based on the observation that human visual attention primarily focuses on the central regions of traffic signs, we employ instance segmentation to generate edge-aligned masks. A U-Net generator is utilized to craft adversarial patches, which are then optimized through color and texture constraints.
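As a rough illustration of the edge-aligned masking step, the sketch below derives a border band from a boolean segmentation mask and confines the patch to it. The band width and the hard compositing are assumptions; the paper's actual patches come from a U-Net generator optimized under color and texture constraints.

```python
# Hedged sketch: restrict an adversarial patch to the outer band of a sign,
# where (per the paper's observation) human attention is lowest.
import numpy as np
from scipy.ndimage import binary_erosion

def edge_band_mask(sign_mask: np.ndarray, band_px: int = 12) -> np.ndarray:
    """Boolean mask covering only the outer `band_px`-pixel band of the sign."""
    interior = binary_erosion(sign_mask, iterations=band_px)
    return sign_mask & ~interior

def composite_patch(image: np.ndarray, patch: np.ndarray, sign_mask: np.ndarray) -> np.ndarray:
    """Overlay the patch on the image only inside the edge band."""
    band = edge_band_mask(sign_mask)[..., None]  # broadcast over RGB channels
    return np.where(band, patch, image)
```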
arXiv Detail & Related papers (2025-11-30T07:26:07Z)
- LoRA as a Flexible Framework for Securing Large Vision Systems [1.9035583634286277]
Adversarial attacks have emerged as a critical threat to autonomous driving systems. We draw insights from parameter-efficient fine-tuning and use low-rank adaptation (LoRA) to train a lightweight security patch. We demonstrate that our framework can patch a pre-trained model to improve classification accuracy by up to 78.01% in the presence of adversarial examples.
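LoRA itself is a standard construction; a minimal sketch of the idea, wrapping a frozen linear layer with a trainable low-rank residual that can then be fine-tuned on adversarial examples, is shown below. The rank, scaling, and initialization follow common LoRA defaults rather than the paper's configuration.

```python
# Minimal LoRA "security patch" sketch: the base weights stay frozen, and only
# the low-rank residual (A, B) is trained on adversarial examples.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pre-trained weights intact
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # residual starts at zero
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T
```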
arXiv Detail & Related papers (2025-05-31T18:16:21Z)
- Explainable Machine Learning for Cyberattack Identification from Traffic Flows [5.834276858232939]
We simulate cyberattacks in a semi-realistic environment, using a traffic network to analyze disruption patterns. We develop a deep learning-based anomaly detection system, demonstrating that Longest Stop Duration and Total Jam Distance are key indicators of compromised signals. This work enhances AI-driven traffic security, improving both detection accuracy and trustworthiness in smart transportation systems.
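The paper's detector is deep-learning-based; purely as a compact stand-in, the sketch below flags compromised signals from the two indicators the summary highlights, using an isolation forest over made-up benign values.

```python
# Hedged sketch: anomaly detection over the two highlighted indicators.
# The model choice (IsolationForest) and the training values are assumptions
# for illustration; the paper itself uses a deep learning-based detector.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical benign flows: [longest stop duration (s), total jam distance (m)]
X_benign = np.array([[35.0, 120.0], [40.0, 150.0], [30.0, 100.0], [38.0, 135.0]])
detector = IsolationForest(contamination=0.1, random_state=0).fit(X_benign)

def is_compromised(longest_stop_s: float, jam_distance_m: float) -> bool:
    """IsolationForest returns -1 for outliers, i.e., suspected compromise."""
    return detector.predict([[longest_stop_s, jam_distance_m]])[0] == -1
```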
arXiv Detail & Related papers (2025-05-02T17:34:14Z)
- Black-Box Adversarial Attack on Vision Language Models for Autonomous Driving [65.61999354218628]
We take the first step toward designing black-box adversarial attacks specifically targeting vision-language models (VLMs) in autonomous driving systems. We propose Cascading Adversarial Disruption (CAD), which targets low-level reasoning breakdown by generating and injecting deceptive semantics. We present Risky Scene Induction, which addresses dynamic adaptation by leveraging a surrogate VLM to understand and construct high-level risky scenarios.
arXiv Detail & Related papers (2025-01-23T11:10:02Z)
- RED: Robust Environmental Design [0.0]
We propose an attacker-agnostic learning scheme to automatically design road signs that are robust to a wide array of patch-based attacks.
Empirical tests conducted in both digital and physical environments demonstrate that our approach significantly reduces vulnerability to patch attacks, outperforming existing techniques.
arXiv Detail & Related papers (2024-11-26T01:38:51Z)
- Secure Traffic Sign Recognition: An Attention-Enabled Universal Image Inpainting Mechanism against Light Patch Attacks [15.915892134535842]
Researchers recently identified a new attack vector to deceive sign recognition systems: projecting well-designed adversarial light patches onto traffic signs.
To effectively counter this security threat, we propose SafeSign, a universal image inpainting mechanism.
It relies on attention-enabled multi-view image fusion to repair traffic signs contaminated by adversarial light patches.
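A minimal sketch of the attention-enabled fusion idea: per-pixel softmax weights across pre-aligned views decide which view to trust at each location when repairing the sign. The single-convolution scorer is a placeholder assumption; SafeSign's actual inpainting network is more involved.

```python
# Hedged sketch of attention-weighted multi-view fusion. Architecture sizes
# are illustrative placeholders, not SafeSign's design.
import torch
import torch.nn as nn

class MultiViewFusion(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (num_views, C, H, W), pre-aligned to a common sign frame
        logits = self.score(views)              # (num_views, 1, H, W)
        weights = torch.softmax(logits, dim=0)  # per-pixel attention across views
        return (weights * views).sum(dim=0)     # fused (C, H, W) repaired sign
```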
arXiv Detail & Related papers (2024-09-06T08:58:21Z)
- Redesigning Traffic Signs to Mitigate Machine-Learning Patch Attacks [4.575921073944177]
Traffic-Sign Recognition (TSR) is a critical safety component for autonomous driving. This work offers a novel approach that redefines traffic-sign designs to create signs that promote robustness while remaining interpretable to humans.
arXiv Detail & Related papers (2024-02-07T08:49:33Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves state-of-the-art attack success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers [81.90113217334424]
We propose a framework to generate a single temporally transferable adversarial perturbation from the object template image only.
This perturbation can then be added to every search image at virtually no cost and still successfully fools the tracker.
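The near-zero per-frame cost is easy to see in code: once the perturbation has been optimized from the template, applying it is one addition per search image. A sketch under assumed [0, 1] image normalization and an illustrative epsilon budget:

```python
# Hedged sketch of the deployment step: a single fixed perturbation is added
# to every incoming search frame. Epsilon and value range are assumptions.
import torch

def perturb_stream(frames, delta: torch.Tensor, eps: float = 8 / 255):
    """Yield adversarially perturbed search images, one addition per frame."""
    delta = delta.clamp(-eps, eps)  # keep the perturbation within budget
    for frame in frames:
        yield (frame + delta).clamp(0.0, 1.0)
```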
arXiv Detail & Related papers (2020-12-30T15:05:53Z)
- Targeted Physical-World Attention Attack on Deep Learning Models in Road Sign Recognition [79.50450766097686]
This paper proposes the targeted attention attack (TAA) method for real-world road sign attacks.
Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method.
arXiv Detail & Related papers (2020-10-09T02:31:34Z)