The Outline of Deception: Physical Adversarial Attacks on Traffic Signs Using Edge Patches
- URL: http://arxiv.org/abs/2512.00765v2
- Date: Wed, 03 Dec 2025 02:55:46 GMT
- Title: The Outline of Deception: Physical Adversarial Attacks on Traffic Signs Using Edge Patches
- Authors: Haojie Ji, Te Hu, Haowen Li, Long Jin, Chongshi Xin, Yuchi Yao, Jiarui Xiao
- Abstract summary: This study proposes TESP-Attack, a novel stealth-aware adversarial patch method for traffic sign classification. Based on the observation that human visual attention primarily focuses on the central regions of traffic signs, we employ instance segmentation to generate edge-aligned masks. A U-Net generator is utilized to craft adversarial patches, which are then optimized through color and texture constraints.
- Score: 6.836569632189732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intelligent driving systems are vulnerable to physical adversarial attacks on traffic signs. These attacks can cause misclassification, leading to erroneous driving decisions that compromise road safety. Moreover, within V2X networks, such misinterpretations can propagate, inducing cascading failures that disrupt overall traffic flow and system stability. However, a key limitation of current physical attacks is their lack of stealth. Most methods apply perturbations to central regions of the sign, resulting in visually salient patterns that are easily detectable by human observers, thereby limiting their real-world practicality. This study proposes TESP-Attack, a novel stealth-aware adversarial patch method for traffic sign classification. Based on the observation that human visual attention primarily focuses on the central regions of traffic signs, we employ instance segmentation to generate edge-aligned masks that conform to the shape characteristics of the signs. A U-Net generator is utilized to craft adversarial patches, which are then optimized through color and texture constraints along with frequency domain analysis to achieve seamless integration with the background environment, resulting in highly effective visual concealment. The proposed method demonstrates outstanding attack success rates across traffic sign classification models with varied architectures, achieving over 90% under limited query budgets. It also exhibits strong cross-model transferability and maintains robust real-world performance under varying angles and distances.
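The abstract describes the pipeline only at a high level. Below is a minimal, illustrative sketch of the edge-aligned masking idea in PyTorch. Everything here is an assumption for illustration: the `edge_band_mask` helper, the simple L1 color term standing in for the paper's color/texture and frequency-domain constraints, and the white-box gradient loss (the paper reports a query-limited setting and derives its mask from instance segmentation rather than a precomputed binary mask).

```python
# Illustrative sketch only -- not the authors' implementation.
import torch
import torch.nn.functional as F

def edge_band_mask(sign_mask: torch.Tensor, band_px: int = 6) -> torch.Tensor:
    """Turn a binary sign mask (N, 1, H, W) into an edge-aligned band:
    the mask minus its erosion, so the patch hugs the sign's outline."""
    k = 2 * band_px + 1
    # Erosion via min-pooling: a pixel survives only if its whole
    # (k x k) neighborhood lies inside the sign.
    eroded = -F.max_pool2d(-sign_mask, kernel_size=k, stride=1, padding=band_px)
    return (sign_mask - eroded).clamp(0.0, 1.0)

def attack_step(generator, classifier, image, sign_mask, label, lam=0.05):
    """One optimization step: composite the generated patch into the edge
    band, push the classifier off the true label, and keep the patch
    close to the underlying sign colors (hypothetical loss weighting)."""
    band = edge_band_mask(sign_mask)
    patch = generator(image)                    # U-Net-style image-to-image generator
    adv = image * (1 - band) + patch * band     # perturb only the outline band
    logits = classifier(adv)
    attack_loss = -F.cross_entropy(logits, label)       # untargeted: suppress true class
    color_loss = F.l1_loss(patch * band, image * band)  # stand-in for color/texture constraints
    return attack_loss + lam * color_loss, adv
```

Training would iterate `attack_step` and update only the generator's parameters, which is what would make the patch reusable across views of the same sign.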
Related papers
- T2I-Based Physical-World Appearance Attack against Traffic Sign Recognition Systems in Autonomous Driving [40.067678927952336]
Traffic Sign Recognition (TSR) systems play a critical role in Autonomous Driving (AD) systems. Recent research has exposed their vulnerability to physical-world adversarial appearance attacks. We present DiffSign, a novel T2I-based appearance attack framework.
arXiv Detail & Related papers (2025-11-17T04:29:55Z)
- Explainable Machine Learning for Cyberattack Identification from Traffic Flows [5.834276858232939]
We simulate cyberattacks in a semi-realistic environment, using a traffic network to analyze disruption patterns. We develop a deep learning-based anomaly detection system, demonstrating that Longest Stop Duration and Total Jam Distance are key indicators of compromised signals. This work enhances AI-driven traffic security, improving both detection accuracy and trustworthiness in smart transportation systems.
arXiv Detail & Related papers (2025-05-02T17:34:14Z)
- Black-Box Adversarial Attack on Vision Language Models for Autonomous Driving [65.61999354218628]
We take the first step toward designing black-box adversarial attacks specifically targeting vision-language models (VLMs) in autonomous driving systems. We propose Cascading Adversarial Disruption (CAD), which targets low-level reasoning breakdown by generating and injecting semantics. We present Risky Scene Induction, which addresses dynamic adaptation by leveraging a surrogate VLM to understand and construct high-level risky scenarios.
arXiv Detail & Related papers (2025-01-23T11:10:02Z)
- RED: Robust Environmental Design [0.0]
We propose an attacker-agnostic learning scheme to automatically design road signs that are robust to a wide array of patch-based attacks.
Empirical tests conducted in both digital and physical environments demonstrate that our approach significantly reduces vulnerability to patch attacks, outperforming existing techniques.
arXiv Detail & Related papers (2024-11-26T01:38:51Z)
- Secure Traffic Sign Recognition: An Attention-Enabled Universal Image Inpainting Mechanism against Light Patch Attacks [15.915892134535842]
Researchers recently identified a new attack vector to deceive sign recognition systems: projecting well-designed adversarial light patches onto traffic signs.
To effectively counter this security threat, we propose a universal image inpainting mechanism, namely, SafeSign.
It relies on attention-enabled multi-view image fusion to repair traffic signs contaminated by adversarial light patches.
arXiv Detail & Related papers (2024-09-06T08:58:21Z)
- SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z)
- Explainable and Trustworthy Traffic Sign Detection for Safe Autonomous Driving: An Inductive Logic Programming Approach [0.0]
We propose an ILP-based approach for stop sign detection in Autonomous Vehicles.
It is more robust against adversarial attacks, as it mimics human-like perception.
It is able to correctly identify all targeted stop signs, even in the presence of PR2 and ADvCam attacks.
arXiv Detail & Related papers (2023-08-30T09:05:52Z)
- Certified Interpretability Robustness for Class Activation Mapping [77.58769591550225]
We present CORGI, short for Certifiably prOvable Robustness Guarantees for Interpretability mapping.
CORGI is an algorithm that takes in an input image and gives a certifiable lower bound for the robustness of its CAM interpretability map.
We show the effectiveness of CORGI via a case study on traffic sign data, certifying lower bounds on the minimum adversarial perturbation.
arXiv Detail & Related papers (2023-01-26T18:58:11Z)
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat for the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve the capabilities of attackers in inducing a misclassification of pixels.
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers [81.90113217334424]
We propose a framework to generate a single temporally transferable adversarial perturbation from the object template image only.
This perturbation can then be added to every search image at virtually no cost and still successfully fool the tracker (a minimal sketch of this per-frame reuse appears after the list).
arXiv Detail & Related papers (2020-12-30T15:05:53Z)
- Targeted Physical-World Attention Attack on Deep Learning Models in Road Sign Recognition [79.50450766097686]
This paper proposes the targeted attention attack (TAA) method for real-world road sign attacks.
Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method.
arXiv Detail & Related papers (2020-10-09T02:31:34Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
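The one-shot scheme in the Temporally-Transferable Perturbations entry above amortizes the attack: the perturbation is optimized once from the object template, so applying it to the video stream costs a single addition per frame. A minimal sketch of that per-frame reuse follows; the generation of `delta` itself (the paper's actual contribution) is not reproduced, and the function name and the L-infinity budget `eps` are illustrative assumptions.

```python
# Illustrative sketch only -- the generation of `delta` is not shown.
import torch

def perturb_stream(search_frames, delta: torch.Tensor, eps: float = 8 / 255):
    """Reuse one fixed perturbation, precomputed from the object template,
    on every incoming search frame: per-frame cost is one clamped add."""
    d = delta.clamp(-eps, eps)            # keep the perturbation visually small
    for frame in search_frames:           # frames in [0, 1], shape (C, H, W)
        yield (frame + d).clamp(0.0, 1.0) # adversarial frame fed to the tracker
```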