Secure Traffic Sign Recognition: An Attention-Enabled Universal Image Inpainting Mechanism against Light Patch Attacks
- URL: http://arxiv.org/abs/2409.04133v1
- Date: Fri, 6 Sep 2024 08:58:21 GMT
- Title: Secure Traffic Sign Recognition: An Attention-Enabled Universal Image Inpainting Mechanism against Light Patch Attacks
- Authors: Hangcheng Cao, Longzhi Yuan, Guowen Xu, Ziyang He, Zhengru Fang, Yuguang Fang
- Abstract summary: Researchers recently identified a new attack vector to deceive sign recognition systems: projecting well-designed adversarial light patches onto traffic signs.
To effectively counter this security threat, we propose a universal image inpainting mechanism, namely, SafeSign.
It relies on attention-enabled multi-view image fusion to repair traffic signs contaminated by adversarial light patches.
- Score: 15.915892134535842
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traffic sign recognition systems play a crucial role in assisting drivers to make informed decisions while driving. However, due to the heavy reliance on deep learning technologies, particularly for future connected and autonomous driving, these systems are susceptible to adversarial attacks that pose significant safety risks to both personal and public transportation. Notably, researchers recently identified a new attack vector to deceive sign recognition systems: projecting well-designed adversarial light patches onto traffic signs. Compared with traditional adversarial stickers or graffiti, these emerging light patches are more aggressive because they are easy to deploy and highly stealthy. To effectively counter this security threat, we propose a universal image inpainting mechanism, namely, SafeSign. It relies on attention-enabled multi-view image fusion to repair traffic signs contaminated by adversarial light patches, thereby ensuring accurate sign recognition. We first explore the fundamental impact of malicious light patches on the local and global feature spaces of authentic traffic signs. Then, we design a binary mask-based U-Net image generation pipeline that outputs diverse contaminated sign patterns, providing the training data our image inpainting model needs. Following this, we develop an attention-enabled neural network that jointly exploits the complementary information from multi-view images to repair contaminated signs. Finally, extensive experiments are conducted to evaluate SafeSign's effectiveness in resisting potential light patch-based attacks, bringing an average accuracy improvement of 54.8% across three widely-used sign recognition models.
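The abstract describes two concrete components: synthesizing contaminated training signs by blending adversarial light patches into clean signs through a binary mask, and fusing features from multiple camera views with attention before inpainting. Below is a minimal PyTorch sketch of both ideas; all module names, tensor shapes, and the additive-blending model of light projection are illustrative assumptions, not taken from the SafeSign implementation.

```python
import torch
import torch.nn as nn

def contaminate(sign: torch.Tensor, patch: torch.Tensor,
                mask: torch.Tensor, alpha: float = 0.6) -> torch.Tensor:
    """Blend an adversarial light patch into a clean sign image.

    sign, patch: (B, 3, H, W) images in [0, 1]; mask: (B, 1, H, W) binary.
    Projected light brightens rather than replaces pixels, so the patch is
    added inside the mask and the result clamped to the valid range.
    """
    lit = torch.clamp(sign + alpha * patch, 0.0, 1.0)
    return mask * lit + (1.0 - mask) * sign

class MultiViewAttentionFusion(nn.Module):
    """Cross-view attention: each spatial location of a reference view
    attends over the same location in all views, so clean pixels from
    auxiliary views can compensate for contaminated ones."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (B, V, C, H, W) feature maps from V camera views.
        b, v, c, h, w = views.shape
        # Treat every pixel position as a batch item and the V views as tokens.
        tokens = views.permute(0, 3, 4, 1, 2).reshape(b * h * w, v, c)
        ref = tokens[:, :1, :]                     # reference view as the query
        fused, _ = self.attn(ref, tokens, tokens)  # attend over all views
        fused = self.norm(fused + ref).reshape(b, h, w, c)
        return fused.permute(0, 3, 1, 2)           # back to (B, C, H, W)

if __name__ == "__main__":
    sign = torch.rand(2, 3, 64, 64)
    patch = torch.rand(2, 3, 64, 64)
    mask = (torch.rand(2, 1, 64, 64) > 0.8).float()
    dirty = contaminate(sign, patch, mask)         # synthetic training input
    feats = torch.rand(2, 3, 32, 16, 16)           # B=2 signs, V=3 views, C=32
    fused = MultiViewAttentionFusion(32)(feats)
    print(dirty.shape, fused.shape)                # (2,3,64,64) (2,32,16,16)
```

An inpainting U-Net would then take the fused features and reconstruct the clean sign; that stage, and the paper's actual architecture details, are omitted here.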
Related papers
- Time Traveling to Defend Against Adversarial Example Attacks in Image Classification [10.353892677735212]
Adversarial example attacks have emerged as a critical threat to machine learning.
Adversarial attacks in image classification apply various minor modifications to an image that confuse the classification neural network.
This work introduces the notion of "time traveling" and uses historical Street View images, accessible to anyone, to perform inference on different, past versions of the same traffic sign (see the voting sketch after this list).
arXiv Detail & Related papers (2024-10-10T19:56:28Z)
- A Hybrid Quantum-Classical AI-Based Detection Strategy for Generative Adversarial Network-Based Deepfake Attacks on an Autonomous Vehicle Traffic Sign Classification System [2.962613983209398]
The authors present how a generative adversarial network-based deepfake attack can be crafted to fool AV traffic sign classification systems.
They develop a deepfake traffic sign image detection strategy leveraging hybrid quantum-classical neural networks (NNs).
The results indicate that the hybrid quantum-classical NNs for deepfake detection could achieve similar or higher performance than the baseline classical convolutional NNs in most cases.
arXiv Detail & Related papers (2024-09-25T19:44:56Z)
- TPatch: A Triggered Physical Adversarial Patch [19.768494127237393]
We propose TPatch, a physical adversarial patch triggered by acoustic signals.
To avoid the suspicion of human drivers, we propose a content-based camouflage method and an attack enhancement method to strengthen it.
arXiv Detail & Related papers (2023-12-30T06:06:01Z)
- Adversarial Prompt Tuning for Vision-Language Models [86.5543597406173]
Adversarial Prompt Tuning (AdvPT) is a technique to enhance the adversarial robustness of image encoders in Vision-Language Models (VLMs).
We demonstrate that AdvPT improves resistance against white-box and black-box adversarial attacks and exhibits a synergistic effect when combined with existing image-processing-based defense techniques.
arXiv Detail & Related papers (2023-11-19T07:47:43Z)
- Adversarial Attacks on Traffic Sign Recognition: A Survey [2.658812114255374]
Traffic signs are promising for adversarial attack research due to the ease of performing real-world attacks using printed signs or stickers.
We provide an overview of the latest advancements and highlight the existing research areas that require further investigation.
arXiv Detail & Related papers (2023-07-17T06:58:22Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work studies the current landscape of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of deep learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Efficient Federated Learning with Spike Neural Networks for Traffic Sign Recognition [70.306089187104]
We introduce powerful Spike Neural Networks (SNNs) into traffic sign recognition for energy-efficient and fast model training.
Numerical results indicate that the proposed federated SNN outperforms traditional federated convolutional neural networks in terms of accuracy, noise immunity, and energy efficiency.
arXiv Detail & Related papers (2022-05-28T03:11:48Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self-Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
- Road images augmentation with synthetic traffic signs using neural networks [3.330229314824913]
We consider the task of rare traffic sign detection and classification.
We aim to solve that problem by using synthetic training data.
We propose three methods for making synthetic signs consistent with a scene in appearance.
arXiv Detail & Related papers (2021-01-13T08:10:33Z)
- Targeted Physical-World Attention Attack on Deep Learning Models in Road Sign Recognition [79.50450766097686]
This paper proposes the targeted attention attack (TAA) method for real-world road sign attacks.
Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method.
arXiv Detail & Related papers (2020-10-09T02:31:34Z)
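Referenced from the "Time Traveling" entry above, here is a hypothetical sketch of that defense's core idea as summarized: classify historical Street View images of the same sign alongside the current frame and take a majority vote, so a light patch projected today cannot flip every vote. The `classifier` callable and image arguments are placeholders, not the paper's actual interface.

```python
from collections import Counter
from typing import Callable, Sequence

def time_travel_predict(classifier: Callable[[object], str],
                        current_image: object,
                        historical_images: Sequence[object]) -> str:
    """Majority vote over the current sign image and its past versions."""
    votes = [classifier(img) for img in [current_image, *historical_images]]
    label, _count = Counter(votes).most_common(1)[0]
    return label
```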
This list is automatically generated from the titles and abstracts of the papers on this site.