Fall Leaf Adversarial Attack on Traffic Sign Classification
- URL: http://arxiv.org/abs/2411.18776v1
- Date: Wed, 27 Nov 2024 22:02:38 GMT
- Title: Fall Leaf Adversarial Attack on Traffic Sign Classification
- Authors: Anthony Etim, Jakub Szefer
- Abstract summary: Adversarial input image perturbation attacks have emerged as a significant threat to machine learning algorithms.
This work presents a new class of adversarial attacks that leverage nature-made artifacts.
A fall leaf stuck to a street sign could come from a nearby tree, rather than be placed there by a malicious human attacker.
- Score: 10.353892677735212
- License:
- Abstract: Adversarial input image perturbation attacks have emerged as a significant threat to machine learning algorithms, particularly in image classification settings. These attacks involve subtle perturbations to input images that cause neural networks to misclassify them, even though the images remain easily recognizable to humans. One critical area where adversarial attacks have been demonstrated is automotive systems, where traffic sign classification and recognition is critical and misclassified images can cause autonomous systems to take wrong actions. This work presents a new class of adversarial attacks. Unlike existing work that has focused on adversarial perturbations created with human-made artifacts, such as stickers, paint, or flashlights shone at traffic signs, this work leverages nature-made artifacts: tree leaves. By leveraging nature-made artifacts, the new class of attacks gains plausible deniability: a fall leaf stuck to a street sign could have come from a nearby tree rather than having been placed there by a malicious human attacker. To evaluate this new class of adversarial input image perturbation attacks, this work analyzes how fall leaves can cause misclassification of street signs. It evaluates leaves from different tree species and considers parameters such as size, color due to leaf type, and rotation, demonstrating a high success rate for misclassification. The work also explores the correlation between successful attacks and their effect on edge detection, which is critical in many image classification algorithms.
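As a rough, illustrative sketch of the kind of evaluation the abstract describes, the snippet below composites a leaf cutout onto a traffic-sign image at a chosen scale, rotation, and position, and then measures how much the Canny edge map changes. The file names, parameter values, and the use of PIL and OpenCV are assumptions made for this example; this is not the authors' actual pipeline.

```python
# Hypothetical sketch: overlay a leaf cutout on a sign image, then compare edge maps.
# "sign.png" and "leaf.png" are placeholder files; parameter values are illustrative.
import cv2
import numpy as np
from PIL import Image

def overlay_leaf(sign_path, leaf_path, scale=0.3, angle=45, position=(60, 40)):
    """Paste a resized, rotated leaf cutout (with alpha channel) onto the sign."""
    sign = Image.open(sign_path).convert("RGBA")
    leaf = Image.open(leaf_path).convert("RGBA")
    w, h = leaf.size
    leaf = leaf.resize((int(w * scale), int(h * scale))).rotate(angle, expand=True)
    perturbed = sign.copy()
    perturbed.paste(leaf, position, mask=leaf)  # the leaf's alpha channel acts as the mask
    return sign.convert("RGB"), perturbed.convert("RGB")

def edge_change(clean, perturbed):
    """Fraction of pixels whose Canny edge response differs after the overlay."""
    def edges(img):
        gray = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2GRAY)
        return cv2.Canny(gray, 100, 200)
    return (edges(clean) != edges(perturbed)).mean()

clean, perturbed = overlay_leaf("sign.png", "leaf.png", scale=0.3, angle=45)
print("fraction of edge pixels changed:", edge_change(clean, perturbed))
# The perturbed image would then be fed to a traffic-sign classifier to check
# whether its predicted label flips relative to the clean image.
```

Sweeping such an overlay over leaf species (color), scale, rotation, and placement is one way to reproduce the style of parameter study the abstract mentions; the authors' actual methodology may differ.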
Related papers
- Time Traveling to Defend Against Adversarial Example Attacks in Image Classification [10.353892677735212]
Adversarial example attacks have emerged as a critical threat to machine learning.
Adversarial attacks in image classification abuse various minor modifications to the image that confuse the image classification neural network.
This work introduces the notion of "time traveling" and uses historical Street View images, accessible to anybody, to perform inference on past versions of the same traffic sign.
arXiv Detail & Related papers (2024-10-10T19:56:28Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks, enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method outputs visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z)
- Adversarial Attacks on Image Classification Models: FGSM and Patch Attacks and their Impact [0.0]
This chapter introduces the concept of adversarial attacks on image classification models built on convolutional neural networks (CNNs).
CNNs are very popular deep-learning models used in image classification tasks.
Two well-known adversarial attacks are discussed and their impact on the performance of image classifiers is analyzed (a minimal FGSM sketch appears after this list).
arXiv Detail & Related papers (2023-07-05T06:40:08Z)
- Adversarial Ink: Componentwise Backward Error Attacks on Deep Learning [0.0]
Deep neural networks are capable of state-of-the-art performance in many classification tasks.
Deep neural networks are known to be vulnerable to adversarial attacks.
We develop a new class of attack algorithms that use componentwise relative perturbations.
arXiv Detail & Related papers (2023-06-05T14:28:39Z)
- Evaluating Adversarial Robustness on Document Image Classification [0.0]
We apply the adversarial attack philosophy to document and natural image data and aim to protect models against such attacks.
We focus our work on untargeted gradient-based, transfer-based and score-based attacks and evaluate the impact of adversarial training.
arXiv Detail & Related papers (2023-04-24T22:57:59Z)
- Just Rotate it: Deploying Backdoor Attacks via Rotation Transformation [48.238349062995916]
We find that highly effective backdoors can be easily inserted using rotation-based image transformation.
Our work highlights a new, simple, physically realizable, and highly effective vector for backdoor attacks.
arXiv Detail & Related papers (2022-07-22T00:21:18Z)
- Identification of Attack-Specific Signatures in Adversarial Examples [62.17639067715379]
We show that different attack algorithms produce adversarial examples which are distinct not only in their effectiveness but also in how they qualitatively affect their victims.
Our findings suggest that prospective adversarial attacks should be compared not only via their success rates at fooling models but also via deeper downstream effects they have on victims.
arXiv Detail & Related papers (2021-10-13T15:40:48Z)
- Attack to Fool and Explain Deep Networks [59.97135687719244]
We counter-argue by providing evidence of human-meaningful patterns in adversarial perturbations.
Our major contribution is a novel pragmatic adversarial attack that is subsequently transformed into a tool to interpret the visual models.
arXiv Detail & Related papers (2021-06-20T03:07:36Z)
- Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks [104.8737334237993]
We present comprehensive investigations into the vulnerability of deep image-to-image models to adversarial attacks.
For five popular image-to-image tasks, 16 deep models are analyzed from various standpoints.
We show that unlike in image classification tasks, the performance degradation on image-to-image tasks can largely differ depending on various factors.
arXiv Detail & Related papers (2021-04-30T14:20:33Z)
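For context on the fast gradient sign method (FGSM) named in the "Adversarial Attacks on Image Classification Models: FGSM and Patch Attacks and their Impact" entry above, the following is a minimal sketch of the standard FGSM update, x_adv = x + epsilon * sign(grad_x L), written in PyTorch. The model, inputs, and epsilon value are placeholders, not details taken from that chapter.

```python
# Minimal FGSM sketch (PyTorch). "model", "image", and "label" are placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a copy of `image` perturbed by epsilon * sign(input gradient)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixel range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```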