Adversarial Zoom Lens: A Novel Physical-World Attack to DNNs
- URL: http://arxiv.org/abs/2206.12251v2
- Date: Tue, 23 May 2023 15:41:03 GMT
- Title: Adversarial Zoom Lens: A Novel Physical-World Attack to DNNs
- Authors: Chengyin Hu, Weiwen Shi
- Abstract summary: In this paper, we demonstrate a novel physical adversarial attack technique called Adversarial Zoom Lens (AdvZL).
AdvZL uses a zoom lens to zoom in and out of pictures of the physical world, fooling DNNs without changing the characteristics of the target object.
In the digital environment, we construct a data set based on AdvZL to verify the adversarial effect of equal-scale enlarged images on DNNs.
In the physical environment, we manipulate the zoom lens to zoom in and out of the target object, and generate adversarial samples.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although deep neural networks (DNNs) are known to be fragile, the effect of
zooming in and out of physical-world images on DNN performance has not been
studied. In this paper, we demonstrate a novel physical adversarial attack
technique called Adversarial Zoom Lens (AdvZL), which uses a zoom lens to zoom
in and out of pictures of the physical world, fooling DNNs without changing the
characteristics of the target object. The proposed method is, so far, the only
adversarial attack technique that attacks DNNs without adding a physical
adversarial perturbation. In the digital environment, we construct a data set
based on AdvZL to verify the adversarial effect of equal-scale enlarged images
on DNNs. In the physical environment, we manipulate the zoom lens to zoom in
and out of the target object and generate adversarial samples. The experimental
results demonstrate the effectiveness of AdvZL in both digital and physical
environments. We further analyze the adversarial effect of the proposed data
set on improved DNNs. We also provide a guideline for defending against AdvZL
by means of adversarial training. Finally, we discuss the threat the proposed
approach poses to future autonomous driving, as well as variant attack ideas
similar to the proposed attack.
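To make the digital half of the pipeline concrete, the sketch below shows one way the equal-scale zoom described above could be simulated in software: a centre crop followed by a resize back to the original resolution stands in for an optical zoom-in, and a pretrained classifier's prediction is compared before and after the transform. This is a minimal illustration, not the authors' code; the ResNet-50 model, the zoom factors, and the "example.jpg" input are assumptions.

```python
# Minimal sketch (assumptions: ResNet-50 classifier, bilinear resampling,
# illustrative zoom factors, a placeholder "example.jpg") of how the digital
# part of AdvZL could be simulated: "zooming in" is approximated by a centre
# crop followed by an equal-scale resize back to the original resolution,
# and the classifier's prediction is compared before and after the zoom.
import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet preprocessing for the pretrained classifier.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def zoom_in(img: Image.Image, factor: float) -> Image.Image:
    """Centre-crop by `factor` and resize back, emulating an optical zoom-in."""
    w, h = img.size
    cw, ch = int(w / factor), int(h / factor)
    left, top = (w - cw) // 2, (h - ch) // 2
    crop = img.crop((left, top, left + cw, top + ch))
    return crop.resize((w, h), Image.BILINEAR)

@torch.no_grad()
def predict(model: torch.nn.Module, img: Image.Image) -> int:
    """Return the top-1 ImageNet class index for a PIL image."""
    return model(preprocess(img).unsqueeze(0)).argmax(dim=1).item()

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()
img = Image.open("example.jpg").convert("RGB")  # placeholder input image

base = predict(model, img)
for factor in (1.2, 1.5, 2.0, 3.0):  # hypothetical zoom factors
    label = predict(model, zoom_in(img, factor))
    if label != base:
        print(f"zoom x{factor}: prediction changed {base} -> {label}")
```

The same transform could also be used to generate zoomed copies of a training set, which is one plausible way to realise the adversarial-training defence the abstract suggests.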
Related papers
- Attack Anything: Blind DNNs via Universal Background Adversarial Attack [17.73886733971713]
It has been widely substantiated that deep neural networks (DNNs) are susceptible and vulnerable to adversarial perturbations.
We propose a background adversarial attack framework to attack anything, by which the attack efficacy generalizes well between diverse objects, models, and tasks.
We conduct comprehensive and rigorous experiments in both digital and physical domains across various objects, models, and tasks, demonstrating the effectiveness of the proposed attack-anything method.
arXiv Detail & Related papers (2024-08-17T12:46:53Z) - Not So Robust After All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks [5.024667090792856]
Deep neural networks (DNNs) have gained prominence in various applications, such as classification, recognition, and prediction.
A fundamental attribute of traditional DNNs is their vulnerability to modifications in input data, which has resulted in the investigation of adversarial attacks.
This study aims to challenge the efficacy and generalization of contemporary defense mechanisms against adversarial attacks.
arXiv Detail & Related papers (2023-08-12T05:21:34Z) - Adversarial alignment: Breaking the trade-off between the strength of an attack and its relevance to human perception [10.883174135300418]
Adversarial attacks have long been considered the "Achilles' heel" of deep learning.
Here, we investigate how the robustness of DNNs to adversarial attacks has evolved as their accuracy on ImageNet has continued to improve.
arXiv Detail & Related papers (2023-06-05T20:26:17Z) - Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data [15.084703823643311]
Spiking neural networks (SNNs) offer enhanced energy efficiency and biologically plausible data processing capabilities.
This paper delves into backdoor attacks in SNNs using neuromorphic datasets and diverse triggers.
We present various attack strategies, achieving an attack success rate of up to 100% while maintaining a negligible impact on clean accuracy.
arXiv Detail & Related papers (2023-02-13T11:34:17Z) - Towards Understanding and Boosting Adversarial Transferability from a Distribution Perspective [80.02256726279451]
Adversarial attacks against deep neural networks (DNNs) have received broad attention in recent years.
We propose a novel method that crafts adversarial examples by manipulating the distribution of the image.
Our method can significantly improve the transferability of the crafted attacks and achieves state-of-the-art performance in both untargeted and targeted scenarios.
arXiv Detail & Related papers (2022-10-09T09:58:51Z) - Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z) - Bio-Inspired Adversarial Attack Against Deep Neural Networks [28.16483200512112]
The paper develops a new adversarial attack against deep neural networks (DNN) based on applying bio-inspired design to moving objects.
To the best of our knowledge, this is the first work to introduce physical attacks with a moving object.
arXiv Detail & Related papers (2021-06-30T03:23:52Z) - Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z) - Error Diffusion Halftoning Against Adversarial Examples [85.11649974840758]
Adversarial examples contain carefully crafted perturbations that can fool deep neural networks into making wrong predictions.
We propose a new image transformation defense based on error diffusion halftoning, and combine it with adversarial training to defend against adversarial examples.
arXiv Detail & Related papers (2021-01-23T07:55:02Z) - Robust Attacks on Deep Learning Face Recognition in the Physical World [48.909604306342544]
FaceAdv is a physical-world attack that crafts adversarial stickers to deceive FR systems.
It mainly consists of a sticker generator and a transformer, where the former can craft several stickers with different shapes.
We conduct extensive experiments to evaluate the effectiveness of FaceAdv on attacking 3 typical FR systems.
arXiv Detail & Related papers (2020-11-27T02:24:43Z) - Adversarial Exposure Attack on Diabetic Retinopathy Imagery Grading [75.73437831338907]
Diabetic Retinopathy (DR) is a leading cause of vision loss around the world.
To help diagnose it, numerous cutting-edge works have built powerful deep neural networks (DNNs) to automatically grade DR via retinal fundus images (RFIs).
RFIs are commonly affected by camera exposure issues that may lead to incorrect grades.
In this paper, we study this problem from the viewpoint of adversarial attacks.
arXiv Detail & Related papers (2020-09-19T13:47:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.