Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink
- URL: http://arxiv.org/abs/2103.06504v1
- Date: Thu, 11 Mar 2021 07:03:21 GMT
- Title: Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink
- Authors: Ranjie Duan, Xiaofeng Mao, A. K. Qin, Yun Yang, Yuefeng Chen, Shaokai Ye, Yuan He
- Abstract summary: We show that deep neural networks (DNNs) are easily fooled by simply using a laser beam.
We propose a novel attack method called Adversarial Laser Beam ($AdvLB$), which enables manipulation of a laser beam's physical parameters to perform adversarial attacks.
- Score: 15.54571899946818
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Though it is well known that the performance of deep neural networks (DNNs) degrades under certain light conditions, there exists no study on the threat of light beams emitted from a physical source acting as an adversarial attacker on DNNs in a real-world scenario. In this work, we show that DNNs are easily fooled by simply using a laser beam. To this end, we propose a novel attack method called Adversarial Laser Beam ($AdvLB$), which enables manipulation of a laser beam's physical parameters to perform adversarial attacks. Experiments demonstrate the effectiveness of our proposed approach in both digital and physical settings. We further empirically analyze the evaluation results and reveal that the proposed laser beam attack may lead to some interesting prediction errors of state-of-the-art DNNs. We envisage that the proposed $AdvLB$ method enriches the current family of adversarial attacks and builds the foundation for future robustness studies on light.
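To make the setup concrete, here is a minimal sketch of what a query-based laser-beam attack could look like. It is not the authors' implementation: the beam model (a straight line with a wavelength-dependent color and Gaussian falloff), the parameter ranges, and the black-box `predict` callable are illustrative assumptions.

```python
# Minimal sketch of a query-based laser-beam attack in the spirit of AdvLB.
# NOT the authors' implementation: the beam model and parameter ranges are
# illustrative assumptions; `predict` is an assumed black-box classifier.
import numpy as np

def wavelength_to_rgb(wl_nm):
    """Very crude visible-wavelength (380-750 nm) to RGB mapping."""
    if 380 <= wl_nm < 490:
        return np.array([0.0, (wl_nm - 380) / 110, 1.0])
    if 490 <= wl_nm < 580:
        return np.array([(wl_nm - 490) / 90, 1.0, 0.0])
    return np.array([1.0, max(0.0, (750 - wl_nm) / 170), 0.0])

def render_beam(image, wl_nm, angle, offset, width, intensity):
    """Composite a straight laser line onto an HxWx3 float image in [0, 1]."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Distance of each pixel to the line x*sin(angle) - y*cos(angle) = offset.
    dist = np.abs(xs * np.sin(angle) - ys * np.cos(angle) - offset)
    mask = np.exp(-((dist / width) ** 2))[..., None]      # Gaussian falloff
    beam = wavelength_to_rgb(wl_nm)[None, None, :]
    return np.clip(image + intensity * mask * beam, 0.0, 1.0)

def advlb_random_search(image, label, predict, n_queries=1000, seed=0):
    """Randomly search beam parameters until the predicted label flips."""
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    for _ in range(n_queries):
        params = dict(wl_nm=rng.uniform(380, 750),
                      angle=rng.uniform(0, np.pi),
                      offset=rng.uniform(-max(h, w), max(h, w)),
                      width=rng.uniform(1, 20),
                      intensity=rng.uniform(0.2, 1.0))
        adv = render_beam(image, **params)
        if predict(adv) != label:                          # attack succeeded
            return adv, params
    return None, None
```

In a physical deployment, the same handful of parameters (wavelength, angle, position, width, intensity) would drive an actual laser pointer rather than a renderer, which suggests why such attacks are cheap to mount.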
Related papers
- Adversarial Catoptric Light: An Effective, Stealthy and Robust Physical-World Attack to DNNs [0.0]
In this study, we introduce a novel physical attack, adversarial catoptric light (AdvCL), where adversarial perturbations are generated using a common natural phenomenon, catoptric light.
We evaluate the proposed method in three aspects: effectiveness, stealthiness, and robustness.
We achieve an attack success rate of 83.5%, surpassing the baseline.
arXiv Detail & Related papers (2022-09-19T12:33:46Z)
- Adversarial Color Projection: A Projector-based Physical Attack to DNNs [3.9477796725601872]
We propose a black-box projector-based physical attack, referred to as adversarial color projection (AdvCP).
We achieve an attack success rate of 97.60% on a subset of ImageNet, while in the physical environment, we attain an attack success rate of 100%.
Experimental results show that our method achieves an attack success rate of more than 85% when attacking advanced DNNs.
arXiv Detail & Related papers (2022-09-19T12:27:32Z)
- Adversarial Zoom Lens: A Novel Physical-World Attack to DNNs [0.0]
In this paper, we demonstrate a novel physical adversarial attack technique called Adversarial Zoom Lens (AdvZL).
AdvZL uses a zoom lens to zoom in and out of pictures of the physical world, fooling DNNs without changing the characteristics of the target object.
In a digital environment, we construct a dataset based on AdvZL to verify the adversarial effect of equal-scale enlarged images on DNNs.
In the physical environment, we manipulate the zoom lens to zoom in and out of the target object, and generate adversarial samples.
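As a rough illustration of the idea, the sketch below re-samples an image at a grid of zoom factors and keeps any factor that flips the prediction; the nearest-neighbor `zoom` helper and the black-box `predict` callable are assumptions, not the paper's code.

```python
# Rough sketch of a zoom-style search in the spirit of AdvZL (an assumption,
# not the paper's code). Nearest-neighbor resampling keeps it dependency-free;
# `predict` is an assumed black-box classifier returning a label.
import numpy as np

def zoom(image, scale):
    """Zoom into (scale > 1) or out of (scale < 1) the image center."""
    h, w, _ = image.shape
    ys = np.clip(((np.arange(h) - h / 2) / scale + h / 2).astype(int), 0, h - 1)
    xs = np.clip(((np.arange(w) - w / 2) / scale + w / 2).astype(int), 0, w - 1)
    return image[np.ix_(ys, xs)]                 # re-sample on the new grid

def advzl_search(image, label, predict, scales=np.linspace(0.5, 2.0, 31)):
    """Try a grid of zoom factors; return the first one that fools the model."""
    for s in scales:
        adv = zoom(image, s)
        if predict(adv) != label:
            return adv, s
    return None, None
```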
arXiv Detail & Related papers (2022-06-23T13:03:08Z)
- Adversarial Laser Spot: Robust and Covert Physical-World Attack to DNNs [15.620269826381437]
We propose a light-based physical attack, called adversarial laser spot (AdvLS).
It optimizes the physical parameters of laser spots through a genetic algorithm to perform physical attacks.
It is the first light-based attack that can be carried out in the daytime.
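A minimal sketch of such a genetic search is given below, assuming a crude circular-spot rendering model and a black-box `score(image, label)` callable returning the true-class confidence; neither is from the paper.

```python
# Minimal sketch of a genetic search over laser-spot parameters in the spirit
# of AdvLS (an assumption, not the authors' implementation). Each genome is
# (x, y, radius, intensity); fitness is the drop in the true-class confidence
# returned by an assumed black-box `score(image, label)` callable.
import numpy as np

def render_spot(image, x, y, radius, intensity):
    """Add a soft circular reddish spot (crude laser-spot model)."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.exp(-(((xs - x) ** 2 + (ys - y) ** 2) / radius ** 2))[..., None]
    spot = np.array([1.0, 0.1, 0.1])[None, None, :]
    return np.clip(image + intensity * mask * spot, 0.0, 1.0)

def advls_ga(image, label, score, pop=20, gens=50, seed=0):
    """Evolve spot parameters to minimize the true-class confidence."""
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    lo = np.array([0.0, 0.0, 2.0, 0.2])          # parameter lower bounds
    hi = np.array([float(w), float(h), 30.0, 1.0])
    genomes = rng.uniform(lo, hi, size=(pop, 4))
    fitness = lambda g: -score(render_spot(image, *g), label)
    for _ in range(gens):
        ranked = genomes[np.argsort([fitness(g) for g in genomes])]
        parents = ranked[-pop // 2:]             # keep the fittest half
        # Crossover (averaging shuffled parent pairs) plus Gaussian mutation.
        children = (parents[rng.permutation(len(parents))]
                    + parents[rng.permutation(len(parents))]) / 2
        children += rng.normal(0.0, 0.05, children.shape) * (hi - lo)
        genomes = np.vstack([parents, np.clip(children, lo, hi)])
    best = max(genomes, key=fitness)
    return render_spot(image, *best), best
```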
arXiv Detail & Related papers (2022-06-02T13:15:08Z)
- Adversarial Neon Beam: A Light-based Physical Attack to DNNs [17.555617901536404]
In this study, we introduce a novel light-based attack called the adversarial neon beam (AdvNB).
Our approach is evaluated on three key criteria: effectiveness, stealthiness, and robustness.
By using common neon beams as perturbations, we enhance the stealthiness of the proposed attack, enabling physical samples to appear more natural.
arXiv Detail & Related papers (2022-04-02T12:57:00Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
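For a concrete picture of the idea, the sketch below darkens a random triangular region and keeps any triangle that fools the model; the triangle shape, darkness factor, and black-box `predict` callable are assumptions rather than the paper's optimization procedure.

```python
# Illustrative sketch of a shadow-style attack: darken a random triangular
# region and keep any triangle that flips the prediction. The triangle model,
# darkness factor, and black-box `predict` callable are assumptions.
import numpy as np

def add_shadow(image, tri, darkness=0.5):
    """Darken pixels inside the triangle `tri` (a 3x2 array of (x, y) points)."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    signs = []
    for i in range(3):                 # edge cross-products (half-plane tests)
        (ax, ay), (bx, by) = tri[i], tri[(i + 1) % 3]
        signs.append((bx - ax) * (ys - ay) - (by - ay) * (xs - ax))
    # Inside if all three cross-products share a sign (either winding order).
    inside = ((signs[0] >= 0) & (signs[1] >= 0) & (signs[2] >= 0)) | \
             ((signs[0] <= 0) & (signs[1] <= 0) & (signs[2] <= 0))
    out = image.copy()
    out[inside] *= darkness            # uniform attenuation inside the shadow
    return out

def shadow_search(image, label, predict, n_tries=500, seed=0):
    """Randomly sample triangles until one fools the classifier."""
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    for _ in range(n_tries):
        tri = rng.uniform([0, 0], [w, h], size=(3, 2))
        adv = add_shadow(image, tri)
        if predict(adv) != label:
            return adv, tri
    return None, None
```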
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers [82.19722134082645]
A stealthy projector-based adversarial attack is proposed in this paper.
We approximate the real project-and-capture operation using a deep neural network named PCNet.
Our experiments show that the proposed SPAA clearly outperforms other methods by achieving higher attack success rates.
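To illustrate the overall recipe, the rough PyTorch sketch below optimizes a projector pattern through a stand-in differentiable surrogate of the project-and-capture process, with an L1 penalty for stealth; the tiny surrogate network is a placeholder assumption, not the paper's PCNet, and `classifier` is an assumed frozen model returning logits.

```python
# Rough PyTorch sketch of attacking "through" a differentiable surrogate of
# the project-and-capture process. The tiny surrogate below is a placeholder
# assumption and NOT the paper's PCNet; `classifier` is an assumed frozen
# model returning logits for NCHW images in [0, 1].
import torch
import torch.nn as nn

class TinySurrogate(nn.Module):
    """Stand-in mapping a projected pattern to the camera-captured image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, pattern):
        return self.net(pattern)

def spaa_style_attack(pattern, label, surrogate, classifier,
                      steps=100, lr=0.01, stealth_weight=10.0):
    """Optimize the projector pattern so the simulated capture is
    misclassified while staying close to the original pattern (stealth)."""
    adv = pattern.clone().requires_grad_(True)
    opt = torch.optim.Adam([adv], lr=lr)
    for _ in range(steps):
        captured = surrogate(adv)              # simulated project-and-capture
        logits = classifier(captured)
        loss = (logits[:, label].mean()        # push the true class down
                + stealth_weight * (adv - pattern).abs().mean())
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            adv.clamp_(0.0, 1.0)               # keep a valid image
    return adv.detach()
```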
arXiv Detail & Related papers (2020-12-10T18:14:03Z)
- Adversarial Exposure Attack on Diabetic Retinopathy Imagery Grading [75.73437831338907]
Diabetic Retinopathy (DR) is a leading cause of vision loss around the world.
To help diagnose it, numerous cutting-edge works have built powerful deep neural networks (DNNs) to automatically grade DR via retinal fundus images (RFIs).
RFIs are commonly affected by camera exposure issues that may lead to incorrect grades.
In this paper, we study this problem from the viewpoint of adversarial attacks.
arXiv Detail & Related papers (2020-09-19T13:47:33Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.