Adversarial Neon Beam: A Light-based Physical Attack to DNNs
- URL: http://arxiv.org/abs/2204.00853v3
- Date: Tue, 23 May 2023 07:42:50 GMT
- Title: Adversarial Neon Beam: A Light-based Physical Attack to DNNs
- Authors: Chengyin Hu, Weiwen Shi, Wen Li
- Abstract summary: In this study, we introduce a novel light-based attack called the adversarial neon beam (AdvNB).
Our approach is evaluated on three key criteria: effectiveness, stealthiness, and robustness.
By using common neon beams as perturbations, we enhance the stealthiness of the proposed attack, enabling physical samples to appear more natural.
- Score: 17.555617901536404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the physical world, deep neural networks (DNNs) are impacted by light and
shadow, which can have a significant effect on their performance. While
stickers have traditionally been used as perturbations in most physical
attacks, their perturbations can often be easily detected. To address this,
some studies have explored the use of light-based perturbations, such as lasers
or projectors, to generate more subtle perturbations, which are artificial
rather than natural. In this study, we introduce a novel light-based attack
called the adversarial neon beam (AdvNB), which utilizes common neon beams to
create a natural black-box physical attack. Our approach is evaluated on three
key criteria: effectiveness, stealthiness, and robustness. Quantitative results
obtained in simulated environments demonstrate the effectiveness of the
proposed method, and in physical scenarios, we achieve an attack success rate
of 81.82%, surpassing the baseline. By using common neon beams as
perturbations, we enhance the stealthiness of the proposed attack, enabling
physical samples to appear more natural. Moreover, we validate the robustness
of our approach by successfully attacking advanced DNNs with a success rate of
over 75% in all cases. We also discuss defense strategies against the AdvNB
attack and put forward other light-based physical attacks.
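To make the general recipe concrete, the following is a minimal illustrative sketch of a black-box, light-based attack in the spirit of AdvNB. It is not the authors' implementation: the beam renderer, the toy classifier interface, and the random-search loop are all simplified assumptions. The idea it captures is that the attacker only queries the victim model while searching over the physical parameters of a simulated neon beam (position, angle, width, color) until the prediction flips.

```python
# Hypothetical sketch only; NOT the authors' AdvNB code.
import numpy as np

def render_beam(image, center, angle, width, color, intensity=0.6):
    """Blend a soft, neon-beam-like stripe into an HxWx3 float image in [0, 1]."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Signed distance of each pixel from the beam's center line.
    dist = (xs - center[0]) * np.sin(angle) - (ys - center[1]) * np.cos(angle)
    mask = np.exp(-(dist / width) ** 2)          # Gaussian falloff across the beam
    beam = mask[..., None] * np.asarray(color)   # colored stripe
    return np.clip(image + intensity * beam, 0.0, 1.0)

def random_search_attack(image, predict, true_label, queries=200, seed=0):
    """Query-only (black-box) random search over beam parameters."""
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    for _ in range(queries):
        params = dict(
            center=(rng.uniform(0, w), rng.uniform(0, h)),
            angle=rng.uniform(0, np.pi),
            width=rng.uniform(2, 10),
            color=rng.uniform(0, 1, size=3),
        )
        candidate = render_beam(image, **params)
        if predict(candidate) != true_label:
            return candidate, params             # attack succeeded
    return None, None                            # query budget exhausted
```

In a physical deployment the rendering step would be replaced by actually shining a neon beam and re-photographing the scene, and the random search could be swapped for a more sample-efficient black-box optimizer; the sketch only shows the query-based search structure.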
Related papers
- Attack Anything: Blind DNNs via Universal Background Adversarial Attack [17.73886733971713]
It has been widely substantiated that deep neural networks (DNNs) are susceptible and vulnerable to adversarial perturbations.
We propose a background adversarial attack framework to attack anything, by which the attack efficacy generalizes well between diverse objects, models, and tasks.
We conduct comprehensive and rigorous experiments in both digital and physical domains across various objects, models, and tasks, demonstrating the effectiveness of the proposed attack-anything method.
arXiv Detail & Related papers (2024-08-17T12:46:53Z)
- Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We propose that by employing benign resampling and benign rigid transformations, we can further enhance physical adversarial strength with little sacrifice to imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z)
- Towards Lightweight Black-Box Attacks against Deep Neural Networks [70.9865892636123]
We argue that black-box attacks can pose practical threats in settings where only a few test samples are available.
As only a few samples are required, we refer to these attacks as lightweight black-box attacks.
We propose Error TransFormer (ETF) for lightweight attacks to mitigate the approximation error.
arXiv Detail & Related papers (2022-09-29T14:43:03Z)
- Adversarial Catoptric Light: An Effective, Stealthy and Robust Physical-World Attack to DNNs [0.0]
In this study, we introduce a novel physical attack, adversarial catoptric light (AdvCL), where adversarial perturbations are generated using a common natural phenomenon, catoptric light.
We evaluate the proposed method in three aspects: effectiveness, stealthiness, and robustness.
We achieve an attack success rate of 83.5%, surpassing the baseline.
arXiv Detail & Related papers (2022-09-19T12:33:46Z)
- Adversarial Color Projection: A Projector-based Physical Attack to DNNs [3.9477796725601872]
We propose a black-box projector-based physical attack, referred to as adversarial color projection (AdvCP).
We achieve an attack success rate of 97.60% on a subset of ImageNet, while in the physical environment, we attain an attack success rate of 100%.
When attacking advanced DNNs, experimental results show that our method can achieve more than 85% attack success rate.
arXiv Detail & Related papers (2022-09-19T12:27:32Z)
- Adversarial Camouflage for Node Injection Attack on Graphs [64.5888846198005]
Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates.
Our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes.
To address this, we focus on camouflaging the node injection attack, making injected nodes appear normal and imperceptible to defense/detection methods.
arXiv Detail & Related papers (2022-08-03T02:48:23Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- KATANA: Simple Post-Training Robustness Using Test Time Augmentations [49.28906786793494]
A leading defense against such attacks is adversarial training, a technique in which a DNN is trained to be robust to adversarial attacks.
We propose a new simple and easy-to-use technique, KATANA, for robustifying an existing pretrained DNN without modifying its weights.
Our strategy achieves state-of-the-art adversarial robustness on diverse attacks with minimal compromise on the natural images' classification.
arXiv Detail & Related papers (2021-09-16T19:16:00Z)
- Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink [15.54571899946818]
We show by simply using a laser beam that deep neural networks (DNNs) are easily fooled.
We propose a novel attack method called Adversarial Laser Beam ($AdvLB$), which manipulates the laser beam's physical parameters to perform an adversarial attack.
arXiv Detail & Related papers (2021-03-11T07:03:21Z)
- SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers [82.19722134082645]
A stealthy projector-based adversarial attack is proposed in this paper.
We approximate the real project-and-capture operation using a deep neural network named PCNet.
Our experiments show that the proposed SPAA clearly outperforms other methods by achieving higher attack success rates.
arXiv Detail & Related papers (2020-12-10T18:14:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.