Adversarial Catoptric Light: An Effective, Stealthy and Robust
Physical-World Attack to DNNs
- URL: http://arxiv.org/abs/2209.11739v2
- Date: Tue, 23 May 2023 14:05:08 GMT
- Title: Adversarial Catoptric Light: An Effective, Stealthy and Robust
Physical-World Attack to DNNs
- Authors: Chengyin Hu, Weiwen Shi
- Abstract summary: In this study, we introduce a novel physical attack, adversarial catoptric light (AdvCL), where adversarial perturbations are generated using a common natural phenomenon, catoptric light.
We evaluate the proposed method in three aspects: effectiveness, stealthiness, and robustness.
We achieve an attack success rate of 83.5%, surpassing the baseline.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have demonstrated exceptional success across
various tasks, underscoring the need to evaluate the robustness of advanced
DNNs. However, traditional methods using stickers as physical perturbations to
deceive classifiers present challenges in achieving stealthiness and suffer
from printing loss. Recent advancements in physical attacks have utilized light
beams such as lasers and projectors to perform attacks, where the optical
patterns generated are artificial rather than natural. In this study, we
introduce a novel physical attack, adversarial catoptric light (AdvCL), where
adversarial perturbations are generated using a common natural phenomenon,
catoptric light, to achieve stealthy and naturalistic adversarial attacks
against advanced DNNs in a black-box setting. We evaluate the proposed method
in three aspects: effectiveness, stealthiness, and robustness. Quantitative
results obtained in simulated environments demonstrate the effectiveness of the
proposed method, and in physical scenarios, we achieve an attack success rate
of 83.5%, surpassing the baseline. We use common catoptric light as a
perturbation to enhance the stealthiness of the method and make physical
samples appear more natural. Robustness is validated by successfully attacking
advanced and robust DNNs with a success rate over 80% in all cases.
Additionally, we discuss defense strategies against AdvCL and outline prospective
light-based physical attacks.
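To make the described pipeline concrete, the following is a minimal, hypothetical sketch (not the authors' released implementation): a catoptric-light perturbation is modeled as a soft additive highlight whose physical parameters (position, radius, tint, intensity) are tuned by black-box random search against a pretrained ImageNet classifier. The ResNet-50 model, the parameter ranges, and the file name "clean.jpg" are assumptions made for illustration only.

```python
# Hypothetical sketch of a catoptric-light-style black-box attack (assumptions noted above).
import torch
import torchvision
from torchvision.transforms import functional as TF

IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def add_light_spot(img, cx, cy, radius, color, intensity):
    """Overlay a soft circular highlight on a CHW image with values in [0, 1]."""
    _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    dist2 = ((xs - cx) ** 2 + (ys - cy) ** 2).float()
    mask = torch.exp(-dist2 / (2.0 * radius ** 2))       # Gaussian falloff of the spot
    spot = color.view(3, 1, 1) * mask * intensity
    return (img + spot).clamp(0.0, 1.0)

@torch.no_grad()
def predict(model, img):
    """Normalize a [0, 1] CHW image and return the top-1 class index."""
    x = ((img - IMAGENET_MEAN) / IMAGENET_STD).unsqueeze(0)
    return model(x).argmax(dim=1).item()

@torch.no_grad()
def black_box_light_attack(model, img, true_label, queries=500):
    """Random search over spot parameters; return the first fooling sample, if any."""
    _, h, w = img.shape
    for _ in range(queries):
        cx = torch.randint(0, w, (1,)).item()
        cy = torch.randint(0, h, (1,)).item()
        radius = torch.empty(1).uniform_(10.0, 80.0).item()
        color = torch.rand(3)                             # tint of the reflected light
        intensity = torch.empty(1).uniform_(0.3, 1.0).item()
        adv = add_light_spot(img, cx, cy, radius, color, intensity)
        if predict(model, adv) != true_label:             # untargeted success
            return adv, (cx, cy, radius, intensity)
    return None, None

if __name__ == "__main__":
    model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
    # "clean.jpg" is a placeholder path for a correctly classified input image.
    img = torchvision.io.read_image("clean.jpg").float() / 255.0
    img = TF.resize(img, [224, 224], antialias=True)
    label = predict(model, img)
    adv, params = black_box_light_attack(model, img, label)
    print("success" if adv is not None else "no fooling sample found", params)
```

A physical-world attack would additionally have to realize the found parameters with an actual reflective surface and re-capture the scene, which this digital sketch does not model.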
Related papers
- Attack Anything: Blind DNNs via Universal Background Adversarial Attack [17.73886733971713]
It has been widely substantiated that deep neural networks (DNNs) are susceptible and vulnerable to adversarial perturbations.
We propose a background adversarial attack framework to attack anything, by which the attack efficacy generalizes well between diverse objects, models, and tasks.
We conduct comprehensive and rigorous experiments in both digital and physical domains across various objects, models, and tasks, demonstrating the effectiveness of the proposed attack-anything method.
arXiv Detail & Related papers (2024-08-17T12:46:53Z) - Hide in Thicket: Generating Imperceptible and Rational Adversarial
Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We propose that by employing benign resampling and benign rigid transformations, we can further enhance physical adversarial strength with little sacrifice to imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z) - Physical Adversarial Attack meets Computer Vision: A Decade Survey [57.46379460600939]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z) - Adversarial Color Projection: A Projector-based Physical Attack to DNNs [3.9477796725601872]
We propose a black-box projector-based physical attack, referred to as adversarial color projection (AdvCP).
We achieve an attack success rate of 97.60% on a subset of ImageNet, while in the physical environment, we attain an attack success rate of 100%.
When attacking advanced DNNs, experimental results show that our method can achieve more than 85% attack success rate.
arXiv Detail & Related papers (2022-09-19T12:27:32Z) - Adversarial Neon Beam: A Light-based Physical Attack to DNNs [17.555617901536404]
In this study, we introduce a novel light-based attack called the adversarial neon beam (AdvNB).
Our approach is evaluated on three key criteria: effectiveness, stealthiness, and robustness.
By using common neon beams as perturbations, we enhance the stealthiness of the proposed attack, enabling physical samples to appear more natural.
arXiv Detail & Related papers (2022-04-02T12:57:00Z) - Shadows can be Dangerous: Stealthy and Effective Physical-world
Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z) - KATANA: Simple Post-Training Robustness Using Test Time Augmentations [49.28906786793494]
A leading defense against such attacks is adversarial training, a technique in which a DNN is trained to be robust to adversarial attacks.
We propose a new simple and easy-to-use technique, KATANA, for robustifying an existing pretrained DNN without modifying its weights.
Our strategy achieves state-of-the-art adversarial robustness against diverse attacks with minimal compromise to classification accuracy on natural images (a minimal sketch of this test-time-augmentation idea appears after this list).
arXiv Detail & Related papers (2021-09-16T19:16:00Z) - Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
arXiv Detail & Related papers (2021-06-21T21:42:08Z) - Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a
Blink [15.54571899946818]
We show by simply using a laser beam that deep neural networks (DNNs) are easily fooled.
We propose a novel attack method called Adversarial Laser Beam ($AdvLB$), which enables manipulation of the laser beam's physical parameters to perform adversarial attacks.
arXiv Detail & Related papers (2021-03-11T07:03:21Z) - SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image
Classifiers [82.19722134082645]
A stealthy projector-based adversarial attack is proposed in this paper.
We approximate the real project-and-capture operation using a deep neural network named PCNet.
Our experiments show that the proposed SPAA clearly outperforms other methods by achieving higher attack success rates.
arXiv Detail & Related papers (2020-12-10T18:14:03Z)
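As referenced in the KATANA entry above, the test-time-augmentation defense lends itself to a similarly compact illustration. The following is a hypothetical sketch of the general idea (not the paper's implementation): the pretrained model's weights are untouched, and the prediction is an average over randomly flipped, shifted, and rotated copies of the input. The augmentation ranges and the function name tta_predict are assumptions made for this example.

```python
# Hypothetical sketch of a test-time-augmentation defense (assumptions noted above).
import torch
from torchvision.transforms import functional as TF

@torch.no_grad()
def tta_predict(model, img, n_aug=16, max_shift=8, max_angle=15.0):
    """Average softmax scores over randomly flipped/shifted/rotated copies of a CHW image."""
    probs = []
    for _ in range(n_aug):
        aug = img
        if torch.rand(1).item() < 0.5:
            aug = TF.hflip(aug)                           # random horizontal flip
        dx = int(torch.randint(-max_shift, max_shift + 1, (1,)))
        dy = int(torch.randint(-max_shift, max_shift + 1, (1,)))
        angle = float(torch.empty(1).uniform_(-max_angle, max_angle))
        aug = TF.affine(aug, angle=angle, translate=[dx, dy], scale=1.0, shear=[0.0])
        probs.append(model(aug.unsqueeze(0)).softmax(dim=1))
    return torch.cat(probs).mean(dim=0).argmax().item()   # aggregated top-1 prediction
```

The model's weights are never modified; robustness comes only from averaging predictions over perturbed views of the input.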