Shadows can be Dangerous: Stealthy and Effective Physical-world
Adversarial Attack by Natural Phenomenon
- URL: http://arxiv.org/abs/2203.03818v2
- Date: Wed, 9 Mar 2022 12:06:47 GMT
- Title: Shadows can be Dangerous: Stealthy and Effective Physical-world
Adversarial Attack by Natural Phenomenon
- Authors: Yiqi Zhong, Xianming Liu, Deming Zhai, Junjun Jiang, Xiangyang Ji
- Abstract summary: We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack in both simulated and real-world environments.
- Score: 79.33449311057088
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Estimating the risk level of adversarial examples is essential for safely
deploying machine learning models in the real world. One popular approach for
physical-world attacks is to adopt the "sticker-pasting" strategy, which
however suffers from limitations such as difficulty in accessing the target
and the constraint of printable colors. A new type of non-invasive attack has
emerged recently, which attempts to cast perturbations onto the target with
optics-based tools such as laser beams and projectors. However, the added
optical patterns are artificial rather than natural, so they remain conspicuous
and attention-grabbing and can be easily noticed by humans. In this paper, we study
a new type of optical adversarial examples, in which the perturbations are
generated by a very common natural phenomenon, shadow, to achieve naturalistic
and stealthy physical-world adversarial attacks under the black-box setting. We
extensively evaluate the effectiveness of this new attack in both simulated and
real-world environments. Experimental results on traffic sign recognition
demonstrate that our algorithm can generate adversarial examples effectively,
reaching 98.23% and 90.47% success rates on LISA and GTSRB test sets
respectively, while continuously misleading a moving camera over 95% of the
time in real-world scenarios. We also offer discussions about the limitations
and the defense mechanism of this attack.
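The abstract does not spell out the optimization procedure, so the following is only a minimal sketch of the query-based black-box setting it describes: a shadow region is rendered onto a traffic-sign image and its shape is searched so that the classifier's confidence in the true class drops. The `predict_probs` interface, the triangular shadow model, the fixed attenuation factor, and the plain random search are all illustrative assumptions, not the paper's method.

```python
# Minimal sketch of a query-based black-box shadow attack (illustration only).
# Assumptions not taken from the paper: the victim model is exposed as
# `predict_probs(image) -> np.ndarray` of class probabilities, the shadow
# region is a single triangle, shading is a uniform intensity attenuation on
# an 8-bit image, and the search is random sampling rather than the paper's
# optimizer.
import numpy as np
from matplotlib.path import Path  # used only for point-in-polygon tests


def render_shadow(image, vertices, shade=0.45):
    """Darken the pixels that fall inside the triangle given by `vertices`."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    points = np.stack([xs.ravel(), ys.ravel()], axis=1)      # (x, y) pairs
    mask = Path(vertices).contains_points(points).reshape(h, w)
    shadowed = image.astype(np.float32)
    shadowed[mask] *= shade                                   # attenuate intensity
    return np.clip(shadowed, 0, 255).astype(image.dtype)


def shadow_attack(image, true_label, predict_probs, queries=200, seed=0):
    """Randomly sample triangles and keep the shadow that most reduces the
    classifier's confidence in the true label (untargeted attack)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    best_image, best_conf = image, 1.0
    for _ in range(queries):
        vertices = rng.uniform([0, 0], [w, h], size=(3, 2))   # random triangle
        candidate = render_shadow(image, vertices)
        conf = predict_probs(candidate)[true_label]
        if conf < best_conf:
            best_image, best_conf = candidate, conf
    return best_image, best_conf
```

In practice one would replace the random search with a stronger black-box optimizer and render the shadow in a perceptually motivated color space; the query loop above only captures the basic structure of such an attack.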
Related papers
- Why Don't You Clean Your Glasses? Perception Attacks with Dynamic
Optical Perturbations [17.761200546223442]
Adapting adversarial attacks to the physical world is desirable for the attacker, as this removes the need to compromise digital systems.
We present EvilEye, a man-in-the-middle perception attack that leverages transparent displays to generate dynamic physical adversarial examples.
arXiv Detail & Related papers (2023-07-24T21:16:38Z)
- Content-based Unrestricted Adversarial Attack [53.181920529225906]
We propose a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack.
By leveraging a low-dimensional manifold that represents natural images, we map the images onto the manifold and optimize them along its adversarial direction.
arXiv Detail & Related papers (2023-05-18T02:57:43Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [57.46379460600939]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Adversarial Catoptric Light: An Effective, Stealthy and Robust Physical-World Attack to DNNs [0.0]
In this study, we introduce a novel physical attack, adversarial catoptric light (AdvCL), where adversarial perturbations are generated using a common natural phenomenon, catoptric light.
We evaluate the proposed method in three aspects: effectiveness, stealthiness, and robustness.
We achieve an attack success rate of 83.5%, surpassing the baseline.
arXiv Detail & Related papers (2022-09-19T12:33:46Z)
- Adversarial Color Projection: A Projector-based Physical Attack to DNNs [3.9477796725601872]
We propose a black-box projector-based physical attack, referred to as adversarial color projection (AdvCP).
We achieve an attack success rate of 97.60% on a subset of ImageNet, while in the physical environment, we attain an attack success rate of 100%.
When attacking advanced DNNs, experimental results show that our method can achieve more than 85% attack success rate.
arXiv Detail & Related papers (2022-09-19T12:27:32Z)
- Real-World Adversarial Examples involving Makeup Application [58.731070632586594]
We propose a physical adversarial attack with the use of full-face makeup.
Our attack can effectively overcome manual errors in makeup application, such as color and position-related errors.
arXiv Detail & Related papers (2021-09-04T05:29:28Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Demiguise Attack: Crafting Invisible Semantic Adversarial Perturbations with Perceptual Similarity [5.03315505352304]
Adversarial examples are malicious images with visually imperceptible perturbations.
We propose Demiguise Attack, crafting "unrestricted" perturbations with Perceptual Similarity.
We extend widely-used attacks with our approach, enhancing adversarial effectiveness impressively while contributing to imperceptibility.
arXiv Detail & Related papers (2021-07-03T10:14:01Z)
- SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers [82.19722134082645]
A stealthy projector-based adversarial attack is proposed in this paper.
We approximate the real project-and-capture operation using a deep neural network named PCNet.
Our experiments show that the proposed SPAA clearly outperforms other methods by achieving higher attack success rates.
arXiv Detail & Related papers (2020-12-10T18:14:03Z)
- SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations [19.14079118174123]
Short-Lived Adversarial Perturbations (SLAP) is a novel technique that allows adversaries to realize physically robust real-world adversarial examples (AEs) by using a light projector.
SLAP allows the adversary greater control over the attack compared to adversarial patches.
We study the feasibility of SLAP in the self-driving scenario, targeting both object detector and traffic sign recognition tasks.
arXiv Detail & Related papers (2020-07-08T14:11:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.