Why Don't You Clean Your Glasses? Perception Attacks with Dynamic
Optical Perturbations
- URL: http://arxiv.org/abs/2307.13131v2
- Date: Thu, 27 Jul 2023 21:58:03 GMT
- Title: Why Don't You Clean Your Glasses? Perception Attacks with Dynamic
Optical Perturbations
- Authors: Yi Han, Matthew Chan, Eric Wengrowski, Zhuohuan Li, Nils Ole
Tippenhauer, Mani Srivastava, Saman Zonouz, Luis Garcia
- Abstract summary: Adapting adversarial attacks to the physical world is desirable for the attacker, as this removes the need to compromise digital systems.
We present EvilEye, a man-in-the-middle perception attack that leverages transparent displays to generate dynamic physical adversarial examples.
- Score: 17.761200546223442
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Camera-based autonomous systems that emulate human perception are
increasingly being integrated into safety-critical platforms. Consequently, an
established body of literature has emerged that explores adversarial attacks
targeting the underlying machine learning models. Adapting adversarial attacks
to the physical world is desirable for the attacker, as this removes the need
to compromise digital systems. However, the real world poses challenges related
to the "survivability" of adversarial manipulations given environmental noise
in perception pipelines and the dynamicity of autonomous systems. In this
paper, we take a sensor-first approach. We present EvilEye, a man-in-the-middle
perception attack that leverages transparent displays to generate dynamic
physical adversarial examples. EvilEye exploits the camera's optics to induce
misclassifications under a variety of illumination conditions. To generate
dynamic perturbations, we formalize the projection of a digital attack into the
physical domain by modeling the transformation function of the captured image
through the optical pipeline. Our extensive experiments show that EvilEye's
generated adversarial perturbations are much more robust across varying
environmental light conditions relative to existing physical perturbation
frameworks, achieving a high attack success rate (ASR) while bypassing
state-of-the-art physical adversarial detection frameworks. We demonstrate that
the dynamic nature of EvilEye enables attackers to adapt adversarial examples
across a variety of objects with a significantly higher ASR compared to
state-of-the-art physical world attack frameworks. Finally, we discuss
mitigation strategies against the EvilEye attack.
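
The abstract describes formalizing the projection of a digital attack into the physical domain by modeling the transformation of the captured image through the optical pipeline. As a rough illustration of that idea only, the sketch below optimizes a pattern shown on a transparent display through a toy differentiable optics model (defocus blur plus an illumination gain). The `optical_pipeline` function, its parameters, and the optimization loop are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch (not the paper's code): optimizing a display-borne
# perturbation through a differentiable stand-in for the optical pipeline.
import torch
import torch.nn.functional as F

def optical_pipeline(scene, display, blur_kernel, gain):
    """Toy transformation function: the transparent display sits near the
    lens, so its pattern is heavily defocused and additively composited
    with the scene, then scaled by an exposure/illumination gain.
    All modeling choices here are illustrative assumptions."""
    defocused = F.conv2d(display, blur_kernel,
                         padding=blur_kernel.shape[-1] // 2,
                         groups=scene.shape[1])
    return torch.clamp(gain * (scene + defocused), 0.0, 1.0)

def craft_perturbation(model, scene, target, steps=200, lr=0.01):
    """Optimize the displayed pattern so the captured image is classified
    as `target` across a range of sampled illumination gains."""
    display = torch.zeros_like(scene, requires_grad=True)
    blur = torch.ones(scene.shape[1], 1, 15, 15) / 225.0   # crude defocus PSF
    opt = torch.optim.Adam([display], lr=lr)
    for _ in range(steps):
        gain = torch.empty(1).uniform_(0.6, 1.4)            # illumination variation
        captured = optical_pipeline(scene, torch.sigmoid(display), blur, gain)
        loss = F.cross_entropy(model(captured), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(display).detach()
```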
Related papers
- Principles of Designing Robust Remote Face Anti-Spoofing Systems [60.05766968805833]
This paper sheds light on the vulnerabilities of state-of-the-art face anti-spoofing methods against digital attacks.
It presents a comprehensive taxonomy of common threats encountered in face anti-spoofing systems.
arXiv Detail & Related papers (2024-06-06T02:05:35Z)
- AdvGen: Physical Adversarial Attack on Face Presentation Attack Detection Systems [17.03646903905082]
Adversarial attacks, which attempt to digitally deceive the learning strategy of a recognition system, have gained traction.
This paper demonstrates the vulnerability of face authentication systems to adversarial images in physical world scenarios.
We propose AdvGen, an automated Generative Adversarial Network, to simulate print and replay attacks and generate adversarial images that can fool state-of-the-art presentation attack detection (PAD) systems.
arXiv Detail & Related papers (2023-11-20T13:28:42Z)
- State-of-the-art optical-based physical adversarial attacks for deep learning computer vision systems [3.3470481105928216]
Adversarial attacks can mislead deep learning models into making false predictions by implanting small perturbations into the original input that are imperceptible to the human eye.
Physical adversarial attacks are more realistic, as the perturbation is introduced to the input before it is captured and converted to a binary image.
This paper focuses on optical-based physical adversarial attack techniques for computer vision systems.
arXiv Detail & Related papers (2023-03-22T01:14:52Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- A Survey on Physical Adversarial Attack in Computer Vision [7.053905447737444]
Deep neural networks (DNNs) have been demonstrated to be vulnerable to adversarial examples crafted from tiny malicious noise.
With the increasing deployment of DNN-based systems in the real world, strengthening the robustness of these systems is an urgent need.
arXiv Detail & Related papers (2022-09-28T17:23:52Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon: shadows.
We extensively evaluate the effectiveness of this new attack in both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- Real-World Adversarial Examples involving Makeup Application [58.731070632586594]
We propose a physical adversarial attack with the use of full-face makeup.
Our attack can effectively overcome manual errors in makeup application, such as color and position-related errors.
arXiv Detail & Related papers (2021-09-04T05:29:28Z)
- DPA: Learning Robust Physical Adversarial Camouflages for Object Detectors [5.598600329573922]
We propose the Dense Proposals Attack (DPA) to learn robust, physical and targeted adversarial camouflages for detectors.
The camouflages are robust because they remain adversarial when filmed from arbitrary viewpoints and under different illumination conditions.
We build a virtual 3D scene using the Unity simulation engine to fairly and reproducibly evaluate different physical attacks.
arXiv Detail & Related papers (2021-09-01T00:18:17Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Robust Attacks on Deep Learning Face Recognition in the Physical World [48.909604306342544]
FaceAdv is a physical-world attack that crafts adversarial stickers to deceive FR systems.
It mainly consists of a sticker generator and a transformer, where the former can craft several stickers with different shapes.
We conduct extensive experiments to evaluate the effectiveness of FaceAdv in attacking three typical FR systems.
arXiv Detail & Related papers (2020-11-27T02:24:43Z)