AdvGen: Physical Adversarial Attack on Face Presentation Attack
Detection Systems
- URL: http://arxiv.org/abs/2311.11753v1
- Date: Mon, 20 Nov 2023 13:28:42 GMT
- Title: AdvGen: Physical Adversarial Attack on Face Presentation Attack
Detection Systems
- Authors: Sai Amrit Patnaik, Shivali Chansoriya, Anil K. Jain, Anoop M.
Namboodiri
- Abstract summary: Adversarial attacks, which attempt to digitally deceive the learning strategy of a recognition system, have gained traction.
This paper demonstrates the vulnerability of face authentication systems to adversarial images in physical world scenarios.
We propose AdvGen, an automated Generative Adversarial Network, to simulate print and replay attacks and generate adversarial images that can fool state-of-the-art PADs.
- Score: 17.03646903905082
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evaluating the risk level of adversarial images is essential for safely
deploying face authentication models in the real world. Popular approaches for
physical-world attacks, such as print or replay attacks, suffer from
limitations such as introducing physical and geometric artifacts. Recently,
adversarial attacks, which attempt to digitally deceive the learning strategy
of a recognition system through slight modifications to the captured image,
have gained traction. While most previous research assumes that the adversarial
image can be fed digitally into the authentication system, this is not always
the case for systems deployed in the real world. This paper demonstrates the
vulnerability of face authentication systems to adversarial images in
physical-world scenarios. We propose AdvGen, an automated Generative
Adversarial Network, to simulate print and replay attacks and generate
adversarial images that can fool state-of-the-art PADs in a physical domain
attack setting. With this attack strategy, the attack success rate reaches
82.01%. We test AdvGen extensively on four datasets and ten state-of-the-art
PADs. We also demonstrate the effectiveness of our attack by conducting
experiments in a realistic, physical environment.
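To make the core idea concrete, the sketch below illustrates one common way such physically realizable perturbations can be optimized: an expectation-over-transformation style loop in which crude, differentiable stand-ins for print/replay distortions (color shifts, blur, capture noise) are applied before querying the presentation attack detector. This is only a minimal illustration under assumed names (`pad_model`, `simulate_print_replay`, bona-fide label 0); the paper's actual AdvGen pipeline trains a GAN and is not reproduced here.

```python
# Minimal sketch only: NOT the authors' AdvGen implementation (which trains a
# GAN). It illustrates expectation-over-transformation (EOT) optimization of a
# bounded perturbation against a presentation attack detector (PAD), with
# print/replay distortions approximated by simple differentiable transforms.
# All names (pad_model, simulate_print_replay, label conventions) are assumed.
import torch
import torch.nn.functional as F

def simulate_print_replay(img: torch.Tensor) -> torch.Tensor:
    """Crude, differentiable stand-in for print/replay distortions:
    per-channel color cast, mild blur, and capture noise."""
    b, c, h, w = img.shape
    gain = 1.0 + 0.1 * torch.randn(b, c, 1, 1, device=img.device)   # color gain
    offset = 0.05 * torch.randn(b, c, 1, 1, device=img.device)      # color offset
    out = img * gain + offset
    out = F.avg_pool2d(out, kernel_size=3, stride=1, padding=1)     # mild blur
    out = out + 0.02 * torch.randn_like(out)                        # sensor noise
    return out.clamp(0.0, 1.0)

def craft_adversarial_face(pad_model: torch.nn.Module,
                           face: torch.Tensor,
                           eps: float = 8.0 / 255,
                           steps: int = 100,
                           lr: float = 1.0 / 255) -> torch.Tensor:
    """Optimize a bounded perturbation so the PAD scores the simulated
    recaptured image as bona fide (label 0 assumed here), averaged over
    several sampled print/replay transformations."""
    delta = torch.zeros_like(face, requires_grad=True)
    bona_fide = torch.zeros(face.shape[0], dtype=torch.long, device=face.device)
    for _ in range(steps):
        adv = (face + delta).clamp(0.0, 1.0)
        loss = 0.0
        for _ in range(4):  # expectation over sampled physical transformations
            logits = pad_model(simulate_print_replay(adv))
            loss = loss + F.cross_entropy(logits, bona_fide)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # signed gradient descent step
            delta.clamp_(-eps, eps)           # keep perturbation imperceptible
            delta.grad.zero_()
    return (face + delta).detach().clamp(0.0, 1.0)
```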
Related papers
- Principles of Designing Robust Remote Face Anti-Spoofing Systems [60.05766968805833]
This paper sheds light on the vulnerabilities of state-of-the-art face anti-spoofing methods against digital attacks.
It presents a comprehensive taxonomy of common threats encountered in face anti-spoofing systems.
arXiv Detail & Related papers (2024-06-06T02:05:35Z) - Generalized Attacks on Face Verification Systems [2.4259557752446637]
Face verification (FV) using deep neural network models has made tremendous progress in recent years.
FV systems are vulnerable to Adversarial Attacks, which manipulate input images to deceive these systems in ways usually unnoticeable to humans.
We introduce the DodgePersonation Attack that formulates the creation of face images that impersonate a set of given identities.
arXiv Detail & Related papers (2023-09-12T00:00:24Z) - Why Don't You Clean Your Glasses? Perception Attacks with Dynamic
Optical Perturbations [17.761200546223442]
Adapting adversarial attacks to the physical world is desirable for the attacker, as this removes the need to compromise digital systems.
We present EvilEye, a man-in-the-middle perception attack that leverages transparent displays to generate dynamic physical adversarial examples.
arXiv Detail & Related papers (2023-07-24T21:16:38Z) - Evaluating Adversarial Robustness on Document Image Classification [0.0]
We apply the adversarial attack philosophy to document and natural image data and aim to protect models against such attacks.
We focus our work on untargeted gradient-based, transfer-based and score-based attacks and evaluate the impact of adversarial training.
arXiv Detail & Related papers (2023-04-24T22:57:59Z) - Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z) - Empirical Evaluation of Physical Adversarial Patch Attacks Against
Overhead Object Detection Models [2.2588953434934416]
Adversarial patches are images designed to fool otherwise well-performing neural network-based computer vision models.
Recent work has demonstrated that these attacks can successfully transfer to the physical world.
We further test the efficacy of adversarial patch attacks in the physical world under more challenging conditions.
arXiv Detail & Related papers (2022-06-25T20:05:11Z) - Shadows can be Dangerous: Stealthy and Effective Physical-world
Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z) - Evaluating the Robustness of Semantic Segmentation for Autonomous
Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular semantic segmentation (SS) models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z) - Robust Attacks on Deep Learning Face Recognition in the Physical World [48.909604306342544]
FaceAdv is a physical-world attack that crafts adversarial stickers to deceive FR systems.
It mainly consists of a sticker generator and a transformer, where the former can craft several stickers with different shapes.
We conduct extensive experiments to evaluate the effectiveness of FaceAdv on attacking 3 typical FR systems.
arXiv Detail & Related papers (2020-11-27T02:24:43Z) - MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network, termed MixNet, to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z) - Adversarial Light Projection Attacks on Face Recognition Systems: A
Feasibility Study [21.42041262836322]
We investigate the feasibility of conducting real-time physical attacks on face recognition systems using adversarial light projections.
The adversary generates a digital adversarial pattern using one or more images of the target available to the adversary.
The digital adversarial pattern is then projected onto the adversary's face in the physical domain to either impersonate a target (impersonation) or evade recognition (obfuscation).
arXiv Detail & Related papers (2020-03-24T23:06:25Z)