Generalized Attacks on Face Verification Systems
- URL: http://arxiv.org/abs/2309.05879v1
- Date: Tue, 12 Sep 2023 00:00:24 GMT
- Title: Generalized Attacks on Face Verification Systems
- Authors: Ehsan Nazari, Paula Branco, Guy-Vincent Jourdan
- Abstract summary: Face verification (FV) using deep neural network models has made tremendous progress in recent years.
FV systems are vulnerable to Adversarial Attacks, which manipulate input images to deceive these systems in ways usually unnoticeable to humans.
We introduce the DodgePersonation Attack that formulates the creation of face images that impersonate a set of given identities.
- Score: 2.4259557752446637
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Face verification (FV) using deep neural network models has made tremendous
progress in recent years, surpassing human accuracy and seeing deployment in
various applications such as border control and smartphone unlocking. However,
FV systems are vulnerable to Adversarial Attacks, which manipulate input images
to deceive these systems in ways usually unnoticeable to humans. This paper
provides an in-depth study of attacks on FV systems. We introduce the
DodgePersonation Attack that formulates the creation of face images that
impersonate a set of given identities while avoiding being identified as any of
the identities in a separate, disjoint set. A taxonomy is proposed to provide a
unified view of different types of Adversarial Attacks against FV systems,
including Dodging Attacks, Impersonation Attacks, and Master Face Attacks.
Finally, we propose the "One Face to Rule Them All" Attack, which implements
the DodgePersonation Attack with state-of-the-art performance on a well-known
scenario (the Master Face Attack) and which can also be used for the new
scenarios introduced in this paper. While the state-of-the-art Master Face
Attack can produce a set of 9 images covering 43.82% of the identities in its
test database, with 9 images our attack covers 57.27% to 58.5% of these
identities while giving the attacker the choice of which identity to use to
create the impersonation. Moreover, the 9 generated attack images appear
identical to a casual observer.
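To make the DodgePersonation formulation concrete, below is a minimal sketch of its objective, assuming a cosine-similarity FV model with a fixed decision threshold. The embed stub, the threshold TAU, and the loss weighting lam are illustrative stand-ins, not the paper's actual implementation.

```python
# Minimal sketch of the DodgePersonation objective, assuming a
# cosine-similarity FV model with a fixed decision threshold.
# All names (embed, TAU, lam) are illustrative, not the paper's code.
import numpy as np

EMB_DIM = 512
IMG_SHAPE = (8, 8)  # toy image size
TAU = 0.4           # assumed verification threshold

# Fixed random projection standing in for a trained FV embedding network.
_PROJ = np.random.default_rng(42).standard_normal(
    (EMB_DIM, IMG_SHAPE[0] * IMG_SHAPE[1]))

def embed(image: np.ndarray) -> np.ndarray:
    """Map an image to an L2-normalized embedding vector."""
    z = _PROJ @ image.ravel()
    return z / np.linalg.norm(z)

def verifies(image: np.ndarray, reference_emb: np.ndarray) -> bool:
    """The FV system accepts iff cosine similarity exceeds the threshold."""
    return float(embed(image) @ reference_emb) > TAU

def dodgepersonation_loss(image, impersonate_embs, dodge_embs, lam=1.0):
    """Lower is better: pull the image's embedding toward every identity
    in the impersonation set, and push it below the threshold for every
    identity in the separate, disjoint dodge set (hinge penalty)."""
    e = embed(image)
    impersonate_term = sum(1.0 - float(e @ t) for t in impersonate_embs)
    dodge_term = sum(max(0.0, float(e @ d) - TAU) for d in dodge_embs)
    return impersonate_term + lam * dodge_term

# Toy usage: two identities to impersonate, one identity to avoid.
rng = np.random.default_rng(0)
targets = [embed(rng.standard_normal(IMG_SHAPE)) for _ in range(2)]
avoid = [embed(rng.standard_normal(IMG_SHAPE))]
x = rng.standard_normal(IMG_SHAPE)  # attack image to be optimized
print(dodgepersonation_loss(x, targets, avoid))
print([verifies(x, t) for t in targets], [verifies(x, a) for a in avoid])
```

An actual attack would minimize this loss over the attack image (or over the latent space of a face generator, to keep the result photorealistic); the stubbed embedding above only illustrates the shape of the objective, with Dodging, Impersonation, and Master Face Attacks recovered as special cases of the two sets.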
Related papers
- AdvGen: Physical Adversarial Attack on Face Presentation Attack Detection Systems [17.03646903905082] (2023-11-20)
  Adversarial attacks, which attempt to digitally deceive the learning strategy of a recognition system, have gained attention.
  This paper demonstrates the vulnerability of face authentication systems to adversarial images in physical-world scenarios.
  We propose AdvGen, an automated Generative Adversarial Network, to simulate print and replay attacks and generate adversarial images that can fool state-of-the-art PADs.
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906] (2023-10-18)
  Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
  Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
- Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501] (2023-05-22)
  We propose Attribute-Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
  Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
- Invisible Backdoor Attack with Dynamic Triggers against Person Re-identification [71.80885227961015] (2022-11-20)
  Person Re-identification (ReID) has rapidly progressed with wide real-world applications, but it also poses significant risks of adversarial attacks.
  We propose a novel backdoor attack on ReID under a new all-to-unknown scenario, called Dynamic Triggers Invisible Backdoor Attack (DT-IBA).
  We extensively validate the effectiveness and stealthiness of the proposed attack on benchmark datasets, and evaluate the effectiveness of several defense methods against our attack.
- Is Face Recognition Safe from Realizable Attacks? [1.7214499647717132] (2022-10-15)
  Face recognition is a popular form of biometric authentication, and due to its widespread use, attacks have become more common as well.
  Recent studies show that Face Recognition Systems are vulnerable to attacks that can lead to erroneous identification of faces.
  We propose an attack scheme where the attacker can generate realistic synthesized face images with subtle perturbations and physically realize them on their own face to attack black-box face recognition systems.
- RSTAM: An Effective Black-Box Impersonation Attack on Face Recognition using a Mobile and Compact Printer [10.245536402327096] (2022-06-25)
  We propose RSTAM, a new method for attacking face recognition models and systems.
  RSTAM enables an effective black-box impersonation attack using an adversarial mask printed by a mobile and compact printer.
  The performance of the attacks is also evaluated on state-of-the-art commercial face recognition systems: Face++, Baidu, Aliyun, Tencent, and Microsoft.
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535] (2022-04-26)
  We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
  Our method is built on a substitute model for face reconstruction, and it transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
- An Ensemble Model for Face Liveness Detection [2.322052136673525] (2022-01-19)
  We present a passive method to detect face presentation attacks using an ensemble deep learning technique.
  We propose an ensemble method where multiple features of the face and background regions are learned to predict whether the user is bona fide or an attacker.
- Vulnerability Analysis of Face Morphing Attacks from Landmarks and Generative Adversarial Networks [0.8602553195689513] (2020-12-09)
  This paper provides a new dataset with four different types of morphing attacks based on OpenCV, FaceMorpher, WebMorph, and a generative adversarial network (StyleGAN).
  We also conduct extensive experiments to assess the vulnerability of state-of-the-art face recognition systems, notably FaceNet, VGG-Face, and ArcFace.
- FaceGuard: A Self-Supervised Defense Against Adversarial Face Images [59.656264895721215] (2020-11-28)
  We propose a new self-supervised adversarial defense framework, namely FaceGuard, that can automatically detect, localize, and purify a wide variety of adversarial faces.
  During training, FaceGuard automatically synthesizes challenging and diverse adversarial attacks, enabling a classifier to learn to distinguish them from real faces.
  Experimental results on the LFW dataset show that FaceGuard can achieve 99.81% detection accuracy on six unseen adversarial attack types.
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117] (2020-03-15)
  We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
  TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.