On the (Limited) Generalization of MasterFace Attacks and Its Relation
to the Capacity of Face Representations
- URL: http://arxiv.org/abs/2203.12387v1
- Date: Wed, 23 Mar 2022 13:02:41 GMT
- Authors: Philipp Terhörst, Florian Bierbaum, Marco Huber, Naser Damer,
Florian Kirchbuchner, Kiran Raja, Arjan Kuijper
- Abstract summary: We study the generalizability of MasterFace attacks in empirical and theoretical investigations.
We estimate the face capacity and the maximum MasterFace coverage under the assumption that identities in the face space are well separated.
We conclude that MasterFaces should not be seen as a threat to face recognition systems but as a tool to enhance the robustness of face recognition models.
- Score: 11.924504853735645
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A MasterFace is a face image that can successfully match against a large
portion of the population. Since their generation does not require access to
the information of the enrolled subjects, MasterFace attacks represent a
potential security risk for widely-used face recognition systems. Previous
works proposed methods for generating such images and demonstrated that these
attacks can strongly compromise face recognition. However, previous works
relied on evaluation settings involving older recognition models, limited
cross-dataset and cross-model evaluations, and small-scale testing
data. This makes it hard to assess how well these attacks generalize. In
this work, we comprehensively analyse the generalizability of MasterFace
attacks in empirical and theoretical investigations. The empirical
investigations include six state-of-the-art face recognition (FR) models,
cross-dataset and cross-model evaluation protocols, and testing datasets of
significantly larger size and higher variance. The results indicate low
generalizability when MasterFaces are trained on a different FR
model than the one used for testing. In these cases, the attack performance is
similar to zero-effort imposter attacks. In the theoretical investigations, we
define and estimate the face capacity and the maximum MasterFace coverage under
the assumption that identities in the face space are well separated. The
current trend of increasing the fairness and generalizability in face
recognition indicates that the vulnerability of future systems might further
decrease. We conclude that MasterFaces should not be seen as a threat to face
recognition systems but rather as a tool to understand and enhance the
robustness of face recognition models.
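The two theoretical quantities in the abstract can be illustrated with a toy sketch. This is not the paper's exact derivation: it assumes embeddings live on a unit hypersphere, matching uses cosine similarity, and it estimates the spherical-cap volume around an identity by Monte Carlo. The function names (`masterface_coverage`, `cap_fraction`, `capacity_upper_bound`) are illustrative, not from the paper.

```python
import math
import random

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def masterface_coverage(candidate, gallery, threshold):
    """Fraction of enrolled embeddings that a candidate MasterFace
    matches at the given verification threshold."""
    hits = sum(1 for g in gallery
               if cosine_similarity(candidate, g) >= threshold)
    return hits / len(gallery)

def cap_fraction(d, threshold, n_samples=200_000, seed=0):
    """Monte-Carlo estimate of the fraction of the unit sphere in d
    dimensions lying in a spherical cap of cosine similarity >=
    threshold around a fixed direction (by symmetry, the first axis)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        v = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(x * x for x in v))
        if v[0] / norm >= threshold:
            hits += 1
    return hits / n_samples

def capacity_upper_bound(d, threshold):
    """Packing-style bound: if well-separated identities occupy
    disjoint caps, at most 1 / cap_fraction identities fit in the
    face space."""
    f = cap_fraction(d, threshold)
    return math.inf if f == 0.0 else 1.0 / f
```

For d = 3 the marginal of one coordinate on the sphere is uniform on [-1, 1], so `cap_fraction(3, 0.5)` is close to 0.25 and the bound is about 4 identities. For realistic embedding dimensions (e.g. 512) and verification thresholds, the cap fraction becomes astronomically small and the capacity enormous, which is consistent with the paper's argument that well-separated identities leave little room for one MasterFace to cover a large portion of the population.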
Related papers
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z) - Real Face Foundation Representation Learning for Generalized Deepfake
Detection [74.4691295738097]
The emergence of deepfake technologies has become a matter of social concern as they pose threats to individual privacy and public security.
It is almost impossible to collect sufficient representative fake faces, and it is hard for existing detectors to generalize to all types of manipulation.
We propose Real Face Foundation Representation Learning (RFFR), which aims to learn a general representation from large-scale real face datasets.
arXiv Detail & Related papers (2023-03-15T08:27:56Z) - RAF: Recursive Adversarial Attacks on Face Recognition Using Extremely
Limited Queries [2.8532545355403123]
Recent successful adversarial attacks on face recognition show that, despite the remarkable progress of face recognition models, they still fall far behind human intelligence in perception and recognition.
In this paper, we propose automatic face warping, which needs an extremely limited number of queries to fool the target model.
We evaluate the robustness of the proposed method in the decision-based black-box attack setting.
arXiv Detail & Related papers (2022-07-04T00:22:45Z) - FaceMAE: Privacy-Preserving Face Recognition via Masked Autoencoders [81.21440457805932]
We propose a novel framework, FaceMAE, in which face privacy and recognition performance are considered simultaneously.
Randomly masked face images are used to train the reconstruction module in FaceMAE.
We also evaluate privacy-preserving face recognition on several public face datasets.
arXiv Detail & Related papers (2022-05-23T07:19:42Z) - Master Face Attacks on Face Recognition Systems [45.090037010778765]
Face authentication is now widely used, especially on mobile devices, rather than authentication using a personal identification number or an unlock pattern.
Previous work has proven the existence of master faces that match multiple enrolled templates in face recognition systems.
In this paper, we perform an extensive study on latent variable evolution (LVE), a method commonly used to generate master faces.
arXiv Detail & Related papers (2021-09-08T02:11:35Z) - MagFace: A Universal Representation for Face Recognition and Quality
Assessment [6.7044749347155035]
This paper proposes MagFace, a category of losses that learn a universal feature embedding whose magnitude can measure the quality of the given face.
Under the new loss, it can be proven that the magnitude of the feature embedding monotonically increases if the subject is more likely to be recognized.
In addition, MagFace introduces an adaptive mechanism to learn well-structured within-class features by pulling easy samples to class centers while pushing hard samples away.
arXiv Detail & Related papers (2021-03-11T11:58:21Z) - Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z) - Generating Master Faces for Use in Performing Wolf Attacks on Face
Recognition Systems [40.59670229362299]
Face authentication has become increasingly mainstream and is now a prime target for attackers.
Previous research has shown that finger-vein- and fingerprint-based authentication methods are susceptible to wolf attacks.
We generated high-quality master faces by using the state-of-the-art face generator StyleGAN.
arXiv Detail & Related papers (2020-06-15T12:59:49Z) - On the Robustness of Face Recognition Algorithms Against Attacks and
Bias [78.68458616687634]
Face recognition algorithms have demonstrated very high recognition performance, suggesting suitability for real world applications.
Despite the enhanced accuracies, robustness of these algorithms against attacks and bias has been challenged.
This paper summarizes different ways in which the robustness of a face recognition algorithm is challenged.
arXiv Detail & Related papers (2020-02-07T18:21:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.