Generating Master Faces for Use in Performing Wolf Attacks on Face
Recognition Systems
- URL: http://arxiv.org/abs/2006.08376v1
- Date: Mon, 15 Jun 2020 12:59:49 GMT
- Title: Generating Master Faces for Use in Performing Wolf Attacks on Face
Recognition Systems
- Authors: Huy H. Nguyen, Junichi Yamagishi, Isao Echizen, Sébastien Marcel
- Abstract summary: Face authentication has become increasingly mainstream and is now a prime target for attackers.
Previous research has shown that finger-vein- and fingerprint-based authentication methods are susceptible to wolf attacks.
We generated high-quality master faces by using the state-of-the-art face generator StyleGAN.
- Score: 40.59670229362299
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to its convenience, biometric authentication, especially face
authentication, has become increasingly mainstream and thus is now a prime
target for attackers. Presentation attacks and face morphing are typical types
of attack. Previous research has shown that finger-vein- and fingerprint-based
authentication methods are susceptible to wolf attacks, in which a wolf sample
matches many enrolled user templates. In this work, we demonstrated that wolf
(generic) faces, which we call "master faces," can also compromise face
recognition systems and that the master face concept can be generalized in some
cases. Motivated by recent similar work in the fingerprint domain, we generated
high-quality master faces by using the state-of-the-art face generator StyleGAN
in a process called latent variable evolution. Experiments demonstrated that
even attackers with limited resources using only pre-trained models available
on the Internet can initiate master face attacks. The results, in addition to
demonstrating performance from the attacker's point of view, can also be used
to clarify and improve the performance of face recognition systems and harden
face authentication systems.
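The latent variable evolution loop described above can be sketched in a few lines. In the sketch below, the generator, embedding network, enrolled gallery, matching threshold, and evolution-strategy hyperparameters are all stand-in assumptions (random stubs), not the authors' implementation; the point is only the shape of the algorithm: evolve generator latents so that one synthesized face matches as many enrolled templates as possible.

```python
# Minimal sketch of latent variable evolution (LVE), assuming a StyleGAN-like
# generator and a face matcher; both are hypothetical stubs, not the authors'
# implementation.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 512                 # StyleGAN z-space size
POP, ELITE, GENS = 32, 8, 50     # assumed ES hyperparameters
THRESHOLD = 0.05                 # toy threshold for the random stubs below;
                                 # real matchers use calibrated thresholds

def generate_face(z):
    # Stand-in for a pre-trained StyleGAN generator G(z) -> face image.
    return z

def embed(x):
    # Stand-in for a face-recognition embedding network (unit-norm output).
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Hypothetical gallery of enrolled user templates (unit embeddings).
gallery = embed(rng.normal(size=(1000, LATENT_DIM)))

def coverage(z):
    # Fitness: fraction of enrolled users this candidate face would match.
    sims = gallery @ embed(generate_face(z))  # cosine similarities
    return float(np.mean(sims > THRESHOLD))

pop = rng.normal(size=(POP, LATENT_DIM))
for _ in range(GENS):
    scores = np.array([coverage(z) for z in pop])
    elites = pop[np.argsort(scores)[-ELITE:]]        # keep the best latents
    parents = elites[rng.integers(ELITE, size=POP)]  # resample from elites
    pop = parents + 0.1 * rng.normal(size=(POP, LATENT_DIM))  # Gaussian mutation

best = max(pop, key=coverage)
print(f"best coverage: {coverage(best):.1%}")
```

In the paper, the fitness signal comes from a real face matcher's scores, and keeping the search inside StyleGAN's latent space is what ensures every candidate remains a high-quality, plausible face.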
Related papers
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies produce vivid fake faces, raising public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Digital and Physical Face Attacks: Reviewing and One Step Further [31.780516471483985]
Face presentation attacks (FPA) have raised pressing concerns about the trustworthiness of face recognition.
Besides physical face attacks, face videos/images are vulnerable to a wide variety of digital attack techniques launched by malicious hackers.
This survey aims to build the integrity of face forensics by providing thorough analyses of existing literature and highlighting the issues requiring further attention.
arXiv Detail & Related papers (2022-09-29T11:25:52Z)
- RSTAM: An Effective Black-Box Impersonation Attack on Face Recognition using a Mobile and Compact Printer [10.245536402327096]
We propose RSTAM, a new method for attacking face recognition models and systems.
RSTAM enables an effective black-box impersonation attack using an adversarial mask printed by a mobile and compact printer.
The performance of the attacks is also evaluated on state-of-the-art commercial face recognition systems: Face++, Baidu, Aliyun, Tencent, and Microsoft.
arXiv Detail & Related papers (2022-06-25T08:16:55Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- On the (Limited) Generalization of MasterFace Attacks and Its Relation to the Capacity of Face Representations [11.924504853735645]
We study the generalizability of MasterFace attacks in empirical and theoretical investigations.
We estimate the face capacity and the maximum MasterFace coverage under the assumption that identities in the face space are well separated.
We conclude that MasterFaces should not be seen as a threat to face recognition systems but as a tool to enhance the robustness of face recognition models.
arXiv Detail & Related papers (2022-03-23T13:02:41Z)
- Master Face Attacks on Face Recognition Systems [45.090037010778765]
Face authentication is now widely used, especially on mobile devices, in place of authentication with a personal identification number or an unlock pattern.
Previous work has proven the existence of master faces that match multiple enrolled templates in face recognition systems.
In this paper, we perform an extensive study on latent variable evolution (LVE), a method commonly used to generate master faces.
arXiv Detail & Related papers (2021-09-08T02:11:35Z)
- Generating Master Faces for Dictionary Attacks with a Network-Assisted Latent Space Evolution [68.8204255655161]
A master face is a face image that passes face-based identity-authentication for a large portion of the population.
We optimize these faces by using an evolutionary algorithm in the latent embedding space of the StyleGAN face generator; a coverage-evaluation sketch follows this list.
arXiv Detail & Related papers (2021-08-01T12:55:23Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
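Several entries above (notably the MasterFace generalization study and the dictionary-attack paper) quantify a master face by its coverage: the fraction of enrolled users it matches at an operating threshold calibrated to a fixed false match rate (FMR). A minimal sketch of that evaluation, using synthesized score distributions rather than data from any of the papers listed:

```python
# Sketch of master-face "coverage" evaluation at a threshold calibrated to a
# target false match rate (FMR). Both score distributions are synthesized
# stand-ins, not data from any of the papers listed above.
import numpy as np

rng = np.random.default_rng(1)
impostor_scores = rng.normal(0.0, 0.1, size=100_000)  # unrelated-pair similarities
master_scores = rng.normal(0.2, 0.1, size=1_000)      # master face vs. each enrollee

def threshold_at_fmr(impostor, fmr=1e-3):
    # Smallest score that impostor pairs exceed with probability <= fmr.
    return np.quantile(impostor, 1.0 - fmr)

tau = threshold_at_fmr(impostor_scores, fmr=1e-3)
coverage = float(np.mean(master_scores > tau))  # fraction of users "unlocked"
print(f"threshold={tau:.3f}, coverage={coverage:.1%}")
```

At an FMR of 0.1%, a random impostor crosses the threshold only once in a thousand attempts, so any substantially higher coverage signals a genuine master-face effect rather than chance.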
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.