Generating 2D and 3D Master Faces for Dictionary Attacks with a
Network-Assisted Latent Space Evolution
- URL: http://arxiv.org/abs/2211.13964v2
- Date: Mon, 28 Nov 2022 06:07:08 GMT
- Title: Generating 2D and 3D Master Faces for Dictionary Attacks with a
Network-Assisted Latent Space Evolution
- Authors: Tomer Friedlander, Ron Shmelkin, Lior Wolf
- Abstract summary: A master face is a face image that passes face-based identity authentication for a high percentage of the population.
We optimize these faces for 2D and 3D face verification models.
In 3D, we generate faces using the 2D StyleGAN2 generator and predict a 3D structure using a deep 3D face reconstruction network.
- Score: 68.8204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A master face is a face image that passes face-based identity authentication
for a high percentage of the population. These faces can be used to
impersonate, with a high probability of success, any user, without having
access to any user information. We optimize these faces for 2D and 3D face
verification models, by using an evolutionary algorithm in the latent embedding
space of the StyleGAN face generator. For 2D face verification, multiple
evolutionary strategies are compared, and we propose a novel approach that
employs a neural network to direct the search toward promising samples, without
adding fitness evaluations. The results we present demonstrate that it is
possible to obtain a considerable coverage of the identities in the LFW or RFW
datasets with less than 10 master faces, for six leading deep face recognition
systems. In 3D, we generate faces using the 2D StyleGAN2 generator and predict
a 3D structure using a deep 3D face reconstruction network. When employing two
different 3D face recognition systems, we are able to obtain a coverage of
40%-50%. Additionally, we present the generation of paired 2D RGB and 3D master
faces, which simultaneously match 2D and 3D models with high impersonation
rates.
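The network-assisted latent-space evolution described above can be sketched in miniature. The following is a toy illustration only, assuming a simple elitist (mu + lambda) evolution strategy, a synthetic fitness function standing in for "StyleGAN generator + face-verification coverage", and a cosine-similarity ranking standing in for the paper's success-predictor network; all names, dimensions, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent dimensionality; StyleGAN's latent space is 512-D, but a small
# space keeps this sketch fast. Purely illustrative.
DIM = 32

def coverage_fitness(z):
    # Stand-in for the real objective (fraction of enrolled identities the
    # generated face matches): a smooth toy function maximized at z = 0.5.
    return -float(np.sum((z - 0.5) ** 2))

def surrogate_score(z, archive):
    # Stand-in for the success-predictor network: rank a candidate by cosine
    # similarity to previously successful latents, so promising offspring are
    # evaluated first without spending extra fitness evaluations.
    if not archive:
        return 0.0
    past = np.stack(archive)
    sims = past @ z / (np.linalg.norm(past, axis=1) * np.linalg.norm(z) + 1e-9)
    return float(np.max(sims))

def evolve(generations=30, mu=8, lam=32, sigma=0.1):
    parents = 0.1 * rng.standard_normal((mu, DIM))
    archive = []
    for _ in range(generations):
        # Propose more offspring than the fitness budget allows...
        offspring = np.repeat(parents, lam // mu, axis=0)
        offspring = offspring + sigma * rng.standard_normal(offspring.shape)
        # ...then let the surrogate pre-filter them, keeping the number of
        # true fitness evaluations per generation fixed.
        ranked = sorted(offspring, key=lambda z: surrogate_score(z, archive),
                        reverse=True)
        evaluated = ranked[: lam // 2]
        # Elitist (mu + lambda) selection over parents and evaluated offspring.
        pool = list(parents) + evaluated
        pool.sort(key=coverage_fitness, reverse=True)
        parents = np.stack(pool[:mu])
        archive.extend(pool[:2])
    return parents[0]

best = evolve()
```

The key design point mirrored here is that the helper network never replaces the true fitness function; it only reorders candidates so the fixed evaluation budget is spent on the most promising ones.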
Related papers
- ID-to-3D: Expressive ID-guided 3D Heads via Score Distillation Sampling [96.87575334960258]
ID-to-3D is a method to generate identity- and text-guided 3D human heads with disentangled expressions.
Results achieve an unprecedented level of identity-consistent and high-quality texture and geometry generation.
arXiv Detail & Related papers (2024-05-26T13:36:45Z)
- Controllable 3D Face Generation with Conditional Style Code Diffusion [51.24656496304069]
TEx-Face (TExt & Expression-to-Face) addresses these challenges by dividing the task into three components, i.e., 3D GAN Inversion, Conditional Style Code Diffusion, and 3D Face Decoding.
Experiments conducted on FFHQ, CelebA-HQ, and CelebA-Dialog demonstrate the promising performance of our TEx-Face.
arXiv Detail & Related papers (2023-12-21T15:32:49Z)
- Fake It Without Making It: Conditioned Face Generation for Accurate 3D Face Reconstruction [5.079602839359523]
We present a method to generate a large-scale synthesised dataset of 250K photorealistic images and their corresponding shape parameters and depth maps, which we call SynthFace.
Our synthesis method conditions Stable Diffusion on depth maps sampled from the FLAME 3D Morphable Model (3DMM) of the human face, allowing us to generate a diverse set of shape-consistent facial images that is designed to be balanced in race and gender.
We propose ControlFace, a deep neural network, trained on SynthFace, which achieves competitive performance on the NoW benchmark, without requiring 3D supervision or manual 3D asset creation.
arXiv Detail & Related papers (2023-07-25T16:42:06Z)
- LPFF: A Portrait Dataset for Face Generators Across Large Poses [38.03149794607065]
We present LPFF, a large-pose Flickr face dataset comprised of 19,590 high-quality real large-pose portrait images.
We utilize our dataset to train a 2D face generator that can process large-pose face images, as well as a 3D-aware generator that can generate realistic human face geometry.
arXiv Detail & Related papers (2023-03-25T09:07:36Z)
- Generating Master Faces for Dictionary Attacks with a Network-Assisted Latent Space Evolution [68.8204255655161]
A master face is a face image that passes face-based identity authentication for a large portion of the population.
We optimize these faces, by using an evolutionary algorithm in the latent embedding space of the StyleGAN face generator.
arXiv Detail & Related papers (2021-08-01T12:55:23Z)
- HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping [116.1022638063613]
We propose HifiFace, which can preserve the face shape of the source face and generate photo-realistic results.
We introduce the Semantic Facial Fusion module to optimize the combination of encoder and decoder features.
arXiv Detail & Related papers (2021-06-18T07:39:09Z)
- Reconstructing A Large Scale 3D Face Dataset for Deep 3D Face Identification [9.159921061636695]
We propose a framework of 2D-aided deep 3D face identification.
In particular, we propose to reconstruct millions of 3D face scans from a large scale 2D face database.
Our proposed approach achieves state-of-the-art rank-1 scores on the FRGC v2.0, Bosphorus, and BU-3DFE 3D face databases.
arXiv Detail & Related papers (2020-10-16T13:48:38Z)
- Multi-channel Deep 3D Face Recognition [4.726009758066045]
The accuracy of 2D face recognition is still challenged by changes in pose, illumination, make-up, and expression.
We propose a multi-channel deep 3D face network for face recognition based on 3D face data.
The face recognition accuracy of the multi-channel deep 3D face network reaches 98.6%.
arXiv Detail & Related papers (2020-09-30T15:29:05Z)
- Differential 3D Facial Recognition: Adding 3D to Your State-of-the-Art 2D Method [90.26041504667451]
We show that it is possible to adopt active illumination to enhance state-of-the-art 2D face recognition approaches with 3D features.
The proposed ideas can significantly boost face recognition performance and dramatically improve the robustness to spoofing attacks.
arXiv Detail & Related papers (2020-04-03T20:17:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.