Reconstructing A Large Scale 3D Face Dataset for Deep 3D Face
Identification
- URL: http://arxiv.org/abs/2010.08391v2
- Date: Sun, 12 Jun 2022 10:01:39 GMT
- Title: Reconstructing A Large Scale 3D Face Dataset for Deep 3D Face
Identification
- Authors: Cuican Yu, Zihui Zhang, Huibin Li
- Abstract summary: We propose a framework of 2D-aided deep 3D face identification.
In particular, we propose to reconstruct millions of 3D face scans from a large scale 2D face database.
Our proposed approach achieves state-of-the-art rank-1 scores on the FRGC v2.0, Bosphorus, and BU-3DFE 3D face databases.
- Score: 9.159921061636695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning methods have brought many breakthroughs to computer vision,
especially in 2D face recognition. However, the bottleneck of deep learning
based 3D face recognition is that it is difficult to collect millions of 3D
faces, whether for industry or academia. To address this, many methods generate
additional 3D faces from existing 3D faces through 3D face data augmentation
and use them to train deep 3D face recognition models.
However, to the best of our knowledge, there is no method to generate 3D faces
from 2D face images for training deep 3D face recognition models. This letter
focuses on the role of reconstructed 3D facial surfaces in 3D face
identification and proposes a framework of 2D-aided deep 3D face
identification. In particular, we propose to reconstruct millions of 3D face
scans from a large scale 2D face database (i.e., VGGFace2), using a deep
learning based 3D face reconstruction method (i.e., ExpNet). Then, we adopt a two-phase
training approach: In the first phase, we use millions of face images to
pre-train the deep convolutional neural network (DCNN), and in the second
phase, we use normal component images (NCI) of reconstructed 3D face scans to
train the DCNN. Extensive experimental results illustrate that the proposed
approach can greatly improve the rank-1 score of 3D face identification on the
FRGC v2.0, the Bosphorus, and the BU-3DFE 3D face databases, compared to the
model trained by 2D face images. Finally, our proposed approach achieves
state-of-the-art rank-1 scores on the FRGC v2.0 (97.6%), Bosphorus (98.4%), and
BU-3DFE (98.8%) databases. The experimental results show that the reconstructed
3D facial surfaces are useful and our 2D-aided deep 3D face identification
framework is meaningful, facing the scarcity of 3D faces.
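The second training phase feeds the DCNN with normal component images (NCI), i.e., three-channel images whose channels encode the x, y, and z components of the surface normals of a reconstructed 3D face scan. As a hypothetical illustration only (not the paper's code, which rasterizes the components into image pixels), per-vertex normal components for a triangle mesh could be computed and quantized like this:

```python
# Sketch: per-vertex normals from a triangle mesh, then each normal
# component mapped from [-1, 1] to an 8-bit channel value, as in a
# normal component image (NCI). Mesh layout is an assumption.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5
    return (v[0]/n, v[1]/n, v[2]/n) if n > 0 else (0.0, 0.0, 0.0)

def vertex_normals(vertices, triangles):
    """Area-weighted per-vertex normals accumulated from face normals."""
    acc = [[0.0, 0.0, 0.0] for _ in vertices]
    for i, j, k in triangles:
        fn = cross(sub(vertices[j], vertices[i]),
                   sub(vertices[k], vertices[i]))
        for idx in (i, j, k):
            for c in range(3):
                acc[idx][c] += fn[c]
    return [normalize(a) for a in acc]

def nci_channels(normals):
    """Map each normal component from [-1, 1] to a [0, 255] channel."""
    return [tuple(int(round((c + 1.0) / 2.0 * 255)) for c in n)
            for n in normals]
```

For a face scan, the x, y, and z channels produced this way form three gray-scale maps that together make one NCI training sample for the second phase.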
Related papers
- FaceGPT: Self-supervised Learning to Chat about 3D Human Faces [69.4651241319356]
We introduce FaceGPT, a self-supervised learning framework for Large Vision-Language Models (VLMs) to reason about 3D human faces from images and text.
FaceGPT overcomes this limitation by embedding the parameters of a 3D morphable face model (3DMM) into the token space of a VLM.
We show that FaceGPT achieves high-quality 3D face reconstructions and retains the ability for general-purpose visual instruction following.
arXiv Detail & Related papers (2024-06-11T11:13:29Z)
- Fake It Without Making It: Conditioned Face Generation for Accurate 3D Face Reconstruction [5.079602839359523]
We present a method to generate a large-scale synthesised dataset of 250K photorealistic images and their corresponding shape parameters and depth maps, which we call SynthFace.
Our synthesis method conditions Stable Diffusion on depth maps sampled from the FLAME 3D Morphable Model (3DMM) of the human face, allowing us to generate a diverse set of shape-consistent facial images that is designed to be balanced in race and gender.
We propose ControlFace, a deep neural network, trained on SynthFace, which achieves competitive performance on the NoW benchmark, without requiring 3D supervision or manual 3D asset creation.
arXiv Detail & Related papers (2023-07-25T16:42:06Z)
- Generating 2D and 3D Master Faces for Dictionary Attacks with a Network-Assisted Latent Space Evolution [68.8204255655161]
A master face is a face image that passes face-based identity authentication for a high percentage of the population.
We optimize these faces for 2D and 3D face verification models.
In 3D, we generate faces using the 2D StyleGAN2 generator and predict a 3D structure using a deep 3D face reconstruction network.
arXiv Detail & Related papers (2022-11-25T09:15:38Z)
- 3D Face Parsing via Surface Parameterization and 2D Semantic Segmentation Network [7.483526784933532]
Face parsing assigns pixel-wise semantic labels as the face representation for computers.
Recent works introduced different methods for 3D surface segmentation, while the performance is still limited.
We propose a method based on the "3D-2D-3D" strategy to accomplish 3D face parsing.
arXiv Detail & Related papers (2022-06-18T15:21:24Z)
- Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D priori.
Previous reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z)
- Gait Recognition in the Wild with Dense 3D Representations and A Benchmark [86.68648536257588]
Existing studies for gait recognition are dominated by 2D representations like the silhouette or skeleton of the human body in constrained scenes.
This paper aims to explore dense 3D representations for gait recognition in the wild.
We build the first large-scale 3D representation-based gait recognition dataset, named Gait3D.
arXiv Detail & Related papers (2022-04-06T03:54:06Z)
- Generating Dataset For Large-scale 3D Facial Emotion Recognition [7.310043452300736]
We propose a method for generating a large dataset of 3D faces with labeled emotions.
We also develop a deep convolutional neural network for 3D FER trained on 624,000 3D facial scans.
arXiv Detail & Related papers (2021-09-16T15:12:41Z)
- 3D-to-2D Distillation for Indoor Scene Parsing [78.36781565047656]
We present a new approach that enables us to leverage 3D features extracted from large-scale 3D data repository to enhance 2D features extracted from RGB images.
First, we distill 3D knowledge from a pretrained 3D network to supervise a 2D network to learn simulated 3D features from 2D features during the training.
Second, we design a two-stage dimension normalization scheme to calibrate the 2D and 3D features for better integration.
Third, we design a semantic-aware adversarial training model to extend our framework for training with unpaired 3D data.
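The distillation step in this pipeline amounts to a frozen "3D teacher" feature supervising a "2D student" feature through an L2 loss after feature calibration. The following is a hypothetical, much-simplified sketch (plain feature vectors stand in for network activations, and a per-vector z-score normalization stands in for the paper's two-stage dimension normalization):

```python
# Sketch: simplified 3D-to-2D distillation loss. The 3D teacher features
# are fixed; the 2D student is trained to match them after normalization.
# Names and the normalization scheme here are illustrative assumptions.

def normalize_feat(f):
    """Zero-mean, unit-variance normalization of one feature vector."""
    mean = sum(f) / len(f)
    var = sum((x - mean) ** 2 for x in f) / len(f)
    std = var ** 0.5 or 1.0  # guard against constant features
    return [(x - mean) / std for x in f]

def distill_loss(student_2d, teacher_3d):
    """Mean squared error between normalized student/teacher features."""
    s = normalize_feat(student_2d)
    t = normalize_feat(teacher_3d)
    return sum((a - b) ** 2 for a, b in zip(s, t)) / len(s)
```

One consequence of normalizing both sides is that the loss is invariant to affine rescaling of either feature vector, so the student only needs to match the teacher's feature pattern, not its absolute scale.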
arXiv Detail & Related papers (2021-04-06T02:22:24Z)
- Multi-channel Deep 3D Face Recognition [4.726009758066045]
The accuracy of 2D face recognition is still challenged by changes in pose, illumination, make-up, and expression.
We propose a multi-channel deep 3D face network for face recognition based on 3D face data.
The face recognition accuracy of the multi-channel deep 3D face network reaches 98.6%.
arXiv Detail & Related papers (2020-09-30T15:29:05Z)
- Differential 3D Facial Recognition: Adding 3D to Your State-of-the-Art 2D Method [90.26041504667451]
We show that it is possible to adopt active illumination to enhance state-of-the-art 2D face recognition approaches with 3D features.
The proposed ideas can significantly boost face recognition performance and dramatically improve the robustness to spoofing attacks.
arXiv Detail & Related papers (2020-04-03T20:17:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.