FaceScape: 3D Facial Dataset and Benchmark for Single-View 3D Face
Reconstruction
- URL: http://arxiv.org/abs/2111.01082v2
- Date: Fri, 15 Sep 2023 20:00:07 GMT
- Authors: Hao Zhu, Haotian Yang, Longwei Guo, Yidi Zhang, Yanru Wang, Mingkai
Huang, Menghua Wu, Qiu Shen, Ruigang Yang, Xun Cao
- Abstract summary: We present a large-scale detailed 3D face dataset, FaceScape, and the corresponding benchmark to evaluate single-view facial 3D reconstruction.
By training on FaceScape data, a novel algorithm is proposed to predict elaborate riggable 3D face models from a single image input.
We also use FaceScape data to generate in-the-wild and in-the-lab benchmarks to evaluate recent single-view face reconstruction methods.
- Score: 29.920622006999732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a large-scale detailed 3D face dataset, FaceScape,
and the corresponding benchmark to evaluate single-view facial 3D
reconstruction. By training on FaceScape data, a novel algorithm is proposed to
predict elaborate riggable 3D face models from a single image input. FaceScape
dataset releases 16,940 textured 3D faces, captured from 847 subjects, each
performing 20 specific expressions. The 3D models contain the pore-level
facial geometry that is also processed to be topologically uniform. These fine
3D facial models can be represented as a 3D morphable model for coarse shapes
and displacement maps for detailed geometry. Taking advantage of the
large-scale and high-accuracy dataset, a novel algorithm is further proposed to
learn the expression-specific dynamic details using a deep neural network. The
learned relationship serves as the foundation of our 3D face prediction system
from a single image input. Different from most previous methods, our predicted
3D models are riggable with highly detailed geometry under different
expressions. We also use FaceScape data to generate in-the-wild and
in-the-lab benchmarks to evaluate recent methods of single-view face
reconstruction. Accuracy is reported and analyzed along the dimensions of
camera pose and focal length, which provides a faithful and comprehensive
evaluation and reveals new challenges. The unprecedented dataset, benchmark,
and code have been released at https://github.com/zhuhao-nju/facescape.
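The coarse-plus-detail representation described in the abstract (a 3D morphable model for base shape, with displacement maps adding fine geometry along the surface) can be sketched roughly as follows. This is a minimal illustrative toy, not the actual FaceScape bilinear model; all names, dimensions, and values are hypothetical:

```python
import numpy as np

# Toy dimensions; the real FaceScape model uses far more vertices and
# larger identity/expression bases.
N_VERTS = 4          # vertices in the shared template topology
N_ID, N_EXP = 3, 2   # identity / expression basis sizes

rng = np.random.default_rng(0)
mean_shape = rng.standard_normal((N_VERTS, 3))
id_basis = rng.standard_normal((N_ID, N_VERTS, 3))
exp_basis = rng.standard_normal((N_EXP, N_VERTS, 3))

def coarse_shape(id_coeffs, exp_coeffs):
    """Linear morphable model: mean shape plus identity and expression offsets."""
    return (mean_shape
            + np.tensordot(id_coeffs, id_basis, axes=1)
            + np.tensordot(exp_coeffs, exp_basis, axes=1))

def add_detail(verts, normals, displacement):
    """Apply a per-vertex scalar displacement along the vertex normals,
    mimicking how a displacement map refines the coarse mesh."""
    return verts + displacement[:, None] * normals

id_c = np.array([0.5, -0.2, 0.1])
exp_c = np.array([1.0, 0.0])
verts = coarse_shape(id_c, exp_c)
normals = np.tile([0.0, 0.0, 1.0], (N_VERTS, 1))  # placeholder normals
detailed = add_detail(verts, normals, np.full(N_VERTS, 0.01))
print(detailed.shape)  # (4, 3)
```

Because every face shares one topology, the same displacement-map machinery works across all subjects and expressions, which is what makes expression-specific dynamic details learnable by a network.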
Related papers
- FAMOUS: High-Fidelity Monocular 3D Human Digitization Using View Synthesis [51.193297565630886]
The challenge of accurately inferring texture remains, particularly in obscured areas such as the back of a person in frontal-view images.
This limitation in texture prediction largely stems from the scarcity of large-scale and diverse 3D datasets.
We propose leveraging extensive 2D fashion datasets to enhance both texture and shape prediction in 3D human digitization.
arXiv Detail & Related papers (2024-10-13T01:25:05Z)
- A Large-Scale 3D Face Mesh Video Dataset via Neural Re-parameterized Optimization [17.938604013181426]
We propose NeuFace, a 3D face mesh pseudo annotation method on videos.
We annotate the per-view/frame accurate and consistent face meshes on large-scale face videos, called the NeuFace-dataset.
By exploiting the naturalness and diversity of 3D faces in our dataset, we demonstrate the usefulness of our dataset for 3D face-related tasks.
arXiv Detail & Related papers (2023-10-04T23:24:22Z)
- A lightweight 3D dense facial landmark estimation model from position map data [0.8508775813669867]
We propose a pipeline to create a dense keypoint training dataset containing 520 key points across the whole face.
We train a lightweight MobileNet-based regressor model with the generated data.
Experimental results show that our trained model outperforms many of the existing methods in spite of its lower model size and minimal computational cost.
arXiv Detail & Related papers (2023-08-29T09:53:10Z)
- RAFaRe: Learning Robust and Accurate Non-parametric 3D Face Reconstruction from Pseudo 2D&3D Pairs [13.11105614044699]
We propose a robust and accurate non-parametric method for single-view 3D face reconstruction (SVFR).
A large-scale pseudo 2D&3D dataset is created by first rendering the detailed 3D faces, then swapping the face in the wild images with the rendered face.
Our model outperforms previous methods on FaceScape-wild/lab and MICC benchmarks.
arXiv Detail & Related papers (2023-02-10T19:40:26Z)
- Generating 2D and 3D Master Faces for Dictionary Attacks with a Network-Assisted Latent Space Evolution [68.8204255655161]
A master face is a face image that passes face-based identity authentication for a high percentage of the population.
We optimize these faces for 2D and 3D face verification models.
In 3D, we generate faces using the 2D StyleGAN2 generator and predict a 3D structure using a deep 3D face reconstruction network.
arXiv Detail & Related papers (2022-11-25T09:15:38Z)
- Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previous reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- 3D Face Morphing Attacks: Generation, Vulnerability and Detection [3.700129710233692]
Face Recognition systems have been found to be vulnerable to morphing attacks.
This work presents a novel direction for generating face-morphing attacks in 3D.
arXiv Detail & Related papers (2022-01-10T16:53:39Z)
- FaceDet3D: Facial Expressions with 3D Geometric Detail Prediction [62.5557724039217]
Facial expressions induce a variety of high-level details on the 3D face geometry.
3D Morphable Models (3DMMs) of the human face fail to capture such fine details in their PCA-based representations.
We introduce FaceDet3D, a first-of-its-kind method that generates, from a single image, geometric facial details consistent with any desired target expression.
arXiv Detail & Related papers (2020-12-14T23:07:38Z)
- Learning an Animatable Detailed 3D Face Model from In-The-Wild Images [50.09971525995828]
We present the first approach to jointly learn a model with animatable detail and a detailed 3D face regressor from in-the-wild images.
Our DECA model is trained to robustly produce a UV displacement map from a low-dimensional latent representation.
We introduce a novel detail-consistency loss to disentangle person-specific details and expression-dependent wrinkles.
arXiv Detail & Related papers (2020-12-07T19:30:45Z)
- FaceScape: a Large-scale High Quality 3D Face Dataset and Detailed Riggable 3D Face Prediction [39.95272819738226]
We present a novel algorithm that is able to predict elaborate riggable 3D face models from a single image input.
FaceScape dataset provides 18,760 textured 3D faces, captured from 938 subjects and each with 20 specific expressions.
arXiv Detail & Related papers (2020-03-31T07:11:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.