FaceScape: a Large-scale High Quality 3D Face Dataset and Detailed
Riggable 3D Face Prediction
- URL: http://arxiv.org/abs/2003.13989v3
- Date: Tue, 21 Apr 2020 17:18:48 GMT
- Authors: Haotian Yang, Hao Zhu, Yanru Wang, Mingkai Huang, Qiu Shen, Ruigang
Yang, Xun Cao
- Abstract summary: We present a novel algorithm that is able to predict elaborate riggable 3D face models from a single image input.
The FaceScape dataset provides 18,760 textured 3D faces, captured from 938 subjects, each with 20 specific expressions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a large-scale detailed 3D face dataset, FaceScape,
and propose a novel algorithm that is able to predict elaborate riggable 3D
face models from a single image input. The FaceScape dataset provides 18,760
textured 3D faces, captured from 938 subjects, each with 20 specific
expressions. The 3D models contain pore-level facial geometry and are
processed to be topologically uniform. These fine 3D facial models can be
represented as a 3D morphable model for rough shapes and displacement maps for
detailed geometry. Taking advantage of the large-scale and high-accuracy
dataset, a novel algorithm is further proposed to learn the expression-specific
dynamic details using a deep neural network. The learned relationship serves as
the foundation of our 3D face prediction system from a single image input.
Unlike previous methods, our predicted 3D models are riggable, with
highly detailed geometry under different expressions. The unprecedented dataset
and code will be released to the public for research purposes.
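The representation described above — a linear 3D morphable model for the coarse shape plus a per-vertex displacement along surface normals for fine detail — can be sketched as follows. This is a minimal illustration only; the array sizes, basis construction, and function names are hypothetical and do not reflect FaceScape's actual data layout or API (in particular, FaceScape stores detail as displacement maps sampled in UV space rather than per-vertex scalars):

```python
import numpy as np

# Hypothetical sizes: V vertices, K identity bases, E expression bases.
V, K, E = 5000, 50, 20
rng = np.random.default_rng(0)

mean_shape = rng.standard_normal((V, 3))            # mean face mesh
id_basis = rng.standard_normal((K, V, 3)) * 0.01    # identity (shape) basis
exp_basis = rng.standard_normal((E, V, 3)) * 0.01   # expression basis

def coarse_shape(id_coeffs, exp_coeffs):
    """Linear 3DMM: mean face + identity offsets + expression offsets."""
    return (mean_shape
            + np.tensordot(id_coeffs, id_basis, axes=1)
            + np.tensordot(exp_coeffs, exp_basis, axes=1))

def add_displacement(verts, normals, disp):
    """Offset each vertex along its normal by a scalar displacement,
    standing in for sampling a displacement map in UV space."""
    return verts + disp[:, None] * normals

id_c, exp_c = rng.standard_normal(K), rng.standard_normal(E)
verts = coarse_shape(id_c, exp_c)              # (V, 3) coarse geometry
normals = np.tile([0.0, 0.0, 1.0], (V, 1))     # placeholder unit normals
detailed = add_displacement(verts, normals, rng.standard_normal(V) * 1e-3)
print(detailed.shape)  # (5000, 3)
```

Rigging in this picture amounts to keeping the identity coefficients fixed while varying the expression coefficients, with the learned expression-specific displacement supplying the matching dynamic detail.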
Related papers
- FAMOUS: High-Fidelity Monocular 3D Human Digitization Using View Synthesis [51.193297565630886]
The challenge of accurately inferring texture remains, particularly in obscured areas such as the back of a person in frontal-view images.
This limitation in texture prediction largely stems from the scarcity of large-scale and diverse 3D datasets.
We propose leveraging extensive 2D fashion datasets to enhance both texture and shape prediction in 3D human digitization.
arXiv Detail & Related papers (2024-10-13T01:25:05Z)
- DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data [50.164670363633704]
We present DIRECT-3D, a diffusion-based 3D generative model for creating high-quality 3D assets from text prompts.
Our model is directly trained on extensive noisy and unaligned 'in-the-wild' 3D assets.
We achieve state-of-the-art performance in both single-class generation and text-to-3D generation.
arXiv Detail & Related papers (2024-06-06T17:58:15Z)
- A Large-Scale 3D Face Mesh Video Dataset via Neural Re-parameterized Optimization [17.938604013181426]
We propose NeuFace, a 3D face mesh pseudo-annotation method for videos.
We annotate accurate and consistent per-view/frame face meshes on large-scale face videos, called the NeuFace-dataset.
By exploiting the naturalness and diversity of 3D faces in our dataset, we demonstrate its usefulness for 3D face-related tasks.
arXiv Detail & Related papers (2023-10-04T23:24:22Z)
- A Lightweight 3D Dense Facial Landmark Estimation Model from Position Map Data [0.8508775813669867]
We propose a pipeline to create a dense-keypoint training dataset containing 520 key points across the whole face.
We train a lightweight MobileNet-based regressor model with the generated data.
Experimental results show that our trained model outperforms many existing methods despite its smaller model size and minimal computational cost.
arXiv Detail & Related papers (2023-08-29T09:53:10Z)
- RAFaRe: Learning Robust and Accurate Non-parametric 3D Face Reconstruction from Pseudo 2D&3D Pairs [13.11105614044699]
We propose a robust and accurate non-parametric method for single-view 3D face reconstruction (SVFR).
A large-scale pseudo 2D&3D dataset is created by first rendering detailed 3D faces, then swapping the face in in-the-wild images with the rendered face.
Our model outperforms previous methods on the FaceScape-wild/lab and MICC benchmarks.
arXiv Detail & Related papers (2023-02-10T19:40:26Z)
- Generating 2D and 3D Master Faces for Dictionary Attacks with a Network-Assisted Latent Space Evolution [68.8204255655161]
A master face is a face image that passes face-based identity authentication for a high percentage of the population.
We optimize these faces for 2D and 3D face verification models.
In 3D, we generate faces using the 2D StyleGAN2 generator and predict a 3D structure using a deep 3D face reconstruction network.
arXiv Detail & Related papers (2022-11-25T09:15:38Z)
- Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previously reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- 3D Face Morphing Attacks: Generation, Vulnerability and Detection [3.700129710233692]
Face recognition systems have been found to be vulnerable to morphing attacks.
This work presents a novel direction for generating face-morphing attacks in 3D.
arXiv Detail & Related papers (2022-01-10T16:53:39Z)
- FaceScape: 3D Facial Dataset and Benchmark for Single-View 3D Face Reconstruction [29.920622006999732]
We present a large-scale detailed 3D face dataset, FaceScape, and the corresponding benchmark to evaluate single-view facial 3D reconstruction.
By training on FaceScape data, a novel algorithm is proposed to predict elaborate riggable 3D face models from a single image input.
We also use FaceScape data to generate in-the-wild and in-the-lab benchmarks to evaluate recent single-view face reconstruction methods.
arXiv Detail & Related papers (2021-11-01T16:48:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.