FaceVerse: a Fine-grained and Detail-controllable 3D Face Morphable
Model from a Hybrid Dataset
- URL: http://arxiv.org/abs/2203.14057v2
- Date: Tue, 29 Mar 2022 03:13:33 GMT
- Title: FaceVerse: a Fine-grained and Detail-controllable 3D Face Morphable
Model from a Hybrid Dataset
- Authors: Lizhen Wang, Zhiyuan Chen, Tao Yu, Chenguang Ma, Liang Li, Yebin Liu
- Abstract summary: FaceVerse is built from hybrid East Asian face datasets containing 60K fused RGB-D images and 2K high-fidelity 3D head scan models.
In the coarse module, we generate a base parametric model from large-scale RGB-D images, which can predict accurate coarse 3D face models across different genders, ages, etc.
In the fine module, a conditional StyleGAN architecture trained with high-fidelity scan models is introduced to enrich elaborate facial geometric and texture details.
- Score: 36.688730105295015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present FaceVerse, a fine-grained 3D Neural Face Model, which is built
from hybrid East Asian face datasets containing 60K fused RGB-D images and 2K
high-fidelity 3D head scan models. A novel coarse-to-fine structure is proposed
to take better advantage of our hybrid dataset. In the coarse module, we
generate a base parametric model from large-scale RGB-D images, which can
predict accurate coarse 3D face models across different genders, ages, etc. Then
in the fine module, a conditional StyleGAN architecture trained with
high-fidelity scan models is introduced to enrich elaborate facial geometric
and texture details. Unlike previous methods, both our base and detail
modules are adjustable, which enables a novel application: editing both the
basic attributes and the facial details of a 3D face model. Furthermore, we
propose a single-image fitting framework based on differentiable rendering.
Extensive experiments show that our method outperforms state-of-the-art
methods.
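To make the fitting framework concrete, here is a minimal, hypothetical sketch (Python/PyTorch, not the authors' code): it fits a standard linear parametric face model to detected 2D landmarks by gradient descent through a simplified differentiable projection, standing in for the full differentiable-rendering pipeline. All names, dimensions, and bases below are illustrative assumptions.

# Hypothetical sketch, not the authors' implementation: coarse single-image
# fitting of a linear morphable face model via a differentiable projection.
import torch

N_VERTS, N_ID, N_EXP = 5000, 150, 52            # assumed model sizes
mean_shape = torch.randn(N_VERTS, 3)             # placeholder mean shape
id_basis   = torch.randn(N_ID,  N_VERTS, 3)      # placeholder identity basis
exp_basis  = torch.randn(N_EXP, N_VERTS, 3)      # placeholder expression basis
lmk_idx    = torch.arange(68)                    # assumed landmark vertex indices
target_lmk = torch.randn(68, 2)                  # detected 2D landmarks (stand-in)

def coarse_shape(alpha, beta):
    # Linear morphable-model shape: mean + identity offsets + expression offsets.
    return (mean_shape
            + torch.einsum('i,ivc->vc', alpha, id_basis)
            + torch.einsum('e,evc->vc', beta, exp_basis))

def project(verts, scale, trans):
    # Simplified differentiable camera: scaled orthographic projection to 2D.
    return scale * verts[:, :2] + trans

alpha = torch.zeros(N_ID,  requires_grad=True)   # identity coefficients
beta  = torch.zeros(N_EXP, requires_grad=True)   # expression coefficients
scale = torch.ones(1,  requires_grad=True)       # camera scale
trans = torch.zeros(2, requires_grad=True)       # 2D translation
opt = torch.optim.Adam([alpha, beta, scale, trans], lr=1e-2)

for step in range(500):
    opt.zero_grad()
    verts = coarse_shape(alpha, beta)
    pred_lmk = project(verts[lmk_idx], scale, trans)
    loss = (pred_lmk - target_lmk).pow(2).mean()                   # landmark fit
    loss = loss + 1e-3 * (alpha.pow(2).sum() + beta.pow(2).sum())  # coefficient prior
    loss.backward()
    opt.step()

A complete system would replace the landmark term with photometric losses computed through a differentiable rasterizer and follow the coarse fit with a detail stage (e.g., a conditional StyleGAN refiner); the point here is only that every step from model coefficients to image-space error is differentiable, so the coefficients can be optimized directly for a single input image.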
Related papers
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- A Hierarchical Representation Network for Accurate and Detailed Face Reconstruction from In-The-Wild Images [15.40230841242637]
We present a novel hierarchical representation network (HRN) to achieve accurate and detailed face reconstruction from a single image.
Our framework can be extended to a multi-view fashion by considering detail consistency of different views.
Our method outperforms the existing methods in both reconstruction accuracy and visual effects.
arXiv Detail & Related papers (2023-02-28T09:24:36Z)
- Pixel2Mesh++: 3D Mesh Generation and Refinement from Multi-View Images [82.32776379815712]
We study the problem of shape generation in 3D mesh representation from a small number of color images with or without camera poses.
We further improve the shape quality by leveraging cross-view information with a graph convolution network.
Our model is robust to the quality of the initial mesh and the error of camera pose, and can be combined with a differentiable function for test-time optimization.
arXiv Detail & Related papers (2022-04-21T03:42:31Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing arts by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- FaceScape: 3D Facial Dataset and Benchmark for Single-View 3D Face Reconstruction [29.920622006999732]
We present a large-scale detailed 3D face dataset, FaceScape, and the corresponding benchmark to evaluate single-view facial 3D reconstruction.
By training on FaceScape data, a novel algorithm is proposed to predict elaborate riggable 3D face models from a single image input.
We also use FaceScape data to generate the in-the-wild and in-the-lab benchmark to evaluate recent methods of single-view face reconstruction.
arXiv Detail & Related papers (2021-11-01T16:48:34Z)
- Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstructions and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)
- JNR: Joint-based Neural Rig Representation for Compact 3D Face Modeling [22.584569656416864]
We introduce a novel approach to learn a 3D face model using a joint-based face rig and a neural skinning network.
Thanks to the joint-based representation, our model enjoys some significant advantages over prior blendshape-based models.
arXiv Detail & Related papers (2020-07-14T01:21:37Z)
- FaceScape: a Large-scale High Quality 3D Face Dataset and Detailed Riggable 3D Face Prediction [39.95272819738226]
We present a novel algorithm that is able to predict elaborate riggable 3D face models from a single image input.
FaceScape dataset provides 18,760 textured 3D faces, captured from 938 subjects and each with 20 specific expressions.
arXiv Detail & Related papers (2020-03-31T07:11:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.