Learning an Animatable Detailed 3D Face Model from In-The-Wild Images
- URL: http://arxiv.org/abs/2012.04012v1
- Date: Mon, 7 Dec 2020 19:30:45 GMT
- Title: Learning an Animatable Detailed 3D Face Model from In-The-Wild Images
- Authors: Yao Feng and Haiwen Feng and Michael J. Black and Timo Bolkart
- Abstract summary: We present the first approach to jointly learn a model with animatable detail and a detailed 3D face regressor from in-the-wild images.
Our DECA model is trained to robustly produce a UV displacement map from a low-dimensional latent representation.
We introduce a novel detail-consistency loss to disentangle person-specific details and expression-dependent wrinkles.
- Score: 50.09971525995828
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While current monocular 3D face reconstruction methods can recover fine
geometric details, they suffer several limitations. Some methods produce faces
that cannot be realistically animated because they do not model how wrinkles
vary with expression. Other methods are trained on high-quality face scans and
do not generalize well to in-the-wild images. We present the first approach to
jointly learn a model with animatable detail and a detailed 3D face regressor
from in-the-wild images that recovers shape details as well as their
relationship to facial expressions. Our DECA (Detailed Expression Capture and
Animation) model is trained to robustly produce a UV displacement map from a
low-dimensional latent representation that consists of person-specific detail
parameters and generic expression parameters, while a regressor is trained to
predict detail, shape, albedo, expression, pose and illumination parameters
from a single image. We introduce a novel detail-consistency loss to
disentangle person-specific details and expression-dependent wrinkles. This
disentanglement allows us to synthesize realistic person-specific wrinkles by
controlling expression parameters while keeping person-specific details
unchanged. DECA achieves state-of-the-art shape reconstruction accuracy on two
benchmarks. Qualitative results on in-the-wild data demonstrate DECA's
robustness and its ability to disentangle identity- and expression-dependent
details, enabling animation of reconstructed faces. The model and code are
publicly available at https://github.com/YadiraF/DECA.
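To make the detail-consistency idea concrete, below is a minimal PyTorch sketch: two images of the same person are encoded into person-specific detail codes, the codes are swapped, and the decoded UV displacement maps are required to match. All module names, dimensions, and the direct comparison of displacement maps (the paper compares rendered outputs) are illustrative assumptions, not DECA's actual API.
```python
# Minimal sketch of a detail-consistency loss, assuming hypothetical
# DetailEncoder/DetailDecoder modules and toy dimensions (not DECA's API).
import torch
import torch.nn as nn

class DetailEncoder(nn.Module):
    """Maps an image to a person-specific detail code (illustrative stand-in)."""
    def __init__(self, code_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, code_dim),
        )
    def forward(self, img):
        return self.net(img)

class DetailDecoder(nn.Module):
    """Decodes a detail code plus expression parameters into a UV displacement map."""
    def __init__(self, code_dim=128, exp_dim=50, uv_size=64):
        super().__init__()
        self.uv_size = uv_size
        self.net = nn.Sequential(
            nn.Linear(code_dim + exp_dim, 256), nn.ReLU(),
            nn.Linear(256, uv_size * uv_size),
        )
    def forward(self, detail_code, exp_params):
        x = torch.cat([detail_code, exp_params], dim=-1)
        return self.net(x).view(-1, 1, self.uv_size, self.uv_size)

def detail_consistency_loss(dec, code_a, code_b, exp_a, exp_b):
    # Both images show the same person, so decoding with the *other*
    # image's detail code should leave the displacement map unchanged.
    disp_a = dec(code_a, exp_a)
    disp_b = dec(code_b, exp_b)
    disp_a_swapped = dec(code_b, exp_a)  # B's details, A's expression
    disp_b_swapped = dec(code_a, exp_b)  # A's details, B's expression
    return ((disp_a - disp_a_swapped).abs().mean()
            + (disp_b - disp_b_swapped).abs().mean())

# Toy usage: two images of the same identity, with expression parameters
# as predicted by the single-image regressor in the full pipeline.
enc, dec = DetailEncoder(), DetailDecoder()
img_a, img_b = torch.rand(2, 1, 3, 64, 64).unbind(0)
exp_a, exp_b = torch.rand(2, 1, 50).unbind(0)
loss = detail_consistency_loss(dec, enc(img_a), enc(img_b), exp_a, exp_b)
```
In the paper itself, the swapped result is compared photometrically after rendering; the sketch compares displacement maps directly only to keep the example short.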
Related papers
- GaFET: Learning Geometry-aware Facial Expression Translation from In-The-Wild Images [55.431697263581626]
We introduce a novel Geometry-aware Facial Expression Translation framework that is based on parametric 3D facial representations and can stably decouple expression.
We achieve higher-quality and more accurate facial expression transfer than state-of-the-art methods, and demonstrate applicability to various poses and complex textures.
arXiv Detail & Related papers (2023-08-07T09:03:35Z)
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- HiFace: High-Fidelity 3D Face Reconstruction by Learning Static and Dynamic Details [66.74088288846491]
HiFace aims at high-fidelity 3D face reconstruction with dynamic and static details.
We exploit several loss functions to jointly learn the coarse shape and fine details with both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-03-20T16:07:02Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct 3D faces from images alone, without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- Personalized Face Modeling for Improved Face Reconstruction and Motion Retargeting [22.24046752858929]
We propose an end-to-end framework that jointly learns a personalized face model per user and per-frame facial motion parameters.
Specifically, we learn user-specific expression blendshapes and dynamic (expression-specific) albedo maps by predicting personalized corrections.
Experimental results show that our personalization accurately captures fine-grained facial dynamics in a wide range of conditions.
arXiv Detail & Related papers (2020-07-14T01:30:14Z)
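The personalized blendshape idea in the last entry above reduces to simple linear algebra: per-frame expression weights blend a generic basis plus a learned user-specific correction. Below is a minimal NumPy sketch; the array shapes and the correction term are illustrative assumptions standing in for the network-predicted personalization.
```python
# Minimal sketch of a linear blendshape model with a personalized correction.
# Sizes and the correction values are illustrative, not the paper's actual model.
import numpy as np

n_vertices, n_blendshapes = 5023, 52
template = np.zeros((n_vertices, 3))                                 # neutral face mesh
blendshapes = np.random.randn(n_blendshapes, n_vertices, 3) * 0.01   # generic expression basis
personal_correction = np.random.randn(n_blendshapes, n_vertices, 3) * 0.001  # learned per user

def posed_vertices(weights):
    """Blend the (generic + personalized) expression basis with per-frame weights."""
    basis = blendshapes + personal_correction  # user-specific blendshapes
    return template + np.tensordot(weights, basis, axes=1)

verts = posed_vertices(np.random.rand(n_blendshapes))
print(verts.shape)  # (5023, 3)
```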