Learning Complete 3D Morphable Face Models from Images and Videos
- URL: http://arxiv.org/abs/2010.01679v1
- Date: Sun, 4 Oct 2020 20:51:23 GMT
- Title: Learning Complete 3D Morphable Face Models from Images and Videos
- Authors: Mallikarjun B R and Ayush Tewari and Hans-Peter Seidel and Mohamed
Elgharib and Christian Theobalt
- Abstract summary: We present the first approach to learn complete 3D models of face identity geometry, albedo and expression just from images and videos.
We show that our learned models better generalize and lead to higher quality image-based reconstructions than existing approaches.
- Score: 88.34033810328201
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most 3D face reconstruction methods rely on 3D morphable models, which
disentangle the space of facial deformations into identity geometry,
expressions and skin reflectance. These models are typically learned from a
limited number of 3D scans and thus do not generalize well across different
identities and expressions. We present the first approach to learn complete 3D
models of face identity geometry, albedo and expression just from images and
videos. The virtually endless collection of such data, in combination with our
self-supervised learning-based approach, allows for learning face models that
generalize beyond the span of existing approaches. Our network design and loss
functions ensure a disentangled parameterization of not only identity and
albedo, but also, for the first time, an expression basis. Our method also
allows for in-the-wild monocular reconstruction at test time. We show that our
learned models better generalize and lead to higher quality image-based
reconstructions than existing approaches.
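A 3D morphable model of the kind described in the abstract is, in its classical form, a linear basis model with separate coefficients for identity geometry, expression, and albedo. A minimal NumPy sketch of that parameterization follows; the dimensions, variable names, and random placeholder bases are illustrative assumptions, not the paper's actual learned model:

```python
import numpy as np

# Toy dimensions: N mesh vertices, K_ID identity/albedo components,
# K_EXP expression components (all values are illustrative).
N, K_ID, K_EXP = 500, 80, 64

rng = np.random.default_rng(0)
mean_shape  = rng.standard_normal(3 * N)          # mean face geometry (xyz per vertex)
basis_id    = rng.standard_normal((3 * N, K_ID))  # identity geometry basis
basis_exp   = rng.standard_normal((3 * N, K_EXP)) # expression basis
mean_albedo = rng.random(3 * N)                   # mean per-vertex RGB albedo
basis_alb   = rng.standard_normal((3 * N, K_ID))  # albedo basis

def reconstruct(alpha, delta, beta):
    """Disentangled linear 3DMM: geometry = mean + identity offset + expression
    offset; albedo = mean + albedo offset. Each factor has its own coefficients,
    which is what makes the parameterization disentangled."""
    shape  = mean_shape + basis_id @ alpha + basis_exp @ delta
    albedo = mean_albedo + basis_alb @ beta
    return shape.reshape(N, 3), albedo.reshape(N, 3)

# Zero coefficients recover the mean face and mean albedo.
shape, albedo = reconstruct(np.zeros(K_ID), np.zeros(K_EXP), np.zeros(K_ID))
assert np.allclose(shape.reshape(-1), mean_shape)
```

The paper's contribution is learning such bases (including, per the abstract, a disentangled expression basis) from images and videos via self-supervision, rather than fitting them to a limited set of 3D scans.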
Related papers
- SPARK: Self-supervised Personalized Real-time Monocular Face Capture [6.093606972415841]
Current state of the art approaches have the ability to regress parametric 3D face models in real-time across a wide range of identities.
We propose a method for high-precision 3D face capture taking advantage of a collection of unconstrained videos of a subject as prior information.
arXiv Detail & Related papers (2024-09-12T12:30:04Z)
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- Identity-Expression Ambiguity in 3D Morphable Face Models [5.38250259923059]
We show that non-orthogonality of the variation in identity and expression can cause identity-expression ambiguity in 3D Morphable Models.
We demonstrate this effect with 3D shapes directly as well as through an inverse rendering task.
arXiv Detail & Related papers (2021-09-29T06:11:43Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct 3D face only from images without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- Learning an Animatable Detailed 3D Face Model from In-The-Wild Images [50.09971525995828]
We present the first approach to jointly learn a model with animatable detail and a detailed 3D face regressor from in-the-wild images.
Our DECA model is trained to robustly produce a UV displacement map from a low-dimensional latent representation.
We introduce a novel detail-consistency loss to disentangle person-specific details and expression-dependent wrinkles.
arXiv Detail & Related papers (2020-12-07T19:30:45Z)
- Learning 3D Face Reconstruction with a Pose Guidance Network [49.13404714366933]
We present a self-supervised learning approach to monocular 3D face reconstruction with a pose guidance network (PGN).
First, we unveil the bottleneck of pose estimation in prior parametric 3D face learning methods, and propose to utilize 3D face landmarks for estimating pose parameters.
With our specially designed PGN, our model can learn from both faces with fully labeled 3D landmarks and unlimited unlabeled in-the-wild face images.
arXiv Detail & Related papers (2020-10-09T06:11:17Z)
- Self-Supervised Monocular 3D Face Reconstruction by Occlusion-Aware Multi-view Geometry Consistency [40.56510679634943]
We propose a self-supervised training architecture by leveraging the multi-view geometry consistency.
We design three novel loss functions for multi-view consistency, including the pixel consistency loss, the depth consistency loss, and the facial landmark-based epipolar loss.
Our method is accurate and robust, especially under large variations of expressions, poses, and illumination conditions.
arXiv Detail & Related papers (2020-07-24T12:36:09Z)
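The three multi-view consistency losses named in the last related paper above can be sketched in toy form. Everything here (the L1 norms, the tensor shapes, and the assumption that view B has already been warped into view A's frame) is an illustrative reading of the abstract, not that paper's implementation:

```python
import numpy as np

def pixel_consistency_loss(img_a, img_b_warped):
    """Photometric L1 between view A and view B warped into A's frame;
    consistent geometry should make the warped image match view A."""
    return np.abs(img_a - img_b_warped).mean()

def depth_consistency_loss(depth_a, depth_b_warped):
    """L1 between A's depth map and B's depth map reprojected into A."""
    return np.abs(depth_a - depth_b_warped).mean()

def epipolar_loss(lms_a, lms_b, F):
    """Landmark-based epipolar constraint: for corresponding 2D landmarks
    x_a, x_b and fundamental matrix F, x_b^T F x_a should be ~0."""
    a = np.concatenate([lms_a, np.ones((len(lms_a), 1))], axis=1)  # (L, 3) homogeneous
    b = np.concatenate([lms_b, np.ones((len(lms_b), 1))], axis=1)  # (L, 3) homogeneous
    residuals = np.einsum("li,ij,lj->l", b, F, a)  # per-landmark x_b^T F x_a
    return np.abs(residuals).mean()
```

All three terms vanish for perfectly consistent views: identical warped images, identical reprojected depths, and landmarks that satisfy the epipolar constraint exactly.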
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences arising from its use.