Personalized Face Modeling for Improved Face Reconstruction and Motion Retargeting
- URL: http://arxiv.org/abs/2007.06759v2
- Date: Fri, 17 Jul 2020 23:08:43 GMT
- Title: Personalized Face Modeling for Improved Face Reconstruction and Motion Retargeting
- Authors: Bindita Chaudhuri, Noranart Vesdapunt, Linda Shapiro, Baoyuan Wang
- Abstract summary: We propose an end-to-end framework that jointly learns a personalized face model per user and per-frame facial motion parameters.
Specifically, we learn user-specific expression blendshapes and dynamic (expression-specific) albedo maps by predicting personalized corrections.
Experimental results show that our personalization accurately captures fine-grained facial dynamics in a wide range of conditions.
- Score: 22.24046752858929
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional methods for image-based 3D face reconstruction and facial motion
retargeting fit a 3D morphable model (3DMM) to the face, which has limited
modeling capacity and fails to generalize well to in-the-wild data. Using
deformation transfer or a multilinear tensor as a personalized 3DMM for
blendshape interpolation does not address the fact that facial expressions
produce different local and global skin deformations in different people.
Moreover, existing methods learn a single albedo per user which is not enough
to capture the expression-specific skin reflectance variations. We propose an
end-to-end framework that jointly learns a personalized face model per user and
per-frame facial motion parameters from a large corpus of in-the-wild videos of
user expressions. Specifically, we learn user-specific expression blendshapes
and dynamic (expression-specific) albedo maps by predicting personalized
corrections on top of a 3DMM prior. We introduce novel constraints to ensure
that the corrected blendshapes retain their semantic meanings and the
reconstructed geometry is disentangled from the albedo. Experimental results
show that our personalization accurately captures fine-grained facial dynamics
in a wide range of conditions and efficiently decouples the learned face model
from facial motion, resulting in more accurate face reconstruction and facial
motion retargeting compared to state-of-the-art methods.
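The core modeling idea in the abstract, reconstructing a face as a 3DMM prior plus learned user-specific corrections driven by per-frame expression weights, can be sketched as a few lines of linear algebra. This is an illustrative sketch only: the function and array names are assumptions, and in the actual paper the corrective deltas are predicted by a trained network rather than passed in directly.

```python
import numpy as np

def personalized_blendshape_mesh(neutral, prior_blendshapes, corrections, weights):
    """Reconstruct a face mesh as the neutral shape plus a weighted sum of
    personalized blendshapes (generic 3DMM blendshapes + learned per-user
    corrective deltas). Shapes are illustrative assumptions:

    neutral:            (V, 3) neutral face vertices
    prior_blendshapes:  (K, V, 3) generic 3DMM expression blendshapes
    corrections:        (K, V, 3) learned user-specific corrective deltas
    weights:            (K,) per-frame expression coefficients
    """
    personalized = prior_blendshapes + corrections         # user-specific basis
    offsets = np.tensordot(weights, personalized, axes=1)  # (V, 3) expression offset
    return neutral + offsets
```

The same additive-correction pattern applies to the dynamic albedo maps described above, with texture maps in place of vertex positions; the paper's semantic constraints then keep each corrected blendshape tied to its original expression.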
Related papers
- SPARK: Self-supervised Personalized Real-time Monocular Face Capture [6.093606972415841]
Current state-of-the-art approaches can regress parametric 3D face models in real time across a wide range of identities.
We propose a method for high-precision 3D face capture taking advantage of a collection of unconstrained videos of a subject as prior information.
arXiv Detail & Related papers (2024-09-12T12:30:04Z)
- ImFace++: A Sophisticated Nonlinear 3D Morphable Face Model with Implicit Neural Representations [25.016000421755162]
This paper presents a novel 3D morphable face model, named ImFace++, to learn a sophisticated and continuous space with implicit neural representations.
ImFace++ first constructs two explicitly disentangled deformation fields to model complex shapes associated with identities and expressions.
A refinement displacement field within the template space is further incorporated, enabling fine-grained learning of individual-specific facial details.
arXiv Detail & Related papers (2023-12-07T03:53:53Z)
- BlendFields: Few-Shot Example-Driven Facial Modeling [35.86727715239676]
We introduce a method that bridges the gap by drawing inspiration from traditional computer graphics techniques.
Unseen expressions are modeled by blending appearance from a sparse set of extreme poses.
We show that our method generalizes to unseen expressions, adding fine-grained effects on top of smooth volumetric deformations of a face, and demonstrate how it generalizes beyond faces.
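The BlendFields idea summarized above, synthesizing an unseen expression by blending appearance from a sparse set of extreme exemplars, can be illustrated with a simple similarity-weighted blend. The softmax-over-distances weighting below is an assumed, illustrative choice, not necessarily the weighting scheme BlendFields actually uses.

```python
import numpy as np

def blend_appearance(query_expr, exemplar_exprs, exemplar_features, temperature=0.1):
    """Blend appearance features from a sparse set of exemplar (extreme)
    expressions. Each exemplar is weighted by its closeness to the query
    expression; weights are convex so the result stays inside the span
    of observed appearances.

    query_expr:        (D,) expression code for the query
    exemplar_exprs:    (N, D) expression codes of the exemplars
    exemplar_features: (N, F) appearance features of the exemplars
    """
    d = np.linalg.norm(exemplar_exprs - query_expr, axis=1)  # (N,) distances
    w = np.exp(-d / temperature)
    w /= w.sum()                                             # convex blend weights
    return w @ exemplar_features                             # (F,) blended feature
```

With a small temperature, a query matching one exemplar recovers that exemplar's appearance almost exactly; larger temperatures interpolate more smoothly between exemplars.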
arXiv Detail & Related papers (2023-05-12T14:30:07Z)
- ImFace: A Nonlinear 3D Morphable Face Model with Implicit Neural Representations [21.389170615787368]
This paper presents a novel 3D morphable face model, namely ImFace, to learn a nonlinear and continuous space with implicit neural representations.
It builds two explicitly disentangled deformation fields to model complex shapes associated with identities and expressions, respectively, and designs an improved learning strategy to extend embeddings of expressions.
In addition to ImFace, an effective preprocessing pipeline is proposed to address the issue of watertight input requirement in implicit representations.
arXiv Detail & Related papers (2022-03-28T05:37:59Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping [116.1022638063613]
We propose HifiFace, which can preserve the face shape of the source face and generate photo-realistic results.
We introduce the Semantic Facial Fusion module to optimize the combination of encoder and decoder features.
arXiv Detail & Related papers (2021-06-18T07:39:09Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct 3D face only from images without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- Inverting Generative Adversarial Renderer for Face Reconstruction [58.45125455811038]
In this work, we introduce a novel Generative Adversarial Renderer (GAR).
GAR learns to model complicated real-world images; instead of relying on graphics rules, it is capable of producing realistic images.
Our method achieves state-of-the-art performance on multiple face reconstruction benchmarks.
arXiv Detail & Related papers (2021-05-06T04:16:06Z)
- Learning an Animatable Detailed 3D Face Model from In-The-Wild Images [50.09971525995828]
We present the first approach to jointly learn a model with animatable detail and a detailed 3D face regressor from in-the-wild images.
Our DECA model is trained to robustly produce a UV displacement map from a low-dimensional latent representation.
We introduce a novel detail-consistency loss to disentangle person-specific details and expression-dependent wrinkles.
arXiv Detail & Related papers (2020-12-07T19:30:45Z)
- Learning Complete 3D Morphable Face Models from Images and Videos [88.34033810328201]
We present the first approach to learn complete 3D models of face identity geometry, albedo and expression just from images and videos.
We show that our learned models better generalize and lead to higher quality image-based reconstructions than existing approaches.
arXiv Detail & Related papers (2020-10-04T20:51:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.