BlendFields: Few-Shot Example-Driven Facial Modeling
- URL: http://arxiv.org/abs/2305.07514v1
- Date: Fri, 12 May 2023 14:30:07 GMT
- Title: BlendFields: Few-Shot Example-Driven Facial Modeling
- Authors: Kacper Kania, Stephan J. Garbin, Andrea Tagliasacchi, Virginia
Estellers, Kwang Moo Yi, Julien Valentin, Tomasz Trzciński, Marek Kowalski
- Abstract summary: We introduce a method that bridges the gap by drawing inspiration from traditional computer graphics techniques.
Unseen expressions are modeled by blending appearance from a sparse set of extreme poses.
We show that our method generalizes to unseen expressions, adding fine-grained effects on top of smooth volumetric deformations of a face, and demonstrate how it generalizes beyond faces.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generating faithful visualizations of human faces requires capturing both
coarse and fine-level details of the face geometry and appearance. Existing
methods are either data-driven, requiring an extensive corpus of data not
publicly accessible to the research community, or fail to capture fine details
because they rely on geometric face models that cannot represent fine-grained
details in texture with a mesh discretization and linear deformation designed
to model only a coarse face geometry. We introduce a method that bridges this
gap by drawing inspiration from traditional computer graphics techniques.
Unseen expressions are modeled by blending appearance from a sparse set of
extreme poses. This blending is performed by measuring local volumetric changes
in those expressions and locally reproducing their appearance whenever a
similar expression is performed at test time. We show that our method
generalizes to unseen expressions, adding fine-grained effects on top of smooth
volumetric deformations of a face, and demonstrate how it generalizes beyond
faces.
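The abstract describes blending appearance from a sparse set of extreme expressions by measuring local volumetric changes and reusing an exemplar's appearance wherever the test-time expression deforms the face similarly. A minimal NumPy sketch of that idea follows; it is not the paper's implementation, and the per-tetrahedron volume descriptor and softmax-style weighting are illustrative assumptions:

```python
import numpy as np

def volume_change(rest_tets, deformed_tets):
    """Relative volume change per tetrahedron between rest and deformed states.
    rest_tets, deformed_tets: (T, 4, 3) arrays of tetrahedron vertex positions."""
    def vol(t):
        # unsigned tetrahedron volume: |det of the three edge vectors| / 6
        e = t[:, 1:] - t[:, :1]
        return np.abs(np.linalg.det(e)) / 6.0
    return vol(deformed_tets) / np.maximum(vol(rest_tets), 1e-12) - 1.0

def blend_weights(test_change, exemplar_changes, temperature=0.1):
    """Blend weights over K extreme-expression exemplars, from the similarity of
    local volume changes. test_change: (T,), exemplar_changes: (K, T)."""
    d = np.linalg.norm(exemplar_changes - test_change[None, :], axis=1)
    w = np.exp(-d / temperature)  # closer exemplars get larger weight
    return w / w.sum()
```

An exemplar whose local volume changes match the test expression receives a weight near 1, so its fine-grained appearance (e.g. wrinkles) dominates in that region, on top of the smooth volumetric deformation.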
Related papers
- GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations [54.94362657501809]
We propose a new method to generate highly dynamic and deformable human head avatars from multi-view imagery in real-time.
At the core of our method is a hierarchical representation of head models that captures the complex dynamics of facial expressions and head movements.
We train this coarse-to-fine facial avatar model along with the head pose as a learnable parameter in an end-to-end framework.
arXiv Detail & Related papers (2024-09-18T13:05:43Z)
- ImFace++: A Sophisticated Nonlinear 3D Morphable Face Model with Implicit Neural Representations [25.016000421755162]
This paper presents a novel 3D morphable face model, named ImFace++, to learn a sophisticated and continuous space with implicit neural representations.
ImFace++ first constructs two explicitly disentangled deformation fields to model complex shapes associated with identities and expressions.
A refinement displacement field within the template space is further incorporated, enabling fine-grained learning of individual-specific facial details.
arXiv Detail & Related papers (2023-12-07T03:53:53Z)
- GaFET: Learning Geometry-aware Facial Expression Translation from In-The-Wild Images [55.431697263581626]
We introduce a novel Geometry-aware Facial Expression Translation framework, which is based on parametric 3D facial representations and can stably decouple expressions.
We achieve higher-quality and more accurate facial expression transfer results compared to state-of-the-art methods, and demonstrate its applicability to various poses and complex textures.
arXiv Detail & Related papers (2023-08-07T09:03:35Z)
- ImFace: A Nonlinear 3D Morphable Face Model with Implicit Neural Representations [21.389170615787368]
This paper presents a novel 3D morphable face model, namely ImFace, to learn a nonlinear and continuous space with implicit neural representations.
It builds two explicitly disentangled deformation fields to model complex shapes associated with identities and expressions, respectively, and designs an improved learning strategy to extend embeddings of expressions.
In addition to ImFace, an effective preprocessing pipeline is proposed to address the watertight-input requirement of implicit representations.
arXiv Detail & Related papers (2022-03-28T05:37:59Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- I M Avatar: Implicit Morphable Head Avatars from Videos [68.13409777995392]
We propose IMavatar, a novel method for learning implicit head avatars from monocular videos.
Inspired by the fine-grained control mechanisms afforded by conventional 3DMMs, we represent the expression- and pose-related deformations via learned blendshapes and skinning fields.
We show quantitatively and qualitatively that our method improves geometry and covers a more complete expression space compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-14T15:30:32Z)
- Learning an Animatable Detailed 3D Face Model from In-The-Wild Images [50.09971525995828]
We present the first approach to jointly learn a model with animatable detail and a detailed 3D face regressor from in-the-wild images.
Our DECA model is trained to robustly produce a UV displacement map from a low-dimensional latent representation.
We introduce a novel detail-consistency loss to disentangle person-specific details and expression-dependent wrinkles.
arXiv Detail & Related papers (2020-12-07T19:30:45Z)
- Learning Complete 3D Morphable Face Models from Images and Videos [88.34033810328201]
We present the first approach to learn complete 3D models of face identity geometry, albedo and expression just from images and videos.
We show that our learned models better generalize and lead to higher quality image-based reconstructions than existing approaches.
arXiv Detail & Related papers (2020-10-04T20:51:23Z)
- Personalized Face Modeling for Improved Face Reconstruction and Motion Retargeting [22.24046752858929]
We propose an end-to-end framework that jointly learns a personalized face model per user and per-frame facial motion parameters.
Specifically, we learn user-specific expression blendshapes and dynamic (expression-specific) albedo maps by predicting personalized corrections.
Experimental results show that our personalization accurately captures fine-grained facial dynamics in a wide range of conditions.
arXiv Detail & Related papers (2020-07-14T01:30:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.