Implicit Neural Head Synthesis via Controllable Local Deformation Fields
- URL: http://arxiv.org/abs/2304.11113v1
- Date: Fri, 21 Apr 2023 16:35:28 GMT
- Title: Implicit Neural Head Synthesis via Controllable Local Deformation Fields
- Authors: Chuhan Chen, Matthew O'Toole, Gaurav Bharaj, Pablo Garrido
- Abstract summary: We build on part-based implicit shape models that decompose a global deformation field into local ones.
Our novel formulation models multiple implicit deformation fields with local semantic rig-like control via 3DMM-based parameters.
Our formulation renders sharper locally controllable nonlinear deformations than previous implicit monocular approaches.
- Score: 12.191729556779972
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-quality reconstruction of controllable 3D head avatars from 2D videos is
highly desirable for virtual human applications in movies, games, and
telepresence. Neural implicit fields provide a powerful representation to model
3D head avatars with personalized shape, expressions, and facial parts, e.g.,
hair and mouth interior, that go beyond the linear 3D morphable model (3DMM).
However, existing methods neither model fine-scale facial features nor offer
local control of facial parts, e.g., to extrapolate asymmetric expressions
from monocular videos. Further, most methods condition only on 3DMM
parameters, which have poor locality, and resolve local features with a
single global neural field. We build on
part-based implicit shape models that decompose a global deformation field into
local ones. Our novel formulation models multiple implicit deformation fields
with local semantic rig-like control via 3DMM-based parameters, and
representative facial landmarks. Further, we propose a local control loss and an
attention mask mechanism that promote sparsity of each learned deformation
field. Our formulation renders sharper locally controllable nonlinear
deformations than previous implicit monocular approaches, especially for the mouth
interior, asymmetric expressions, and facial details.
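To make this formulation concrete, here is a minimal PyTorch sketch of the general pattern, not the authors' implementation: several part-specific MLP deformation fields conditioned on per-part 3DMM-derived codes are blended by a learned attention mask, and an L1 penalty on the masks stands in for the paper's local control loss. All module names, dimensions, and the sigmoid-mask design are illustrative assumptions.
```python
import torch
import torch.nn as nn

class LocalDeformationFields(nn.Module):
    """K local deformation fields blended by per-point attention masks."""

    def __init__(self, num_parts=4, code_dim=16, hidden=64):
        super().__init__()
        # One small MLP per facial part: (query point, part code) -> offset.
        self.fields = nn.ModuleList(
            nn.Sequential(nn.Linear(3 + code_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 3))
            for _ in range(num_parts)
        )
        # Soft per-part mask for each query point (sigmoid rather than
        # softmax so an L1 penalty can actually push masks toward zero).
        self.mask_net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                      nn.Linear(hidden, num_parts),
                                      nn.Sigmoid())

    def forward(self, x, part_codes):
        # x: (N, 3) query points; part_codes: (K, code_dim) per-part codes.
        masks = self.mask_net(x)                                   # (N, K)
        offsets = torch.stack(
            [f(torch.cat([x, c.expand(x.shape[0], -1)], dim=-1))
             for f, c in zip(self.fields, part_codes)], dim=1)     # (N, K, 3)
        return x + (masks.unsqueeze(-1) * offsets).sum(dim=1), masks

def local_control_loss(masks):
    # L1 penalty: each part's mask should be nonzero only near that part,
    # keeping every learned deformation field spatially sparse.
    return masks.abs().mean()

model = LocalDeformationFields()
deformed, masks = model(torch.rand(1024, 3), torch.randn(4, 16))
sparsity = local_control_loss(masks)
```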
Related papers
- Decaf: Monocular Deformation Capture for Face and Hand Interactions [77.75726740605748]
This paper introduces the first method that allows tracking human hands interacting with human faces in 3D from a single monocular RGB video.
We model hands as articulated objects inducing non-rigid face deformations during an active interaction.
Our method relies on a new hand-face motion and interaction capture dataset with realistic face deformations acquired with a markerless multi-view camera system.
arXiv Detail & Related papers (2023-09-28T17:59:51Z)
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- Learning Personalized High Quality Volumetric Head Avatars from Monocular RGB Videos [47.94545609011594]
We propose a method to learn a high-quality implicit 3D head avatar from a monocular RGB video captured in the wild.
Our hybrid pipeline combines the geometry prior and dynamic tracking of a 3DMM with a neural radiance field to achieve fine-grained control and photorealism.
arXiv Detail & Related papers (2023-04-04T01:10:04Z)
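The hybrid recipe in the entry above, 3DMM tracking driving a neural radiance field, can be illustrated with a toy PyTorch sketch; this is not the paper's pipeline, and the module name and all dimensions are hypothetical.
```python
import torch
import torch.nn as nn

class ExpressionConditionedField(nn.Module):
    """Toy radiance field conditioned on tracked 3DMM expression codes."""

    def __init__(self, expr_dim=10, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + expr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # density + RGB
        )

    def forward(self, x, expr):
        # x: (N, 3) sample points; expr: (expr_dim,) per-frame expression
        # code from 3DMM tracking, broadcast to every sample point.
        h = self.mlp(torch.cat([x, expr.expand(x.shape[0], -1)], dim=-1))
        return torch.relu(h[:, :1]), torch.sigmoid(h[:, 1:])  # density, color

field = ExpressionConditionedField()
density, color = field(torch.rand(256, 3), torch.randn(10))
```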
- Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars [36.4402388864691]
3D-aware generative adversarial networks (GANs) synthesize high-fidelity and multi-view-consistent facial images using only collections of single-view 2D imagery.
Recent efforts incorporate the 3D Morphable Face Model (3DMM) to describe deformation in generative radiance fields, either explicitly or implicitly.
We propose a novel 3D GAN framework for unsupervised learning of generative, high-quality and 3D-consistent facial avatars from unstructured 2D images.
arXiv Detail & Related papers (2022-11-21T06:40:46Z)
- Controllable 3D Generative Adversarial Face Model via Disentangling Shape and Appearance [63.13801759915835]
3D face modeling has been an active area of research in computer vision and computer graphics.
This paper proposes a new 3D face generative model that can decouple identity and expression.
arXiv Detail & Related papers (2022-08-30T13:40:48Z)
- ImFace: A Nonlinear 3D Morphable Face Model with Implicit Neural Representations [21.389170615787368]
This paper presents a novel 3D morphable face model, namely ImFace, to learn a nonlinear and continuous space with implicit neural representations.
It builds two explicitly disentangled deformation fields to model complex shapes associated with identities and expressions, respectively, and designs an improved learning strategy to extend embeddings of expressions.
In addition, an effective preprocessing pipeline is proposed to address the watertight input requirement of implicit representations.
arXiv Detail & Related papers (2022-03-28T05:37:59Z)
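A minimal PyTorch sketch of the two-field factorization in the ImFace summary above: an expression deformation field, then an identity deformation field, warp each query point before a shared template SDF evaluates it. The helper, class names, and code sizes are illustrative assumptions, not the paper's implementation.
```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class DisentangledSDF(nn.Module):
    """Expression warp -> identity warp -> shared template SDF."""

    def __init__(self, id_dim=8, exp_dim=8):
        super().__init__()
        self.exp_warp = mlp(3 + exp_dim, 3)  # expression deformation field
        self.id_warp = mlp(3 + id_dim, 3)    # identity deformation field
        self.template = mlp(3, 1)            # template signed distance field

    def forward(self, x, id_code, exp_code):
        n = x.shape[0]
        x = x + self.exp_warp(torch.cat([x, exp_code.expand(n, -1)], dim=-1))
        x = x + self.id_warp(torch.cat([x, id_code.expand(n, -1)], dim=-1))
        return self.template(x)  # (N, 1) signed distances

sdf = DisentangledSDF()
dists = sdf(torch.rand(512, 3), torch.randn(8), torch.randn(8))
```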
- I M Avatar: Implicit Morphable Head Avatars from Videos [68.13409777995392]
We propose IMavatar, a novel method for learning implicit head avatars from monocular videos.
Inspired by the fine-grained control mechanisms afforded by conventional 3DMMs, we represent the expression- and pose-related deformations via learned blendshapes and skinning fields.
We show quantitatively and qualitatively that our method improves geometry and covers a more complete expression space compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-14T15:30:32Z)
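The IMavatar entry above pairs learned blendshapes with skinning fields; the toy PyTorch sketch below (not the authors' code) shows one way such a deformer can be wired: a per-point blendshape basis adds expression offsets, then linear blend skinning with a learned weight field applies pose. Names and dimensions are assumptions.
```python
import torch
import torch.nn as nn

class BlendshapeSkinningDeformer(nn.Module):
    """Learned blendshape offsets followed by learned linear blend skinning."""

    def __init__(self, num_exp=10, num_bones=4, hidden=64):
        super().__init__()
        # Per-point expression blendshape basis and skinning weight field,
        # both predicted from the canonical position.
        self.basis = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                   nn.Linear(hidden, num_exp * 3))
        self.skin = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                  nn.Linear(hidden, num_bones),
                                  nn.Softmax(dim=-1))
        self.num_exp = num_exp

    def forward(self, x, exp_params, bone_transforms):
        # x: (N, 3) canonical points; exp_params: (num_exp,);
        # bone_transforms: (B, 4, 4) rigid per-bone transforms.
        n = x.shape[0]
        basis = self.basis(x).view(n, self.num_exp, 3)
        x = x + (exp_params.view(1, -1, 1) * basis).sum(dim=1)  # blendshapes
        weights = self.skin(x)                                  # (N, B)
        xh = torch.cat([x, torch.ones(n, 1)], dim=-1)           # homogeneous
        per_bone = torch.einsum('bij,nj->nbi', bone_transforms, xh)[..., :3]
        return (weights.unsqueeze(-1) * per_bone).sum(dim=1)    # (N, 3)

deformer = BlendshapeSkinningDeformer()
deformed = deformer(torch.rand(256, 3), torch.randn(10),
                    torch.eye(4).expand(4, 4, 4).clone())
```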
- SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes [117.76767853430243]
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
arXiv Detail & Related papers (2021-04-08T17:54:59Z)
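SNARF's key step, recovering the canonical correspondence of a deformed point by root finding on the forward skinning map, can be sketched in PyTorch as below. This simplified version uses a plain fixed-point iteration and returns a single root, whereas SNARF uses Broyden's method with multiple initializations to find all correspondences; the toy weight field and every name here are illustrative assumptions.
```python
import torch

def forward_skin(x_c, bone_transforms, skin_weights_fn):
    # Forward LBS: deform canonical points with learned skinning weights.
    w = skin_weights_fn(x_c)                                   # (N, B)
    xh = torch.cat([x_c, torch.ones(x_c.shape[0], 1)], dim=-1)
    per_bone = torch.einsum('bij,nj->nbi', bone_transforms, xh)[..., :3]
    return (w.unsqueeze(-1) * per_bone).sum(dim=1)             # (N, 3)

def canonical_correspondence(x_d, bone_transforms, skin_weights_fn, iters=20):
    # Solve forward_skin(x_c) == x_d by iterative root finding: a
    # quasi-Newton update with an identity Jacobian approximation.
    x_c = x_d.clone()  # initialize at the deformed location
    for _ in range(iters):
        x_c = x_c - (forward_skin(x_c, bone_transforms, skin_weights_fn) - x_d)
    return x_c

# Tiny smoke test: two bones with distance-based soft skinning weights.
def toy_weights(x):
    d = torch.stack([(x - 0.2).norm(dim=-1), (x + 0.2).norm(dim=-1)], dim=-1)
    return torch.softmax(-d, dim=-1)

bones = torch.eye(4).expand(2, 4, 4).clone()
bones[1, 0, 3] = 0.1  # translate the second bone slightly along x
x_d = forward_skin(torch.rand(8, 3), bones, toy_weights)
x_c = canonical_correspondence(x_d, bones, toy_weights)
```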
- Personalized Face Modeling for Improved Face Reconstruction and Motion Retargeting [22.24046752858929]
We propose an end-to-end framework that jointly learns a personalized face model per user and per-frame facial motion parameters.
Specifically, we learn user-specific expression blendshapes and dynamic (expression-specific) albedo maps by predicting personalized corrections.
Experimental results show that our personalization accurately captures fine-grained facial dynamics in a wide range of conditions.
arXiv Detail & Related papers (2020-07-14T01:30:14Z)
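As a rough PyTorch sketch of the personalization idea in the last entry (not the paper's framework): generic expression blendshapes receive user-specific corrective offsets predicted from a user embedding. Module names and all sizes are hypothetical.
```python
import torch
import torch.nn as nn

class PersonalizedBlendshapes(nn.Module):
    """Generic blendshapes plus learned per-user corrective offsets."""

    def __init__(self, num_verts=100, num_exp=10, user_dim=16, hidden=64):
        super().__init__()
        self.neutral = nn.Parameter(torch.zeros(num_verts, 3))
        self.generic = nn.Parameter(0.01 * torch.randn(num_exp, num_verts, 3))
        # Predict corrections to the neutral shape and each basis vector
        # from a per-user embedding.
        self.correct = nn.Sequential(
            nn.Linear(user_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, (num_exp + 1) * num_verts * 3))
        self.num_exp, self.num_verts = num_exp, num_verts

    def forward(self, user_code, exp_params):
        # user_code: (user_dim,); exp_params: (num_exp,) per-frame weights.
        delta = self.correct(user_code).view(self.num_exp + 1,
                                             self.num_verts, 3)
        neutral = self.neutral + delta[0]        # personalized neutral
        basis = self.generic + delta[1:]         # personalized blendshapes
        return neutral + (exp_params.view(-1, 1, 1) * basis).sum(dim=0)

model = PersonalizedBlendshapes()
verts = model(torch.randn(16), torch.rand(10))  # (num_verts, 3) mesh
```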
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.