I M Avatar: Implicit Morphable Head Avatars from Videos
- URL: http://arxiv.org/abs/2112.07471v2
- Date: Wed, 15 Dec 2021 15:55:34 GMT
- Title: I M Avatar: Implicit Morphable Head Avatars from Videos
- Authors: Yufeng Zheng, Victoria Fernández Abrevaya, Xu Chen, Marcel C.
Bühler, Michael J. Black, Otmar Hilliges
- Abstract summary: We propose IMavatar, a novel method for learning implicit head avatars from monocular videos.
Inspired by the fine-grained control mechanisms afforded by conventional 3DMMs, we represent the expression- and pose-related deformations via learned blendshapes and skinning fields.
We show quantitatively and qualitatively that our method improves geometry and covers a more complete expression space compared to state-of-the-art methods.
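The summary above describes deforming a canonical head via learned expression blendshapes followed by skinning. As a rough illustration of that pipeline (not the paper's implementation; all shapes and names here are hypothetical), expression offsets are added to canonical points and the result is warped by linearly blended bone transforms:

```python
import numpy as np

def deform(x_canonical, expr_blendshapes, expr_params, lbs_weights, bone_transforms):
    """Morph canonical points with expression blendshapes, then linear blend skinning.

    Hypothetical shapes (illustration only, not the paper's API):
      x_canonical:      (N, 3) canonical points
      expr_blendshapes: (N, 3, E) per-point expression offset basis
      expr_params:      (E,) expression coefficients
      lbs_weights:      (N, B) skinning weights, rows sum to 1
      bone_transforms:  (B, 4, 4) rigid bone transforms
    """
    # Add expression-dependent offsets to the canonical geometry
    x = x_canonical + expr_blendshapes @ expr_params            # (N, 3)
    # Homogeneous coordinates so rigid transforms apply as 4x4 matrices
    x_h = np.concatenate([x, np.ones((len(x), 1))], axis=1)     # (N, 4)
    # Blend the bone transforms per point, then apply to each point
    T = np.einsum("nb,bij->nij", lbs_weights, bone_transforms)  # (N, 4, 4)
    return np.einsum("nij,nj->ni", T, x_h)[:, :3]
```

In IMavatar the blendshapes and skinning weights are themselves learned fields queried at each canonical point, rather than fixed per-vertex attributes as sketched here.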
- Score: 68.13409777995392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional morphable face models provide fine-grained control over
expression but cannot easily capture geometric and appearance details. Neural
volumetric representations approach photo-realism but are hard to animate and
do not generalize well to unseen expressions. To tackle this problem, we
propose IMavatar (Implicit Morphable avatar), a novel method for learning
implicit head avatars from monocular videos. Inspired by the fine-grained
control mechanisms afforded by conventional 3DMMs, we represent the expression-
and pose-related deformations via learned blendshapes and skinning fields.
These attributes are pose-independent and can be used to morph the canonical
geometry and texture fields given novel expression and pose parameters. We
employ ray tracing and iterative root-finding to locate the canonical surface
intersection for each pixel. A key contribution is our novel analytical
gradient formulation that enables end-to-end training of IMavatars from videos.
We show quantitatively and qualitatively that our method improves geometry and
covers a more complete expression space compared to state-of-the-art methods.
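The abstract's "ray tracing and iterative root-finding" step can be pictured with a generic sphere-tracing loop on a signed distance function: march along each camera ray, stepping by the current distance value until the zero level set is reached. This is a minimal sketch of the general idea, assuming an SDF representation; the paper's actual scheme additionally searches through the deformation field to find the corresponding canonical surface point.

```python
import numpy as np

def sphere_trace(sdf, origin, direction, max_steps=64, eps=1e-5):
    """Find the first intersection of a ray with an implicit surface.

    Iteratively steps along the ray by the signed distance, a safe step
    size that cannot overshoot the surface. Returns the hit point, or
    None if the ray misses within max_steps.
    """
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)             # signed distance at the current point
        if abs(d) < eps:       # converged onto the zero level set
            return p
        t += d                 # advance by the distance bound
    return None

# Unit-sphere SDF used purely for illustration
unit_sphere = lambda p: np.linalg.norm(p) - 1.0
```

The analytical gradient contribution mentioned above addresses a separate issue: the intersection point found by such an iterative search is not directly differentiable through the loop, so gradients must be derived in closed form (e.g. via implicit differentiation) for end-to-end training.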
Related papers
- GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations [54.94362657501809]
We propose a new method to generate highly dynamic and deformable human head avatars from multi-view imagery in real-time.
At the core of our method is a hierarchical representation of head models that captures the complex dynamics of facial expressions and head movements.
We train this coarse-to-fine facial avatar model along with the head pose as a learnable parameter in an end-to-end framework.
arXiv Detail & Related papers (2024-09-18T13:05:43Z)
- GeneAvatar: Generic Expression-Aware Volumetric Head Avatar Editing from a Single Image [89.70322127648349]
We propose a generic avatar editing approach that can be universally applied to various 3DMM driving volumetric head avatars.
To achieve this goal, we design a novel expression-aware modification generative model that lifts 2D editing from a single image to a consistent 3D modification field.
arXiv Detail & Related papers (2024-04-02T17:58:35Z)
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z)
- Generalizable One-shot Neural Head Avatar [90.50492165284724]
We present a method that reconstructs and animates a 3D head avatar from a single-view portrait image.
We propose a framework that not only generalizes to unseen identities based on a single-view image, but also captures characteristic details within and beyond the face area.
arXiv Detail & Related papers (2023-06-14T22:33:09Z)
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- Neural Head Avatars from Monocular RGB Videos [0.0]
We present a novel neural representation that explicitly models the surface geometry and appearance of an animatable human avatar.
Our representation can be learned from a monocular RGB portrait video that features a range of different expressions and views.
arXiv Detail & Related papers (2021-12-02T19:01:05Z)
- VariTex: Variational Neural Face Textures [0.0]
VariTex is a method that learns a variational latent feature space of neural face textures.
To generate images of complete human heads, we propose an additive decoder that generates plausible additional details such as hair.
The resulting method can generate geometrically consistent images of novel identities allowing fine-grained control over head pose, face shape, and facial expressions.
arXiv Detail & Related papers (2021-04-13T07:47:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.