Democratizing the Creation of Animatable Facial Avatars
- URL: http://arxiv.org/abs/2401.16534v1
- Date: Mon, 29 Jan 2024 20:14:40 GMT
- Title: Democratizing the Creation of Animatable Facial Avatars
- Authors: Yilin Zhu, Dalton Omens, Haodi He, Ron Fedkiw
- Abstract summary: We propose a novel pipeline for obtaining geometry and texture without using a light stage or any other high-end hardware.
A key novel idea consists of warping real-world images to align with the geometry of a template avatar.
Not only can our method be used to obtain a neutral expression geometry and de-lit texture, but it can also be used to improve avatars after they have been imported into an animation system.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In high-end visual effects pipelines, a customized (and expensive) light
stage system is (typically) used to scan an actor in order to acquire both
geometry and texture for various expressions. Aiming towards democratization,
we propose a novel pipeline for obtaining geometry and texture as well as
enough expression information to build a customized person-specific animation
rig without using a light stage or any other high-end hardware (or manual
cleanup). A key novel idea consists of warping real-world images to align with
the geometry of a template avatar and subsequently projecting the warped image
into the template avatar's texture; importantly, this allows us to leverage
baked-in real-world lighting/texture information in order to create surrogate
facial features (and bridge the domain gap) for the sake of geometry
reconstruction. Not only can our method be used to obtain a neutral expression
geometry and de-lit texture, but it can also be used to improve avatars after
they have been imported into an animation system (noting that such imports tend
to be lossy, while also hallucinating various features). Since a default
animation rig will contain template expressions that do not correctly
correspond to those of a particular individual, we use a Simon Says approach to
capture various expressions and build a person-specific animation rig (that
moves like they do). Our aforementioned warping/projection method has high
enough efficacy to reconstruct geometry corresponding to each expressions.
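The warp-then-project step described above can be illustrated with a minimal sketch: align a real-world image with a template avatar by fitting a transform between corresponding 2D landmarks, warp the image with that transform, and then fill the template's texture by sampling the warped image at each texel's projected 2D position. This is not the paper's implementation; the affine warp, nearest-neighbor sampling, and all function names and shapes below are illustrative assumptions.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine mapping src landmarks -> dst landmarks."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # (n, 3) homogeneous coords
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solves X @ A = dst, A is (3, 2)
    return A.T                                   # (2, 3)

def warp_image(img, A, out_shape):
    """Warp img so that A maps image coords into template coords.

    Uses inverse mapping with nearest-neighbor sampling: for each output
    pixel, look up where it came from in the source image.
    """
    H, W = out_shape
    M = np.vstack([A, [0.0, 0.0, 1.0]])          # lift to 3x3 homogeneous
    Minv = np.linalg.inv(M)
    ys, xs = np.mgrid[0:H, 0:W]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])
    src = Minv @ pts                             # back-project output pixels
    sx = np.clip(np.round(src[0]).astype(int), 0, img.shape[1] - 1)
    sy = np.clip(np.round(src[1]).astype(int), 0, img.shape[0] - 1)
    return img[sy, sx].reshape(H, W, -1)

def project_to_texture(warped, texel_xy, tex_size):
    """Fill a square UV texture by sampling the warped image at each texel's
    projected 2D image position (texel_xy: (tex_size*tex_size, 2) coords)."""
    sx = np.clip(np.round(texel_xy[:, 0]).astype(int), 0, warped.shape[1] - 1)
    sy = np.clip(np.round(texel_xy[:, 1]).astype(int), 0, warped.shape[0] - 1)
    return warped[sy, sx].reshape(tex_size, tex_size, -1)
```

In the actual pipeline the correspondence would come from the template avatar's geometry (and the warp would be denser than a single affine), but the flow is the same: align, warp, then bake the warped real-world pixels, lighting included, into the template's texture space.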
Related papers
- TexVocab: Texture Vocabulary-conditioned Human Avatars [42.170169762733835]
TexVocab is a novel avatar representation that constructs a texture vocabulary and associates body poses with texture maps for animation.
Our method is able to create animatable human avatars with detailed and dynamic appearances from RGB videos.
arXiv Detail & Related papers (2024-03-31T01:58:04Z) - SEEAvatar: Photorealistic Text-to-3D Avatar Generation with Constrained Geometry and Appearance [37.85026590250023]
We present SEEAvatar, a method for generating photorealistic 3D avatars from text.
For geometry, we propose to constrain the optimized avatar in a decent global shape with a template avatar.
For appearance generation, we use a diffusion model enhanced by prompt engineering to guide a physically based rendering pipeline.
arXiv Detail & Related papers (2023-12-13T14:48:35Z) - FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z) - TADA! Text to Animatable Digital Avatars [57.52707683788961]
TADA takes textual descriptions and produces expressive 3D avatars with high-quality geometry and lifelike textures.
We derive an optimizable high-resolution body model from SMPL-X with 3D displacements and a texture map.
We render normals and RGB images of the generated character and exploit their latent embeddings in the SDS training process.
arXiv Detail & Related papers (2023-08-21T17:59:10Z) - AvatarReX: Real-time Expressive Full-body Avatars [35.09470037950997]
We present AvatarReX, a new method for learning NeRF-based full-body avatars from video data.
The learnt avatar not only provides expressive control of the body, hands and the face together, but also supports real-time animation and rendering.
arXiv Detail & Related papers (2023-05-08T15:43:00Z) - Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z) - AniPixel: Towards Animatable Pixel-Aligned Human Avatar [65.7175527782209]
AniPixel is a novel animatable and generalizable human avatar reconstruction method.
We propose a neural skinning field based on skeleton-driven deformation to establish the target-to-canonical and canonical-to-observation correspondences.
Experiments show that AniPixel renders comparable novel views while delivering better novel pose animation results than state-of-the-art methods.
arXiv Detail & Related papers (2023-02-07T11:04:14Z) - PointAvatar: Deformable Point-based Head Avatars from Videos [103.43941945044294]
PointAvatar is a deformable point-based representation that disentangles the source color into intrinsic albedo and normal-dependent shading.
We show that our method is able to generate animatable 3D avatars using monocular videos from multiple sources.
arXiv Detail & Related papers (2022-12-16T10:05:31Z) - I M Avatar: Implicit Morphable Head Avatars from Videos [68.13409777995392]
We propose IMavatar, a novel method for learning implicit head avatars from monocular videos.
Inspired by the fine-grained control mechanisms afforded by conventional 3DMMs, we represent the expression- and pose-related deformations via learned blendshapes and skinning fields.
We show quantitatively and qualitatively that our method improves geometry and covers a more complete expression space compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-14T15:30:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.