X-Avatar: Expressive Human Avatars
- URL: http://arxiv.org/abs/2303.04805v2
- Date: Thu, 9 Mar 2023 13:13:07 GMT
- Title: X-Avatar: Expressive Human Avatars
- Authors: Kaiyue Shen, Chen Guo, Manuel Kaufmann, Juan Jose Zarate, Julien
Valentin, Jie Song, Otmar Hilliges
- Abstract summary: We present X-Avatar, a novel avatar model that captures the full expressiveness of digital humans to bring about life-like experiences in telepresence, AR/VR and beyond.
Our method models bodies, hands, facial expressions and appearance in a holistic fashion and can be learned from either full 3D scans or RGB-D data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present X-Avatar, a novel avatar model that captures the full
expressiveness of digital humans to bring about life-like experiences in
telepresence, AR/VR and beyond. Our method models bodies, hands, facial
expressions and appearance in a holistic fashion and can be learned from either
full 3D scans or RGB-D data. To achieve this, we propose a part-aware learned
forward skinning module that can be driven by the parameter space of SMPL-X,
allowing for expressive animation of X-Avatars. To efficiently learn the neural
shape and deformation fields, we propose novel part-aware sampling and
initialization strategies. This leads to higher-fidelity results, especially
for smaller body parts, while maintaining efficient training despite the
increased number of articulated bones. To capture the appearance of the avatar with
high-frequency details, we extend the geometry and deformation fields with a
texture network that is conditioned on pose, facial expression, geometry and
the normals of the deformed surface. We show experimentally that our method
outperforms strong baselines in both data domains both quantitatively and
qualitatively on the animation task. To facilitate future research on
expressive avatars we contribute a new dataset, called X-Humans, containing 233
sequences of high-quality textured scans from 20 participants, totalling 35,500
data frames.
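The part-aware forward skinning module described above extends the standard linear blend skinning (LBS) used by SMPL-X. As a rough illustration of the underlying operation (a generic LBS sketch, not the authors' learned neural implementation; all names here are illustrative):

```python
import numpy as np

def linear_blend_skinning(verts, weights, transforms):
    """Deform canonical vertices with per-bone rigid transforms.

    verts:      (V, 3) canonical vertex positions
    weights:    (V, B) skinning weights, each row summing to 1
    transforms: (B, 4, 4) per-bone homogeneous transforms
    """
    num_verts = verts.shape[0]
    # Homogeneous coordinates: (V, 4)
    verts_h = np.concatenate([verts, np.ones((num_verts, 1))], axis=1)
    # Blend bone transforms per vertex: (V, 4, 4)
    blended = np.einsum("vb,bij->vij", weights, transforms)
    # Apply each vertex's blended transform
    deformed = np.einsum("vij,vj->vi", blended, verts_h)
    return deformed[:, :3]
```

In X-Avatar this mapping is learned in the forward direction and made part-aware (separate treatment of body, hands, and face), with the transforms driven by SMPL-X pose parameters rather than fixed artist-authored weights.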
Related papers
- Deformable 3D Gaussian Splatting for Animatable Human Avatars [50.61374254699761]
We propose a fully explicit approach to construct a digital avatar from as little as a single monocular sequence.
ParDy-Human constitutes an explicit model for realistic dynamic human avatars which requires significantly fewer training views and images.
Learning our avatars requires no additional annotations such as Splat masks; they can be trained with variable backgrounds and infer full-resolution images efficiently even on consumer hardware.
arXiv Detail & Related papers (2023-12-22T20:56:46Z) - TriHuman: A Real-time and Controllable Tri-plane Representation for Detailed Human Geometry and Appearance Synthesis [76.73338151115253]
TriHuman is a novel human-tailored, deformable, and efficient tri-plane representation.
We non-rigidly warp global ray samples into our undeformed tri-plane texture space.
We show how such a tri-plane feature representation can be conditioned on the skeletal motion to account for dynamic appearance and geometry changes.
arXiv Detail & Related papers (2023-12-08T16:40:38Z) - GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians [51.46168990249278]
We present an efficient approach to creating realistic human avatars with dynamic 3D appearances from a single video.
GaussianAvatar is validated on both the public dataset and our collected dataset.
arXiv Detail & Related papers (2023-12-04T18:55:45Z) - GAN-Avatar: Controllable Personalized GAN-based Human Head Avatar [48.21353924040671]
We propose to learn person-specific animatable avatars from images without assuming to have access to precise facial expression tracking.
We learn a mapping from 3DMM facial expression parameters to the latent space of the generative model.
With this scheme, we decouple 3D appearance reconstruction and animation control to achieve high fidelity in image synthesis.
arXiv Detail & Related papers (2023-11-22T19:13:00Z) - XAGen: 3D Expressive Human Avatars Generation [76.69560679209171]
XAGen is the first 3D generative model for human avatars capable of expressive control over body, face, and hands.
We propose a multi-part rendering technique that disentangles the synthesis of body, face, and hands.
Experiments show that XAGen surpasses state-of-the-art methods in terms of realism, diversity, and expressive control abilities.
arXiv Detail & Related papers (2023-11-22T18:30:42Z) - DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models [55.71306021041785]
We present DreamAvatar, a text-and-shape guided framework for generating high-quality 3D human avatars.
We leverage the SMPL model to provide shape and pose guidance for the generation.
We also jointly optimize the losses computed from the full body and from the zoomed-in 3D head to alleviate the common multi-face ''Janus'' problem.
arXiv Detail & Related papers (2023-04-03T12:11:51Z) - DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.