XAGen: 3D Expressive Human Avatars Generation
- URL: http://arxiv.org/abs/2311.13574v1
- Date: Wed, 22 Nov 2023 18:30:42 GMT
- Title: XAGen: 3D Expressive Human Avatars Generation
- Authors: Zhongcong Xu, Jianfeng Zhang, Jun Hao Liew, Jiashi Feng, Mike Zheng Shou
- Abstract summary: XAGen is the first 3D generative model for human avatars capable of expressive control over body, face, and hands.
We propose a multi-part rendering technique that disentangles the synthesis of body, face, and hands.
Experiments show that XAGen surpasses state-of-the-art methods in terms of realism, diversity, and expressive control abilities.
- Score: 76.69560679209171
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in 3D-aware GAN models have enabled the generation of
realistic and controllable human body images. However, existing methods focus
on the control of major body joints, neglecting the manipulation of expressive
attributes, such as facial expressions, jaw poses, hand poses, and so on. In
this work, we present XAGen, the first 3D generative model for human avatars
capable of expressive control over body, face, and hands. To enhance the
fidelity of small-scale regions like face and hands, we devise a multi-scale
and multi-part 3D representation that models fine details. Based on this
representation, we propose a multi-part rendering technique that disentangles
the synthesis of body, face, and hands to ease model training and enhance
geometric quality. Furthermore, we design multi-part discriminators that
evaluate the quality of the generated avatars with respect to their appearance
and fine-grained control capabilities. Experiments show that XAGen surpasses
state-of-the-art methods in terms of realism, diversity, and expressive control
abilities. Code and data will be made available at
https://showlab.github.io/xagen.
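The multi-part rendering and multi-part discriminator idea from the abstract can be illustrated with a minimal sketch. The code below is an assumption-based illustration, not the authors' implementation: the part names, crop boxes, and the `PartDiscriminator` module are hypothetical, and in the actual method the face and hand views come from dedicated higher-resolution renderings of the multi-scale 3D representation rather than from cropping a finished body image.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartDiscriminator(nn.Module):
    """Tiny patch-style discriminator for one body part (hypothetical)."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # real/fake score map
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def crop_and_resize(img: torch.Tensor, box: tuple, size: int) -> torch.Tensor:
    """Crop a (B, C, H, W) image with a normalized (x0, y0, x1, y1) box, then resize."""
    _, _, h, w = img.shape
    x0, y0, x1, y1 = box
    crop = img[:, :, int(y0 * h):int(y1 * h), int(x0 * w):int(x1 * w)]
    return F.interpolate(crop, size=(size, size), mode="bilinear", align_corners=False)

# One discriminator per part: body at full resolution, face and hands as
# upsampled small regions so fine details contribute to the adversarial loss.
parts = {
    "body":  ((0.0, 0.0, 1.0, 1.0), 128),
    "face":  ((0.35, 0.05, 0.65, 0.25), 64),   # hypothetical face region
    "hands": ((0.05, 0.45, 0.30, 0.65), 64),   # hypothetical hand region
}
discriminators = nn.ModuleDict({name: PartDiscriminator() for name in parts})

fake_image = torch.rand(2, 3, 128, 128)  # stand-in for a generated rendering

# Non-saturating generator loss accumulated over all parts.
g_loss = 0.0
for name, (box, size) in parts.items():
    patch = crop_and_resize(fake_image, box, size)
    score = discriminators[name](patch)
    g_loss = g_loss + F.softplus(-score).mean()
print(float(g_loss))
```
The design point this sketch captures is that each part gets its own adversarial signal at a resolution where its details are visible, so supervision on small regions such as the face and hands is not drowned out by the full-body view.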
Related papers
- AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections [78.81539337399391]
We present an animatable 3D-aware GAN that generates portrait images with controllable facial expression, head pose, and shoulder movements.
It is a generative model trained on unstructured 2D image collections without using 3D or video data.
A dual-camera rendering and adversarial learning scheme is proposed to improve the quality of the generated faces.
arXiv Detail & Related papers (2023-09-05T12:44:57Z)
- DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models [55.71306021041785]
We present DreamAvatar, a text-and-shape guided framework for generating high-quality 3D human avatars.
We leverage the SMPL model to provide shape and pose guidance for the generation.
We also jointly optimize the losses computed from the full body and from the zoomed-in 3D head to alleviate the common multi-face ''Janus'' problem.
arXiv Detail & Related papers (2023-04-03T12:11:51Z)
- X-Avatar: Expressive Human Avatars [33.24502928725897]
We present X-Avatar, a novel avatar model that captures the full expressiveness of digital humans to bring about life-like experiences in telepresence, AR/VR and beyond.
Our method models bodies, hands, facial expressions and appearance in a holistic fashion and can be learned from either full 3D scans or RGB-D data.
arXiv Detail & Related papers (2023-03-08T18:59:39Z)
- AvatarGen: A 3D Generative Model for Animatable Human Avatars [108.11137221845352]
AvatarGen is an unsupervised method for generating 3D-aware clothed humans with various appearances and controllable geometries.
Our method can generate animatable 3D human avatars with high-quality appearance and geometry modeling.
It is competent for many applications, e.g., single-view reconstruction, re-animation, and text-guided synthesis/editing.
arXiv Detail & Related papers (2022-11-26T15:15:45Z)
- Free-HeadGAN: Neural Talking Head Synthesis with Explicit Gaze Control [54.079327030892244]
Free-HeadGAN is a person-generic neural talking head synthesis system.
We show that modeling faces with sparse 3D facial landmarks is sufficient for achieving state-of-the-art generative performance.
arXiv Detail & Related papers (2022-08-03T16:46:08Z)
- AvatarGen: a 3D Generative Model for Animatable Human Avatars [108.11137221845352]
AvatarGen is the first method that enables not only non-rigid human generation with diverse appearance but also full control over poses and viewpoints.
To model non-rigid dynamics, it introduces a deformation network to learn pose-dependent deformations in the canonical space.
Our method can generate animatable human avatars with high-quality appearance and geometry modeling, significantly outperforming previous 3D GANs.
arXiv Detail & Related papers (2022-08-01T01:27:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences arising from its use.