AvatarGen: A 3D Generative Model for Animatable Human Avatars
- URL: http://arxiv.org/abs/2211.14589v1
- Date: Sat, 26 Nov 2022 15:15:45 GMT
- Title: AvatarGen: A 3D Generative Model for Animatable Human Avatars
- Authors: Jianfeng Zhang and Zihang Jiang and Dingdong Yang and Hongyi Xu and
Yichun Shi and Guoxian Song and Zhongcong Xu and Xinchao Wang and Jiashi Feng
- Abstract summary: AvatarGen is a method for unsupervised generation of 3D-aware clothed humans with various appearances and controllable geometries.
Our method can generate animatable 3D human avatars with high-quality appearance and geometry modeling.
It is competent for many applications, e.g., single-view reconstruction, re-animation, and text-guided synthesis/editing.
- Score: 108.11137221845352
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised generation of 3D-aware clothed humans with various appearances
and controllable geometries is important for creating virtual human avatars and
other AR/VR applications. Existing methods are either limited to rigid object
modeling, or not generative and thus unable to generate high-quality virtual
humans and animate them. In this work, we propose AvatarGen, the first method
that enables not only geometry-aware clothed human synthesis with high-fidelity
appearances but also disentangled human animation controllability, while only
requiring 2D images for training. Specifically, we decompose the generative 3D
human synthesis into pose-guided mapping and canonical representation with
predefined human pose and shape, such that the canonical representation can be
explicitly driven to different poses and shapes with the guidance of a 3D
parametric human model SMPL. AvatarGen further introduces a deformation network
to learn non-rigid deformations for modeling fine-grained geometric details and
pose-dependent dynamics. To improve the geometry quality of the generated human
avatars, it leverages the signed distance field as a geometric proxy, which
allows more direct regularization from the 3D geometric priors of SMPL.
Benefiting from these designs, our method can generate animatable 3D human
avatars with high-quality appearance and geometry modeling, significantly
outperforming previous 3D GANs. Furthermore, it is competent for many
applications, e.g., single-view reconstruction, re-animation, and text-guided
synthesis/editing. Code and pre-trained model will be available at
http://jeff95.me/projects/avatargen.html.
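As a concrete reading of the pipeline above: the pose-guided mapping warps each posed-space query point into the shared canonical space through SMPL linear blend skinning, and the SDF proxy lets the generated geometry be regularized against the coarse SMPL body. Below is a minimal sketch of both pieces; the function names, the use of NumPy, and the simple L2 prior are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def inverse_lbs(x_posed, bone_transforms, skin_weights):
    """Warp a posed-space query point back to the canonical space.

    x_posed:         (3,) point sampled along a camera ray in posed space.
    bone_transforms: (K, 4, 4) per-bone rigid transforms from the SMPL pose.
    skin_weights:    (K,) skinning weights for this point (non-negative,
                     summing to 1), e.g. taken from the nearest SMPL vertex.
    """
    # Blend the bone transforms with the skinning weights (standard LBS),
    # then invert the blended transform to map posed -> canonical.
    blended = np.einsum("k,kij->ij", skin_weights, bone_transforms)  # (4, 4)
    x_h = np.append(x_posed, 1.0)  # homogeneous coordinates
    return (np.linalg.inv(blended) @ x_h)[:3]

def sdf_prior_loss(pred_sdf, smpl_sdf):
    """L2 penalty keeping the generated SDF close to the coarse SMPL body SDF.

    Both arguments are (N,) signed distances evaluated at the same
    canonical-space sample points.
    """
    return np.mean((pred_sdf - smpl_sdf) ** 2)
```

Because the rigid warp is recomputed per target pose, a single canonical representation can be re-driven to arbitrary SMPL poses and shapes; the deformation network described in the abstract then only has to model the non-rigid residual (clothing wrinkles, pose-dependent dynamics) on top of it.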
Related papers
- En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data [36.51674664590734]
We present En3D, an enhanced generative scheme for sculpting high-quality 3D human avatars.
Unlike previous works that rely on scarce 3D datasets or limited 2D collections with imbalanced viewing angles and pose priors, our approach aims to develop a zero-shot 3D generative scheme capable of producing realistic 3D humans.
arXiv Detail & Related papers (2024-01-02T12:06:31Z)
- Deformable 3D Gaussian Splatting for Animatable Human Avatars [50.61374254699761]
We propose a fully explicit approach to construct a digital avatar from as little as a single monocular sequence.
ParDy-Human constitutes an explicit model for realistic dynamic human avatars which requires significantly fewer training views and images.
Our avatar learning is free of additional annotations such as Splat masks and can be trained with variable backgrounds while inferring full-resolution images efficiently even on consumer hardware.
arXiv Detail & Related papers (2023-12-22T20:56:46Z)
- XAGen: 3D Expressive Human Avatars Generation [76.69560679209171]
XAGen is the first 3D generative model for human avatars capable of expressive control over body, face, and hands.
We propose a multi-part rendering technique that disentangles the synthesis of body, face, and hands.
Experiments show that XAGen surpasses state-of-the-art methods in terms of realism, diversity, and expressive control abilities.
arXiv Detail & Related papers (2023-11-22T18:30:42Z)
- DreamHuman: Animatable 3D Avatars from Text [41.30635787166307]
We present DreamHuman, a method to generate realistic animatable 3D human avatar models solely from textual descriptions.
Our 3D models have diverse appearance, clothing, skin tones and body shapes, and significantly outperform both generic text-to-3D approaches and previous text-based 3D avatar generators in visual fidelity.
arXiv Detail & Related papers (2023-06-15T17:58:21Z)
- AG3D: Learning to Generate 3D Avatars from 2D Image Collections [96.28021214088746]
We propose a new adversarial generative model of realistic 3D people from 2D images.
Our method captures shape and deformation of the body and loose clothing by adopting a holistic 3D generator.
We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance.
arXiv Detail & Related papers (2023-05-03T17:56:24Z)
- DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models [55.71306021041785]
We present DreamAvatar, a text-and-shape guided framework for generating high-quality 3D human avatars.
We leverage the SMPL model to provide shape and pose guidance for the generation.
We also jointly optimize the losses computed from the full body and from the zoomed-in 3D head to alleviate the common multi-face "Janus" problem (see the sketch below).
arXiv Detail & Related papers (2023-04-03T12:11:51Z)
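As a rough illustration of that dual-observation idea: the function name, the loss form, and the head_weight value below are assumptions for illustration, not DreamAvatar's actual objective:

```python
def dual_observation_loss(score_fn, full_body_img, head_crop_img, head_weight=0.5):
    """Combine a full-body objective with a zoomed-in head objective.

    score_fn is any per-image loss (e.g. a diffusion-guidance score);
    head_weight is an illustrative hyperparameter. Supervising the head
    crop on its own discourages the duplicated-face "Janus" failure mode
    of text-guided 3D optimization.
    """
    return score_fn(full_body_img) + head_weight * score_fn(head_crop_img)
```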
- AvatarGen: a 3D Generative Model for Animatable Human Avatars [108.11137221845352]
AvatarGen is the first method that enables not only non-rigid human generation with diverse appearance but also full control over poses and viewpoints.
To model non-rigid dynamics, it introduces a deformation network to learn pose-dependent deformations in the canonical space.
Our method can generate animatable human avatars with high-quality appearance and geometry modeling, significantly outperforming previous 3D GANs.
arXiv Detail & Related papers (2022-08-01T01:27:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.