AvatarGen: a 3D Generative Model for Animatable Human Avatars
- URL: http://arxiv.org/abs/2208.00561v1
- Date: Mon, 1 Aug 2022 01:27:02 GMT
- Title: AvatarGen: a 3D Generative Model for Animatable Human Avatars
- Authors: Jianfeng Zhang and Zihang Jiang and Dingdong Yang and Hongyi Xu and
Yichun Shi and Guoxian Song and Zhongcong Xu and Xinchao Wang and Jiashi Feng
- Abstract summary: AvatarGen is the first method that enables not only non-rigid human generation with diverse appearances but also full control over poses and viewpoints.
To model non-rigid dynamics, it introduces a deformation network to learn pose-dependent deformations in the canonical space.
Our method can generate animatable human avatars with high-quality appearance and geometry modeling, significantly outperforming previous 3D GANs.
- Score: 108.11137221845352
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised generation of clothed virtual humans with diverse appearances and animatable poses is important for creating 3D human avatars and other AR/VR applications. Existing methods are either limited to rigid object modeling, or are not generative and thus unable to synthesize high-quality virtual humans and animate them. In this work, we propose AvatarGen, the first method that enables not only non-rigid human generation with diverse appearances but also full control over poses and viewpoints, while requiring only 2D images for training. Specifically, it extends recent 3D GANs to clothed human generation by using a coarse human body model as a proxy to warp the observation space into a standard avatar in a canonical space. To model non-rigid dynamics, it introduces a deformation network that learns pose-dependent deformations in the canonical space. To improve the geometry quality of the generated human avatars, it leverages a signed distance field as the geometric representation, which allows more direct regularization of the learned geometry by the body model. Benefiting from these designs, our method can generate animatable human avatars with high-quality appearance and geometry modeling, significantly outperforming previous 3D GANs. Furthermore, it supports many applications, e.g., single-view reconstruction, reanimation, and text-guided synthesis. Code and pre-trained models will be made available.
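To make the described pipeline concrete, below is a minimal sketch of its three components: warping observation-space points back to a canonical space via inverse linear blend skinning driven by a coarse body model (e.g., SMPL), a deformation network for pose-dependent non-rigid offsets, and an SDF network for geometry. All names here (inverse_lbs, DeformNet, SDFNet) and the tensor shapes are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class DeformNet(nn.Module):
    """Predicts pose-dependent, non-rigid offsets in canonical space (assumed design)."""
    def __init__(self, pose_dim=72, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x_canonical, pose):
        # Broadcast the pose vector to every query point, then predict an offset.
        pose = pose.unsqueeze(1).expand(-1, x_canonical.shape[1], -1)
        return self.mlp(torch.cat([x_canonical, pose], dim=-1))

class SDFNet(nn.Module):
    """Maps canonical-space points to a signed distance value (geometry)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x_canonical):
        return self.mlp(x_canonical)

def inverse_lbs(x_obs, skinning_weights, bone_transforms):
    """Warp observation-space points to canonical space with inverse linear
    blend skinning driven by a coarse body model (e.g., SMPL).
    Shapes: x_obs (B, N, 3), skinning_weights (B, N, J),
    bone_transforms (B, J, 4, 4)."""
    # Blend per-bone transforms by the skinning weights, then apply the inverse.
    blended = torch.einsum('bnj,bjkl->bnkl', skinning_weights, bone_transforms)
    x_h = torch.cat([x_obs, torch.ones_like(x_obs[..., :1])], dim=-1)
    x_canonical = torch.einsum('bnkl,bnl->bnk', torch.inverse(blended), x_h)
    return x_canonical[..., :3]

# Toy forward pass with random inputs (B=2 samples, N=1024 points, J=24 joints).
B, N, J = 2, 1024, 24
x_obs = torch.randn(B, N, 3)
weights = torch.softmax(torch.randn(B, N, J), dim=-1)
transforms = torch.eye(4).expand(B, J, 4, 4).clone()
pose = torch.randn(B, 72)

x_can = inverse_lbs(x_obs, weights, transforms)
x_can = x_can + DeformNet()(x_can, pose)   # pose-dependent non-rigid correction
sdf = SDFNet()(x_can)                      # signed distance queried during rendering
print(sdf.shape)  # torch.Size([2, 1024, 1])
```

Because the geometry is an SDF rather than raw density, the coarse body model can supply a reference signed distance for the predicted values to be regularized against, which is the "more direct regularization" the abstract refers to.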
Related papers
- Dynamic Appearance Modeling of Clothed 3D Human Avatars using a Single Camera [8.308263758475938]
We introduce a method for high-quality modeling of clothed 3D human avatars using a video of a person with dynamic movements.
For explicit modeling, a neural network learns to generate point-wise shape residuals and appearance features of a 3D body model.
For implicit modeling, an implicit network combines the appearance and 3D motion features to decode high-fidelity clothed 3D human avatars.
arXiv Detail & Related papers (2023-12-28T06:04:39Z)
- XAGen: 3D Expressive Human Avatars Generation [76.69560679209171]
XAGen is the first 3D generative model for human avatars capable of expressive control over body, face, and hands.
We propose a multi-part rendering technique that disentangles the synthesis of body, face, and hands.
Experiments show that XAGen surpasses state-of-the-art methods in terms of realism, diversity, and expressive control abilities.
arXiv Detail & Related papers (2023-11-22T18:30:42Z)
- DreamHuman: Animatable 3D Avatars from Text [41.30635787166307]
We present DreamHuman, a method to generate realistic animatable 3D human avatar models solely from textual descriptions.
Our 3D models have diverse appearance, clothing, skin tones and body shapes, and significantly outperform both generic text-to-3D approaches and previous text-based 3D avatar generators in visual fidelity.
arXiv Detail & Related papers (2023-06-15T17:58:21Z)
- AG3D: Learning to Generate 3D Avatars from 2D Image Collections [96.28021214088746]
We propose a new adversarial generative model of realistic 3D people from 2D images.
Our method captures shape and deformation of the body and loose clothing by adopting a holistic 3D generator.
We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance.
arXiv Detail & Related papers (2023-05-03T17:56:24Z)
- DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models [55.71306021041785]
We present DreamAvatar, a text-and-shape guided framework for generating high-quality 3D human avatars.
We leverage the SMPL model to provide shape and pose guidance for the generation.
We also jointly optimize the losses computed from the full body and from the zoomed-in 3D head to alleviate the common multi-face "Janus" problem (see the sketch after this list).
arXiv Detail & Related papers (2023-04-03T12:11:51Z)
- AvatarGen: A 3D Generative Model for Animatable Human Avatars [108.11137221845352]
AvatarGen is a method for unsupervised generation of 3D-aware clothed humans with various appearances and controllable geometries.
Our method can generate animatable 3D human avatars with high-quality appearance and geometry modeling.
It is competent for many applications, e.g., single-view reconstruction, re-animation, and text-guided synthesis/editing.
arXiv Detail & Related papers (2022-11-26T15:15:45Z)
- The Power of Points for Modeling Humans in Clothing [60.00557674969284]
Creating 3D human avatars with realistic clothing that moves naturally currently requires an artist.
We show that a 3D representation can capture varied topology at high resolution and can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
arXiv Detail & Related papers (2021-09-02T17:58:45Z)
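As referenced in the DreamAvatar entry above, here is a toy sketch of jointly optimizing a full-body loss and a zoomed-in head loss to discourage the multi-face "Janus" artifact. The renderer and guidance loss below are stand-ins, and the camera setup and loss weighting are assumptions for demonstration, not the paper's actual implementation.

```python
import torch

def render(avatar_params, camera):
    # Stand-in for a differentiable renderer; returns a fake image tensor.
    return torch.sigmoid(avatar_params.sum() + camera) * torch.ones(3, 64, 64)

def guidance_loss(image):
    # Stand-in for a text-guided (e.g., diffusion-based) guidance loss.
    return image.mean()

avatar_params = torch.randn(128, requires_grad=True)
optimizer = torch.optim.Adam([avatar_params], lr=1e-3)

body_camera = torch.tensor(0.0)  # full-body view (assumed encoding)
head_camera = torch.tensor(1.0)  # zoomed-in head view (assumed encoding)

for step in range(100):
    loss_body = guidance_loss(render(avatar_params, body_camera))
    loss_head = guidance_loss(render(avatar_params, head_camera))
    # Supervising the dedicated head view alongside the full body discourages
    # a second face ("Janus") from forming on the back of the head.
    loss = loss_body + 0.5 * loss_head
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```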
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.