DreamHuman: Animatable 3D Avatars from Text
- URL: http://arxiv.org/abs/2306.09329v1
- Date: Thu, 15 Jun 2023 17:58:21 GMT
- Title: DreamHuman: Animatable 3D Avatars from Text
- Authors: Nikos Kolotouros, Thiemo Alldieck, Andrei Zanfir, Eduard Gabriel
Bazavan, Mihai Fieraru, Cristian Sminchisescu
- Abstract summary: We present DreamHuman, a method to generate realistic animatable 3D human avatar models solely from textual descriptions.
Our 3D models have diverse appearance, clothing, skin tones and body shapes, and significantly outperform both generic text-to-3D approaches and previous text-based 3D avatar generators in visual fidelity.
- Score: 41.30635787166307
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present DreamHuman, a method to generate realistic animatable 3D human
avatar models solely from textual descriptions. Recent text-to-3D methods have
made considerable strides in generation, but are still lacking in important
aspects. Control and often spatial resolution remain limited, existing methods
produce fixed rather than animated 3D human models, and anthropometric
consistency for complex structures like people remains a challenge. DreamHuman
connects large text-to-image synthesis models, neural radiance fields, and
statistical human body models in a novel modeling and optimization framework.
This makes it possible to generate dynamic 3D human avatars with high-quality
textures and learned, instance-specific, surface deformations. We demonstrate
that our method is capable of generating a wide variety of animatable, realistic
3D human models from text. Our 3D models have diverse appearance, clothing,
skin tones and body shapes, and significantly outperform both generic
text-to-3D approaches and previous text-based 3D avatar generators in visual
fidelity. For more results and animations please check our website at
https://dream-human.github.io.
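
The abstract names the ingredients (a large text-to-image model, a neural radiance field, and a statistical human body model) but not the wiring. Below is a minimal, hypothetical Python sketch of how such an optimization loop could fit together, presumably with a DreamFusion-style score-distillation objective. Every module here is a toy stand-in (TinyNeRF, warp_to_canonical, score_distillation_loss are illustrative names), not the authors' implementation:

```python
# Hypothetical sketch of a DreamHuman-style loop: a pose-conditioned NeRF
# optimized against a text prompt. All modules are toy stand-ins so the
# sketch runs end to end; none of this is the paper's actual code.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Toy stand-in for the NeRF: density + RGB per 3D point."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),              # (density, r, g, b)
        )

    def forward(self, pts):                    # pts: (N, 3) canonical-space points
        out = self.net(pts)
        return out[:, :1], torch.sigmoid(out[:, 1:])

def warp_to_canonical(pts, pose):
    """Stand-in for statistical-body-model articulation (e.g. a SMPL- or
    GHUM-style model): here just a rigid rotation, an assumption."""
    return pts @ pose.T                        # pose: (3, 3) rotation matrix

def render(nerf, pose, n_pts=1024):
    """Crude 'rendering': density-weighted average color of random samples.
    Real code would do volumetric ray marching."""
    pts = torch.rand(n_pts, 3) * 2 - 1
    density, rgb = nerf(warp_to_canonical(pts, pose))
    w = torch.softmax(density.squeeze(-1), dim=0)
    return (w[:, None] * rgb).sum(0)           # (3,) pseudo-image statistic

def score_distillation_loss(rendered, prompt_embedding):
    """Placeholder for score distillation against a frozen text-to-image
    diffusion model; a dummy alignment loss keeps the sketch runnable."""
    return ((rendered - prompt_embedding) ** 2).mean()

nerf = TinyNeRF()
opt = torch.optim.Adam(nerf.parameters(), lr=1e-3)
prompt_embedding = torch.tensor([0.8, 0.6, 0.4])   # pretend text conditioning

for step in range(100):
    # Random orthonormal matrix standing in for a sampled body pose / viewpoint.
    pose = torch.linalg.qr(torch.randn(3, 3)).Q
    loss = score_distillation_loss(render(nerf, pose), prompt_embedding)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the body model and pose are resampled inside the loop, the same optimized NeRF can later be reposed, which is what makes the resulting avatar animatable rather than a fixed static model.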
Related papers
- A Survey on 3D Human Avatar Modeling -- From Reconstruction to Generation [20.32107267981782]
3D human modeling, lying at the core of many real-world applications, has attracted significant attention.
This survey aims to provide a comprehensive overview of emerging techniques for 3D human avatar modeling.
arXiv Detail & Related papers (2024-06-06T16:58:00Z)
- AG3D: Learning to Generate 3D Avatars from 2D Image Collections [96.28021214088746]
We propose a new adversarial generative model of realistic 3D people from 2D images.
Our method captures shape and deformation of the body and loose clothing by adopting a holistic 3D generator.
We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance.
arXiv Detail & Related papers (2023-05-03T17:56:24Z)
- AvatarGen: A 3D Generative Model for Animatable Human Avatars [108.11137221845352]
AvatarGen is an unsupervised method for generating 3D-aware clothed humans with various appearances and controllable geometries.
Our method can generate animatable 3D human avatars with high-quality appearance and geometry modeling.
It is well suited to many applications, e.g., single-view reconstruction, re-animation, and text-guided synthesis/editing.
arXiv Detail & Related papers (2022-11-26T15:15:45Z)
- AvatarGen: A 3D Generative Model for Animatable Human Avatars [108.11137221845352]
AvatarGen is the first method that enables not only non-rigid human generation with diverse appearance but also full control over poses and viewpoints.
To model non-rigid dynamics, it introduces a deformation network that learns pose-dependent deformations in the canonical space (a minimal sketch of this idea follows the list below).
Our method can generate animatable human avatars with high-quality appearance and geometry modeling, significantly outperforming previous 3D GANs.
arXiv Detail & Related papers (2022-08-01T01:27:02Z)
- 3D-Aware Semantic-Guided Generative Model for Human Synthesis [67.86621343494998]
This paper proposes a 3D-aware Semantic-Guided Generative Model (3D-SGAN) for human image synthesis.
Our experiments on the DeepFashion dataset show that 3D-SGAN significantly outperforms the most recent baselines.
arXiv Detail & Related papers (2021-12-02T17:10:53Z)
- S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling [103.65625425020129]
We represent the pedestrian's shape, pose and skinning weights as neural implicit functions that are directly learned from data.
We demonstrate the effectiveness of our approach on various datasets and show that our reconstructions outperform existing state-of-the-art methods.
arXiv Detail & Related papers (2021-01-17T02:16:56Z)
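
The pose-dependent deformation network mentioned in the AvatarGen summary above is a recurring pattern in this literature. Here is a minimal, hypothetical PyTorch sketch of such a network; the architecture, the 72-dimensional SMPL-style pose vector, and all names are illustrative assumptions, not the paper's actual design:

```python
# Hypothetical sketch of a pose-conditioned deformation network: an MLP
# that predicts a per-point offset in canonical space given the body pose.
import torch
import torch.nn as nn

class DeformationNet(nn.Module):
    def __init__(self, pose_dim=72, hidden=128):
        # pose_dim=72 assumes 24 joints x 3 axis-angle params (SMPL-style).
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),               # xyz offset per point
        )

    def forward(self, pts, pose):
        # pts: (N, 3) canonical points; pose: (pose_dim,) body pose vector
        pose = pose.expand(pts.shape[0], -1)    # broadcast pose to every point
        offset = self.net(torch.cat([pts, pose], dim=-1))
        return pts + offset                     # deformed canonical points

pts = torch.rand(4096, 3)
pose = torch.zeros(72)                          # rest pose
deformed = DeformationNet()(pts, pose)          # (4096, 3)
```

Predicting a residual offset (rather than absolute positions) keeps the rest pose as the default and lets the network model only the pose-dependent corrections, such as clothing deformation.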
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.