En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D
Synthetic Data
- URL: http://arxiv.org/abs/2401.01173v1
- Date: Tue, 2 Jan 2024 12:06:31 GMT
- Title: En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D
Synthetic Data
- Authors: Yifang Men, Biwen Lei, Yuan Yao, Miaomiao Cui, Zhouhui Lian, Xuansong
Xie
- Abstract summary: We present En3D, an enhanced generative scheme for sculpting high-quality 3D human avatars.
Unlike previous works that rely on scarce 3D datasets or limited 2D collections with imbalanced viewing angles and imprecise pose priors, our approach aims to develop a zero-shot 3D generative scheme capable of producing realistic and diverse 3D humans.
- Score: 36.51674664590734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present En3D, an enhanced generative scheme for sculpting high-quality 3D
human avatars. Unlike previous works that rely on scarce 3D datasets or limited
2D collections with imbalanced viewing angles and imprecise pose priors, our
approach aims to develop a zero-shot 3D generative scheme capable of producing
visually realistic, geometrically accurate and content-wise diverse 3D humans
without relying on pre-existing 3D or 2D assets. To address this challenge, we
introduce a meticulously crafted workflow that implements accurate physical
modeling to learn the enhanced 3D generative model from synthetic 2D data.
During inference, we integrate optimization modules to bridge the gap between
realistic appearances and coarse 3D shapes. Specifically, En3D comprises three
modules: a 3D generator that accurately models generalizable 3D humans with
realistic appearance from synthesized balanced, diverse, and structured human
images; a geometry sculptor that enhances shape quality using multi-view normal
constraints for intricate human anatomy; and a texturing module that
disentangles explicit texture maps with fidelity and editability, leveraging
semantical UV partitioning and a differentiable rasterizer. Experimental
results show that our approach significantly outperforms prior works in terms
of image quality, geometry accuracy and content diversity. We also showcase the
applicability of our generated avatars for animation and editing, as well as
the scalability of our approach for content-style free adaptation.
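The page carries no code, but the abstract's geometry-sculptor step, refining a coarse shape against multi-view normal constraints, is concrete enough to illustrate. Below is a minimal PyTorch sketch; the class name Generator3D, the tensor shapes, and the cosine-based normal loss are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator3D(nn.Module):
    """Hypothetical stand-in for En3D's 3D generator: maps a latent code
    to coarse 3D features (flattened here for brevity)."""
    def __init__(self, latent_dim=128, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, z):
        return self.net(z)

def multiview_normal_loss(pred_normals, ref_normals):
    """Geometry-sculptor idea in miniature: penalize disagreement between
    normals rendered from the coarse shape and reference normals estimated
    for several viewpoints. Both tensors: (views, H, W, 3), unit length."""
    cos = (pred_normals * ref_normals).sum(dim=-1).clamp(-1.0, 1.0)
    return (1.0 - cos).mean()  # 0 when all normals agree exactly

if __name__ == "__main__":
    g = Generator3D()
    feats = g(torch.randn(4, 128))  # coarse 3D features, one per latent
    # Random unit normals stand in for rendered / estimated normal maps.
    pred = F.normalize(torch.randn(8, 64, 64, 3), dim=-1)
    ref = F.normalize(torch.randn(8, 64, 64, 3), dim=-1)
    print(feats.shape, multiview_normal_loss(pred, ref).item())
```

In the actual method the predicted normals would be rendered from the generated shape at several camera poses and compared against normals inferred for those views; the random tensors above only exercise the loss.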
Related papers
- Enhancing Single Image to 3D Generation using Gaussian Splatting and Hybrid Diffusion Priors [17.544733016978928]
3D object generation from a single image involves estimating the full 3D geometry and texture of unseen views from an unposed RGB image captured in the wild.
Recent advancements in 3D object generation have introduced techniques that reconstruct an object's 3D shape and texture.
We propose bridging the gap between 2D and 3D diffusion models to address this limitation.
arXiv Detail & Related papers (2024-10-12T10:14:11Z)
- Guide3D: Create 3D Avatars from Text and Image Guidance [55.71306021041785]
Guide3D is a text-and-image-guided generative model for 3D avatar generation based on diffusion models.
Our framework produces topologically and structurally correct geometry and high-resolution textures.
arXiv Detail & Related papers (2023-08-18T17:55:47Z)
- AG3D: Learning to Generate 3D Avatars from 2D Image Collections [96.28021214088746]
We propose a new adversarial generative model of realistic 3D people from 2D images.
Our method captures shape and deformation of the body and loose clothing by adopting a holistic 3D generator.
We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance.
arXiv Detail & Related papers (2023-05-03T17:56:24Z)
- CC3D: Layout-Conditioned Generation of Compositional 3D Scenes [49.281006972028194]
We introduce CC3D, a conditional generative model that synthesizes complex 3D scenes conditioned on 2D semantic scene layouts.
Our evaluations on synthetic 3D-FRONT and real-world KITTI-360 datasets demonstrate that our model generates scenes of improved visual and geometric quality.
arXiv Detail & Related papers (2023-03-21T17:59:02Z)
- Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars [36.4402388864691]
3D-aware generative adversarial networks (GANs) synthesize high-fidelity and multi-view-consistent facial images using only collections of single-view 2D imagery.
Recent efforts incorporate 3D Morphable Face Model (3DMM) to describe deformation in generative radiance fields either explicitly or implicitly.
We propose a novel 3D GAN framework for unsupervised learning of generative, high-quality and 3D-consistent facial avatars from unstructured 2D images (a sketch of the underlying texture-sampling idea appears after this list).
arXiv Detail & Related papers (2022-11-21T06:40:46Z)
- EVA3D: Compositional 3D Human Generation from 2D Image Collections [27.70991135165909]
EVA3D is an unconditional 3D human generative model learned from 2D image collections only.
It can sample 3D humans with detailed geometry and render high-quality images (up to 512x256) without bells and whistles.
It achieves state-of-the-art 3D human generation performance regarding both geometry and texture quality.
arXiv Detail & Related papers (2022-10-10T17:59:31Z)
- GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images [72.15855070133425]
We introduce GET3D, a Generative model that directly generates Explicit Textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures.
GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, motorbikes and human characters to buildings.
arXiv Detail & Related papers (2022-09-22T17:16:19Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- 3D-Aware Semantic-Guided Generative Model for Human Synthesis [67.86621343494998]
This paper proposes a 3D-aware Semantic-Guided Generative Model (3D-SGAN) for human image synthesis.
Our experiments on the DeepFashion dataset show that 3D-SGAN significantly outperforms the most recent baselines.
arXiv Detail & Related papers (2021-12-02T17:10:53Z)
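A mechanism shared by several entries above, En3D's texturing module with its differentiable rasterizer and Next3D's neural texture rasterization, is differentiable texture lookup: a UV-mapped template mesh is rasterized to per-pixel UV coordinates, and a learned texture is sampled at those coordinates so that image-space losses can update the texture map. A minimal PyTorch sketch follows, in which the function name, shapes, and the upstream rasterization step are assumptions rather than details taken from either paper.

```python
import torch
import torch.nn.functional as F

def sample_texture(texture, uv):
    """Differentiably sample a learned texture at per-pixel UV coordinates.

    texture: (1, C, H_tex, W_tex) explicit RGB or neural feature texture.
    uv:      (1, H, W, 2) per-pixel UVs in [0, 1], e.g. from rasterizing
             a UV-mapped template mesh.
    Returns: (1, C, H, W); gradients flow back into `texture`.
    """
    grid = uv * 2.0 - 1.0  # grid_sample expects coordinates in [-1, 1]
    return F.grid_sample(texture, grid, mode="bilinear", align_corners=False)

if __name__ == "__main__":
    tex = torch.rand(1, 3, 256, 256, requires_grad=True)  # learnable texture
    uv = torch.rand(1, 128, 128, 2)  # stand-in for rasterizer output
    rgb = sample_texture(tex, uv)
    rgb.mean().backward()  # image-space loss reaches the texture map
    print(rgb.shape, tex.grad is not None)
```

In practice the UV grid comes from a differentiable rasterizer rendering a posed template mesh, not random values; only the sampling step that makes the texture trainable and editable is shown here.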