DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior
- URL: http://arxiv.org/abs/2310.16818v2
- Date: Thu, 26 Oct 2023 06:54:22 GMT
- Title: DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior
- Authors: Jingxiang Sun and Bo Zhang and Ruizhi Shao and Lizhen Wang and Wen Liu and Zhenda Xie and Yebin Liu
- Abstract summary: We present DreamCraft3D, a hierarchical 3D content generation method that produces high-fidelity and coherent 3D objects.
We tackle the problem by leveraging a 2D reference image to guide the stages of geometry sculpting and texture boosting.
With tailored 3D priors throughout the hierarchical generation, DreamCraft3D generates coherent 3D objects with photorealistic renderings.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present DreamCraft3D, a hierarchical 3D content generation method that
produces high-fidelity and coherent 3D objects. We tackle the problem by
leveraging a 2D reference image to guide the stages of geometry sculpting and
texture boosting. A central focus of this work is to address the consistency
issue that existing works encounter. To sculpt geometries that render
coherently, we perform score distillation sampling via a view-dependent
diffusion model. This 3D prior, alongside several training strategies,
prioritizes the geometry consistency but compromises the texture fidelity. We
further propose Bootstrapped Score Distillation to specifically boost the
texture. We train a personalized diffusion model, DreamBooth, on the augmented
renderings of the scene, imbuing it with 3D knowledge of the scene being
optimized. The score distillation from this 3D-aware diffusion prior provides
view-consistent guidance for the scene. Notably, through an alternating
optimization of the diffusion prior and 3D scene representation, we achieve
mutually reinforcing improvements: the optimized 3D scene aids in training the
scene-specific diffusion model, which offers increasingly view-consistent
guidance for 3D optimization. The optimization is thus bootstrapped and leads
to substantial texture boosting. With tailored 3D priors throughout the
hierarchical generation, DreamCraft3D generates coherent 3D objects with
photorealistic renderings, advancing the state-of-the-art in 3D content
generation. Code available at https://github.com/deepseek-ai/DreamCraft3D.
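
For readers who want a concrete picture of the two guidance mechanisms the abstract describes, the following notes sketch them under explicit assumptions. Score distillation sampling (SDS) nudges renderings of the 3D scene toward images the diffusion prior considers likely; its gradient takes the standard form introduced by DreamFusion:

    \nabla_\theta \mathcal{L}_{\mathrm{SDS}}
      = \mathbb{E}_{t,\epsilon}\!\left[ w(t)\,\big(\epsilon_\phi(x_t;\, y, t) - \epsilon\big)\,\frac{\partial x}{\partial \theta} \right]

Here x is a rendering of the scene with parameters theta, x_t is its noised version at timestep t, epsilon_phi is the diffusion model's noise prediction conditioned on prompt y, and w(t) is a weighting schedule.

The Python sketch below illustrates only the alternating structure of Bootstrapped Score Distillation. ToyScene, ToyDiffusionPrior, sds_step, and personalize_prior are hypothetical stand-ins invented for this sketch, not the authors' implementation (see the linked repository for that).

    # Minimal sketch of the alternating Bootstrapped Score Distillation loop.
    # All classes and schedules here are toy stand-ins, not the paper's code.
    import torch
    import torch.nn as nn

    class ToyScene(nn.Module):
        """Stand-in for the 3D scene representation (e.g., NeRF/DMTet parameters)."""
        def __init__(self, res=64):
            super().__init__()
            self.texture = nn.Parameter(torch.rand(1, 3, res, res))

        def render(self, camera):
            # A real renderer would raymarch or rasterize the scene from `camera`;
            # here we simply return a learnable image.
            return self.texture

    class ToyDiffusionPrior(nn.Module):
        """Stand-in for a (personalized) 2D diffusion model's noise predictor."""
        def __init__(self):
            super().__init__()
            self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)

        def forward(self, x_t, t):
            # Real models also condition on the timestep and a text prompt.
            return self.net(x_t)

    def sds_step(scene, prior, opt, camera=None):
        """One score-distillation update of the scene parameters."""
        x = scene.render(camera)
        t = torch.randint(20, 980, (1,))
        eps = torch.randn_like(x)
        a = 1.0 - t.item() / 1000.0                  # toy noise schedule
        x_t = a ** 0.5 * x + (1.0 - a) ** 0.5 * eps  # forward-diffuse the render
        with torch.no_grad():
            eps_pred = prior(x_t, t)                 # frozen prior scores the render
        # Surrogate loss whose gradient w.r.t. the scene matches the SDS
        # formula above with w(t) = 1: (eps_pred - eps) * dx/dtheta.
        loss = ((eps_pred - eps) * x).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    def personalize_prior(prior, renders):
        """Stand-in for DreamBooth-style fine-tuning on augmented renderings."""
        opt = torch.optim.Adam(prior.parameters(), lr=1e-4)
        for x in renders:
            eps = torch.randn_like(x)
            pred = prior(0.9 * x + 0.1 * eps, torch.tensor([500]))
            loss = ((pred - eps) ** 2).mean()        # simple denoising objective
            opt.zero_grad()
            loss.backward()
            opt.step()

    scene, prior = ToyScene(), ToyDiffusionPrior()
    scene_opt = torch.optim.Adam(scene.parameters(), lr=1e-2)
    for _round in range(3):                          # alternating optimization
        for _ in range(10):                          # (a) distill prior into scene
            sds_step(scene, prior, scene_opt)
        renders = [scene.render(None).detach() for _ in range(4)]
        personalize_prior(prior, renders)            # (b) refresh the 3D-aware prior

The essential point is step (b): each round of DreamBooth-style fine-tuning on fresh renderings makes the prior more 3D-aware and view-consistent, which in turn yields better guidance for the next round of scene optimization.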
Related papers
- ScalingGaussian: Enhancing 3D Content Creation with Generative Gaussian Splatting [30.99112626706754]
  The creation of high-quality 3D assets is paramount for applications in digital heritage, entertainment, and robotics.
  Traditionally, this process necessitates skilled professionals and specialized software for modeling.
  We introduce a novel 3D content creation framework that generates 3D textures efficiently.
  arXiv Detail & Related papers (2024-07-26T18:26:01Z)
- SceneWiz3D: Towards Text-guided 3D Scene Composition [134.71933134180782]
  Existing approaches either leverage large text-to-image models to optimize a 3D representation or train 3D generators on object-centric datasets.
  We introduce SceneWiz3D, a novel approach to synthesize high-fidelity 3D scenes from text.
  arXiv Detail & Related papers (2023-12-13T18:59:30Z)
- Sherpa3D: Boosting High-Fidelity Text-to-3D Generation via Coarse 3D Prior [52.44678180286886]
  Distillation from 2D diffusion models achieves excellent generalization and rich details without any 3D data.
  We propose Sherpa3D, a new text-to-3D framework that achieves high fidelity, generalizability, and geometric consistency simultaneously.
  arXiv Detail & Related papers (2023-12-11T18:59:18Z)
- Text-to-3D using Gaussian Splatting [18.163413810199234]
  This paper proposes GSGEN, a novel method that adopts Gaussian Splatting, a recent state-of-the-art representation, for text-to-3D generation.
  GSGEN aims to generate high-quality 3D objects and to address existing shortcomings by exploiting the explicit nature of Gaussian Splatting.
  Our approach can generate 3D assets with delicate details and accurate geometry.
  arXiv Detail & Related papers (2023-09-28T16:44:31Z)
- Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors [104.79392615848109]
  We present Magic123, a two-stage coarse-to-fine approach for generating high-quality, textured 3D meshes from a single unposed image.
  In the first stage, we optimize a neural radiance field to produce a coarse geometry.
  In the second stage, we adopt a memory-efficient differentiable mesh representation to yield a high-resolution mesh with a visually appealing texture.
  arXiv Detail & Related papers (2023-06-30T17:59:08Z)
- ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image Collections [71.46546520120162]
  Estimating 3D articulated shapes like animal bodies from monocular images is inherently challenging.
  We propose ARTIC3D, a self-supervised framework to reconstruct per-instance 3D shapes from a sparse in-the-wild image collection.
  We produce realistic animations by fine-tuning the rendered shape and texture under rigid part transformations.
  arXiv Detail & Related papers (2023-06-07T17:47:50Z)
- Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior [36.40582157854088]
  In this work, we investigate the problem of creating high-fidelity 3D content from only a single image.
  We leverage prior knowledge from a well-trained 2D diffusion model to act as 3D-aware supervision for 3D creation.
  Our method presents the first attempt to achieve high-quality 3D creation from a single image for general objects, and it enables various applications such as text-to-3D creation and texture editing.
  arXiv Detail & Related papers (2023-03-24T17:54:22Z)
- GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images [72.15855070133425]
  We introduce GET3D, a Generative model that directly generates Explicit Textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures.
  GET3D generates high-quality 3D textured meshes, ranging from cars, chairs, animals, and motorbikes to human characters and buildings.
  arXiv Detail & Related papers (2022-09-22T17:16:19Z)