HD-Fusion: Detailed Text-to-3D Generation Leveraging Multiple Noise
Estimation
- URL: http://arxiv.org/abs/2307.16183v1
- Date: Sun, 30 Jul 2023 09:46:22 GMT
- Title: HD-Fusion: Detailed Text-to-3D Generation Leveraging Multiple Noise
Estimation
- Authors: Jinbo Wu and Xiaobo Gao and Xing Liu and Zhengyang Shen and Chen Zhao
and Haocheng Feng and Jingtuo Liu and Errui Ding
- Abstract summary: We propose a novel approach that combines multiple noise estimation processes with a pretrained 2D diffusion prior.
Results show that the proposed approach can generate high-quality details compared to the baselines.
- Score: 43.83459204345063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study Text-to-3D content generation leveraging 2D diffusion
priors to enhance the quality and detail of the generated 3D models. Recent
progress (Magic3D) in text-to-3D has shown that employing high-resolution
(e.g., 512 x 512) renderings can lead to the production of high-quality 3D
models using latent diffusion priors. To enable rendering at even higher
resolutions, which has the potential to further augment the quality and detail
of the models, we propose a novel approach that combines multiple noise
estimation processes with a pretrained 2D diffusion prior. Distinct from the
Bar-Tal et al.'s study, which fuses multiple denoised results to generate images
from text, our approach integrates the computation of score distillation
losses such as the SDS loss and the VSD loss, which are essential techniques for 3D
content generation with 2D diffusion priors. We experimentally evaluated the
proposed approach. The results show that it generates higher-quality details than the
baselines.
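As a concrete illustration of this idea, the sketch below (an assumption-laden rendition, not the authors' implementation) predicts noise on overlapping windows of a high-resolution rendered latent with a 2D diffusion prior, averages the per-window estimates into a single full-resolution noise estimate, and feeds it into an SDS-style surrogate loss. The `noise_predictor`, window size, stride, and tensor shapes are all placeholders.

```python
# Minimal sketch: fuse multi-window noise estimates from a 2D diffusion prior
# into an SDS-style loss. The conv layer stands in for a real pretrained UNet
# such as Stable Diffusion's eps_phi(x_t, t, y); it is NOT the actual prior.
import torch
import torch.nn as nn

noise_predictor = nn.Conv2d(4, 4, kernel_size=3, padding=1)  # placeholder prior


def fused_noise_estimate(x_t, window=64, stride=32):
    """Estimate noise per overlapping window and average the overlaps."""
    _, _, h, w = x_t.shape
    eps_sum = torch.zeros_like(x_t)
    count = torch.zeros(1, 1, h, w, device=x_t.device)
    for top in range(0, h - window + 1, stride):
        for left in range(0, w - window + 1, stride):
            patch = x_t[:, :, top:top + window, left:left + window]
            eps_sum[:, :, top:top + window, left:left + window] += noise_predictor(patch)
            count[:, :, top:top + window, left:left + window] += 1.0
    return eps_sum / count


def sds_style_loss(rendered_latent, alpha_bar_t):
    """SDS-style surrogate loss built on the fused multi-window noise estimate.

    Its gradient w.r.t. the rendering equals (eps_hat - eps), matching the usual
    SDS gradient up to the timestep weighting w(t), which is omitted here.
    """
    eps = torch.randn_like(rendered_latent)
    x_t = alpha_bar_t.sqrt() * rendered_latent + (1.0 - alpha_bar_t).sqrt() * eps
    with torch.no_grad():  # the diffusion prior stays frozen
        eps_hat = fused_noise_estimate(x_t)
    return ((eps_hat - eps) * rendered_latent).sum()


# Usage: in a real pipeline `rendered_latent` would come from rendering the 3D
# representation (e.g., a NeRF) and encoding it; a random tensor is used here.
rendered_latent = torch.randn(1, 4, 128, 128, requires_grad=True)
loss = sds_style_loss(rendered_latent, alpha_bar_t=torch.tensor(0.5))
loss.backward()
print(rendered_latent.grad.shape)  # gradient flows back toward the 3D parameters
```

Averaging the per-window predictions in their overlapping regions is what keeps neighbouring windows consistent when the rendering resolution exceeds what the pretrained prior natively handles.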
Related papers
- 3DTopia: Large Text-to-3D Generation Model with Hybrid Diffusion Priors [85.11117452560882]
We present a two-stage text-to-3D generation system, namely 3DTopia, which generates high-quality general 3D assets within 5 minutes using hybrid diffusion priors.
The first stage samples from a 3D diffusion prior directly learned from 3D data. Specifically, it is powered by a text-conditioned tri-plane latent diffusion model, which quickly generates coarse 3D samples for fast prototyping.
The second stage utilizes 2D diffusion priors to further refine the texture of coarse 3D models from the first stage. The refinement consists of both latent and pixel space optimization for high-quality texture generation.
arXiv Detail & Related papers (2024-03-04T17:26:28Z)
- CAD: Photorealistic 3D Generation via Adversarial Distillation [28.07049413820128]
We propose a novel learning paradigm for 3D synthesis that utilizes pre-trained diffusion models.
Our method unlocks the generation of high-fidelity and photorealistic 3D content conditioned on a single image and prompt.
arXiv Detail & Related papers (2023-12-11T18:59:58Z)
- Learn to Optimize Denoising Scores for 3D Generation: A Unified and Improved Diffusion Prior on NeRF and 3D Gaussian Splatting [60.393072253444934]
We propose a unified framework aimed at enhancing the diffusion priors for 3D generation tasks.
We identify a divergence between the diffusion priors and the training procedures of diffusion models that substantially impairs the quality of 3D generation.
arXiv Detail & Related papers (2023-12-08T03:55:34Z)
- Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D priors [16.93758384693786]
Bidirectional Diffusion(BiDiff) is a unified framework that incorporates both a 3D and a 2D diffusion process.
Our model achieves high-quality, diverse, and scalable 3D generation.
arXiv Detail & Related papers (2023-12-07T10:00:04Z)
- Sparse3D: Distilling Multiview-Consistent Diffusion for Object Reconstruction from Sparse Views [47.215089338101066]
We present Sparse3D, a novel 3D reconstruction method tailored for sparse view inputs.
Our approach distills robust priors from a multiview-consistent diffusion model to refine a neural radiance field.
By tapping into 2D priors from powerful image diffusion models, our integrated model consistently delivers high-quality results.
arXiv Detail & Related papers (2023-08-27T11:52:00Z)
- IT3D: Improved Text-to-3D Generation with Explicit View Synthesis [71.68595192524843]
This study presents a novel strategy that leverages explicitly synthesized multi-view images to address these issues.
Our approach uses image-to-image pipelines, empowered by LDMs, to generate posed high-quality images.
For the incorporated discriminator, the synthesized multi-view images are considered real data, while the renderings of the optimized 3D models function as fake data.
arXiv Detail & Related papers (2023-08-22T14:39:17Z)
- Guide3D: Create 3D Avatars from Text and Image Guidance [55.71306021041785]
Guide3D is a text-and-image-guided generative model for 3D avatar generation based on diffusion models.
Our framework produces topologically and structurally correct geometry and high-resolution textures.
arXiv Detail & Related papers (2023-08-18T17:55:47Z)
- Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation [39.50894560861625]
3DFuse is a novel framework that incorporates 3D awareness into pretrained 2D diffusion models.
We introduce a training strategy that enables the 2D diffusion model to learn to handle the errors and sparsity within the coarse 3D structure for robust generation.
arXiv Detail & Related papers (2023-03-14T14:24:31Z)
- DreamFusion: Text-to-3D using 2D Diffusion [52.52529213936283]
Recent breakthroughs in text-to-image synthesis have been driven by diffusion models trained on billions of image-text pairs.
In this work, we circumvent the need for large-scale labeled 3D data by using a pretrained 2D text-to-image diffusion model to perform text-to-3D synthesis.
Our approach requires no 3D training data and no modifications to the image diffusion model, demonstrating the effectiveness of pretrained image diffusion models as priors.
arXiv Detail & Related papers (2022-09-29T17:50:40Z)
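For context on the score distillation losses mentioned in the abstract above, DreamFusion's SDS gradient is commonly written as follows (standard formulation from the literature, not quoted from this page; $g(\theta)$ is the differentiable renderer, $\hat{\boldsymbol{\epsilon}}_\phi$ the frozen noise predictor, $y$ the text prompt):

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}\bigl(\phi, \mathbf{x} = g(\theta)\bigr)
  = \mathbb{E}_{t,\boldsymbol{\epsilon}}\!\left[
      w(t)\,\bigl(\hat{\boldsymbol{\epsilon}}_\phi(\mathbf{x}_t; y, t) - \boldsymbol{\epsilon}\bigr)
      \frac{\partial \mathbf{x}}{\partial \theta}
    \right],
\qquad
\mathbf{x}_t = \sqrt{\bar{\alpha}_t}\,\mathbf{x} + \sqrt{1 - \bar{\alpha}_t}\,\boldsymbol{\epsilon}.
```

Roughly speaking, the VSD loss (from ProlificDreamer) keeps the same template but replaces $\boldsymbol{\epsilon}$ with the noise prediction of an additionally fine-tuned, camera-conditioned score network.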