DiffRF: Rendering-Guided 3D Radiance Field Diffusion
- URL: http://arxiv.org/abs/2212.01206v2
- Date: Mon, 27 Mar 2023 14:51:07 GMT
- Title: DiffRF: Rendering-Guided 3D Radiance Field Diffusion
- Authors: Norman Müller, Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Matthias Nießner
- Abstract summary: We introduce DiffRF, a novel approach for 3D radiance field synthesis based on denoising diffusion probabilistic models.
In contrast to 2D-diffusion models, our model learns multi-view consistent priors, enabling free-view synthesis and accurate shape generation.
- Score: 18.20324411024166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce DiffRF, a novel approach for 3D radiance field synthesis based
on denoising diffusion probabilistic models. While existing diffusion-based
methods operate on images, latent codes, or point cloud data, we are the first
to directly generate volumetric radiance fields. To this end, we propose a 3D
denoising model which directly operates on an explicit voxel grid
representation. However, as radiance fields generated from a set of posed
images can be ambiguous and contain artifacts, obtaining ground truth radiance
field samples is non-trivial. We address this challenge by pairing the
denoising formulation with a rendering loss, enabling our model to learn a
deviated prior that favours good image quality instead of trying to replicate
fitting errors like floating artifacts. In contrast to 2D-diffusion models, our
model learns multi-view consistent priors, enabling free-view synthesis and
accurate shape generation. Compared to 3D GANs, our diffusion-based approach
naturally enables conditional generation such as masked completion or
single-view 3D synthesis at inference time.
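
The core training idea described in the abstract is to pair a standard DDPM noise-prediction loss on an explicit voxel radiance field with a rendering loss on the denoised field, so that the learned prior favours image quality rather than replicating per-scene fitting artifacts. Below is a minimal PyTorch-style sketch of one such training step; the `denoiser` and `render` callables, the noise-schedule tensor `alphas_cumprod`, and the weighting `lambda_rgb` are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Minimal sketch of a rendering-guided diffusion training step on voxel radiance fields.
# All names (denoiser, render, alphas_cumprod, lambda_rgb) are illustrative assumptions.
import torch
import torch.nn.functional as F

def training_step(denoiser, render, fields, images, poses, alphas_cumprod, lambda_rgb=0.1):
    """One DDPM step on explicit voxel grids, plus a rendering loss.

    fields: (B, C, D, H, W) voxel grids (density + color features)
    images: (B, 3, h, w) posed ground-truth renderings; poses: camera parameters
    """
    B = fields.shape[0]
    alphas_cumprod = alphas_cumprod.to(fields.device)
    T = alphas_cumprod.shape[0]
    t = torch.randint(0, T, (B,), device=fields.device)

    # Forward diffusion: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps
    abar = alphas_cumprod[t].view(B, 1, 1, 1, 1)
    eps = torch.randn_like(fields)
    x_t = abar.sqrt() * fields + (1.0 - abar).sqrt() * eps

    # 3D denoiser predicts the noise added to the voxel grid
    eps_pred = denoiser(x_t, t)
    loss_ddpm = F.mse_loss(eps_pred, eps)

    # Estimate the clean field x_0 from the prediction and render it from the
    # ground-truth poses; the rendering loss steers the prior toward good image
    # quality instead of replicating fitting errors such as floating artifacts.
    x0_pred = (x_t - (1.0 - abar).sqrt() * eps_pred) / abar.sqrt()
    rendered = render(x0_pred, poses)  # (B, 3, h, w)
    loss_rgb = F.mse_loss(rendered, images)

    return loss_ddpm + lambda_rgb * loss_rgb
```

In this sketch the rendering loss is applied to an estimate of the clean field x_0 rather than to the noisy x_t, since only a denoised grid corresponds to a renderable radiance field.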
Related papers
- Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation [51.346733271166926]
Mesh2NeRF is an approach to derive ground-truth radiance fields from textured meshes for 3D generation tasks.
We validate the effectiveness of Mesh2NeRF across various tasks.
arXiv Detail & Related papers (2024-03-28T11:22:53Z)
- GD^2-NeRF: Generative Detail Compensation via GAN and Diffusion for One-shot Generalizable Neural Radiance Fields [41.63632669921749]
We propose a Generative Detail compensation framework via GAN and Diffusion.
The framework requires no inference-time finetuning while producing vivid, plausible details.
Experiments on both synthetic and real-world datasets show that GD$^2$-NeRF noticeably improves details without per-scene finetuning.
arXiv Detail & Related papers (2024-01-01T00:08:39Z)
- CAD: Photorealistic 3D Generation via Adversarial Distillation [28.07049413820128]
We propose a novel learning paradigm for 3D synthesis that utilizes pre-trained diffusion models.
Our method unlocks the generation of high-fidelity and photorealistic 3D content conditioned on a single image and prompt.
arXiv Detail & Related papers (2023-12-11T18:59:58Z)
- Diffusion with Forward Models: Solving Stochastic Inverse Problems Without Direct Supervision [76.32860119056964]
We propose a novel class of denoising diffusion probabilistic models that learn to sample from distributions of signals that are never directly observed.
We demonstrate the effectiveness of our method on three challenging computer vision tasks.
arXiv Detail & Related papers (2023-06-20T17:53:00Z)
- Relightify: Relightable 3D Faces from a Single Image via Diffusion Models [86.3927548091627]
We present the first approach to use diffusion models as a prior for highly accurate 3D facial BRDF reconstruction from a single image.
In contrast to existing methods, we directly acquire the observed texture from the input image, resulting in a more faithful and consistent estimation.
arXiv Detail & Related papers (2023-05-10T11:57:49Z)
- Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction [77.69363640021503]
3D-aware image synthesis encompasses a variety of tasks, such as scene generation and novel view synthesis from images.
We present SSDNeRF, a unified approach that employs an expressive diffusion model to learn a generalizable prior of neural radiance fields (NeRF) from multi-view images of diverse objects.
arXiv Detail & Related papers (2023-04-13T17:59:01Z)
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
- DreamFusion: Text-to-3D using 2D Diffusion [52.52529213936283]
Recent breakthroughs in text-to-image synthesis have been driven by diffusion models trained on billions of image-text pairs.
Adapting this approach to 3D would require large-scale datasets of labeled 3D data, which do not yet exist; in this work, we circumvent this limitation by using a pretrained 2D text-to-image diffusion model to perform text-to-3D synthesis.
Our approach requires no 3D training data and no modifications to the image diffusion model, demonstrating the effectiveness of pretrained image diffusion models as priors.
arXiv Detail & Related papers (2022-09-29T17:50:40Z)
- GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis [43.4859484191223]
We propose a generative model for radiance fields, which have recently proven successful for novel view synthesis of a single scene.
By introducing a multi-scale patch-based discriminator, we demonstrate synthesis of high-resolution images while training our model from unposed 2D images alone.
arXiv Detail & Related papers (2020-07-05T20:37:39Z)