GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis
- URL: http://arxiv.org/abs/2007.02442v4
- Date: Tue, 30 Mar 2021 11:33:35 GMT
- Title: GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis
- Authors: Katja Schwarz, Yiyi Liao, Michael Niemeyer, Andreas Geiger
- Abstract summary: We propose a generative model for radiance fields which have recently proven successful for novel view synthesis of a single scene.
By introducing a multi-scale patch-based discriminator, we demonstrate synthesis of high-resolution images while training our model from unposed 2D images alone.
- Score: 43.4859484191223
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While 2D generative adversarial networks have enabled high-resolution image synthesis, they largely lack an understanding of the 3D world and the image formation process. Thus, they do not provide precise control over camera viewpoint or object pose. To address this problem, several recent approaches leverage intermediate voxel-based representations in combination with differentiable rendering. However, existing methods either produce low image resolution or fall short in disentangling camera and scene properties, e.g., the object identity may vary with the viewpoint. In this paper, we propose a generative model for radiance fields, which have recently proven successful for novel view synthesis of a single scene. In contrast to voxel-based representations, radiance fields are not confined to a coarse discretization of the 3D space, yet allow for disentangling camera and scene properties while degrading gracefully in the presence of reconstruction ambiguity. By introducing a multi-scale patch-based discriminator, we demonstrate synthesis of high-resolution images while training our model from unposed 2D images alone. We systematically analyze our approach on several challenging synthetic and real-world datasets. Our experiments reveal that radiance fields are a powerful representation for generative image synthesis, leading to 3D-consistent models that render with high fidelity.
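The abstract names two components: a radiance field conditioned on latent codes (so the generator produces a different object per sample) and a multi-scale patch-based discriminator trained on sparse pixel patches rather than full renderings. The sketch below illustrates both ideas in PyTorch under stated assumptions; class and function names, layer sizes, and sampling details are illustrative and not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class ConditionalRadianceField(nn.Module):
    """Sketch of a NeRF-style MLP conditioned on shape and appearance codes.

    Maps a positionally encoded 3D point x_enc and view direction d_enc,
    plus a shape code z_shape and an appearance code z_app, to an RGB color
    and a volume density. Layer sizes are illustrative assumptions.
    """

    def __init__(self, pos_dim=60, dir_dim=24, z_dim=128, hidden=256):
        super().__init__()
        # Shape code modulates the geometry branch (density).
        self.geometry = nn.Sequential(
            nn.Linear(pos_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)
        # Appearance code modulates the view-dependent color branch.
        self.color = nn.Sequential(
            nn.Linear(hidden + dir_dim + z_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, x_enc, d_enc, z_shape, z_app):
        h = self.geometry(torch.cat([x_enc, z_shape], dim=-1))
        sigma = torch.relu(self.sigma_head(h))  # non-negative density
        rgb = self.color(torch.cat([h, d_enc, z_app], dim=-1))
        return rgb, sigma


def sample_patch_rays(K, pose, img_size, patch_size=32,
                      min_scale=0.25, max_scale=1.0):
    """Hedged sketch of multi-scale patch sampling for the discriminator.

    Instead of rendering full images, only patch_size**2 rays are cast per
    sample; the patch covers a randomly scaled and positioned square of the
    image plane, so the discriminator sees both coarse global structure
    (large scale) and fine detail (small scale).
    """
    scale = torch.empty(1).uniform_(min_scale, max_scale).item()
    extent = scale * (img_size - 1)
    # Random top-left corner such that the scaled patch stays inside the image.
    u0 = torch.empty(1).uniform_(0, img_size - 1 - extent).item()
    v0 = torch.empty(1).uniform_(0, img_size - 1 - extent).item()
    us = torch.linspace(u0, u0 + extent, patch_size)
    vs = torch.linspace(v0, v0 + extent, patch_size)
    v, u = torch.meshgrid(vs, us, indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).reshape(-1, 3)
    # Back-project pixels to camera-space rays, then rotate to world space.
    dirs_cam = pix @ torch.inverse(K).T
    dirs_world = dirs_cam @ pose[:3, :3].T
    origins = pose[:3, 3].expand_as(dirs_world)
    return origins, dirs_world
```

In this sketch, real patches would be extracted from training images with the same random position and scale, so generated and real patches are directly comparable to the adversarial discriminator; this is a simplification of the sampling scheme described in the paper, not a faithful reimplementation.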
Related papers
- Denoising Diffusion via Image-Based Rendering [54.20828696348574]
We introduce the first diffusion model able to perform fast, detailed reconstruction and generation of real-world 3D scenes.
First, we introduce a new neural scene representation, IB-planes, that can efficiently and accurately represent large 3D scenes.
Second, we propose a denoising-diffusion framework to learn a prior over this novel 3D scene representation, using only 2D images.
arXiv Detail & Related papers (2024-02-05T19:00:45Z)
- Generative Novel View Synthesis with 3D-Aware Diffusion Models [96.78397108732233]
We present a diffusion-based model for 3D-aware generative novel view synthesis from as few as a single input image.
Our method makes use of existing 2D diffusion backbones but, crucially, incorporates geometry priors in the form of a 3D feature volume.
In addition to generating novel views, our method has the ability to autoregressively synthesize 3D-consistent sequences.
arXiv Detail & Related papers (2023-04-05T17:15:47Z)
- DiffRF: Rendering-Guided 3D Radiance Field Diffusion [18.20324411024166]
We introduce DiffRF, a novel approach for 3D radiance field synthesis based on denoising diffusion probabilistic models.
In contrast to 2D-diffusion models, our model learns multi-view consistent priors, enabling free-view synthesis and accurate shape generation.
arXiv Detail & Related papers (2022-12-02T14:37:20Z)
- Learning Detailed Radiance Manifolds for High-Fidelity and 3D-Consistent Portrait Synthesis from Monocular Image [17.742602375370407]
A key challenge for novel view synthesis of monocular portrait images is 3D consistency under continuous pose variations.
We present a 3D-consistent novel view synthesis approach for monocular portrait images based on a proposed 3D-aware GAN.
arXiv Detail & Related papers (2022-11-25T05:20:04Z)
- Generative Deformable Radiance Fields for Disentangled Image Synthesis of Topology-Varying Objects [52.46838926521572]
3D-aware generative models have demonstrated strong performance in generating 3D neural radiance fields (NeRF) from a collection of monocular 2D images.
We propose a generative model for synthesizing radiance fields of topology-varying objects with disentangled shape and appearance variations.
arXiv Detail & Related papers (2022-09-09T08:44:06Z)
- 3D-aware Image Synthesis via Learning Structural and Textural Representations [39.681030539374994]
We propose VolumeGAN for high-fidelity 3D-aware image synthesis, which explicitly learns a structural representation and a textural representation.
Our approach achieves higher image quality and better 3D control than previous methods.
arXiv Detail & Related papers (2021-12-20T18:59:40Z)
- StyleNeRF: A Style-based 3D-Aware Generator for High-resolution Image Synthesis [92.25145204543904]
StyleNeRF is a 3D-aware generative model for high-resolution image synthesis with high multi-view consistency.
It integrates the neural radiance field (NeRF) into a style-based generator.
It can synthesize high-resolution images at interactive rates while preserving 3D consistency at high quality.
arXiv Detail & Related papers (2021-10-18T02:37:01Z)
- Towards Realistic 3D Embedding via View Alignment [53.89445873577063]
This paper presents View Alignment GAN (VA-GAN), which composes new images by embedding 3D models into 2D background images realistically and automatically.
VA-GAN consists of a texture generator and a differential discriminator that are inter-connected and end-to-end trainable.
arXiv Detail & Related papers (2020-07-14T14:45:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.