GANeRF: Leveraging Discriminators to Optimize Neural Radiance Fields
- URL: http://arxiv.org/abs/2306.06044v2
- Date: Mon, 18 Sep 2023 10:22:51 GMT
- Title: GANeRF: Leveraging Discriminators to Optimize Neural Radiance Fields
- Authors: Barbara Roessle, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò,
Peter Kontschieder, Matthias Nießner
- Abstract summary: We take advantage of generative adversarial networks (GANs) to produce realistic images and use them to enhance realism in 3D scene reconstruction with NeRFs.
We learn the patch distribution of a scene using an adversarial discriminator, which provides feedback to the radiance field reconstruction.
Rendering artifacts are repaired directly in the underlying 3D representation by imposing multi-view patch rendering constraints.
- Score: 12.92658687936068
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRF) have shown impressive novel view synthesis
results; nonetheless, even thorough recordings yield imperfections in
reconstructions, for instance due to poorly observed areas or minor lighting
changes. Our goal is to mitigate these imperfections from various sources with
a joint solution: we take advantage of the ability of generative adversarial
networks (GANs) to produce realistic images and use them to enhance realism in
3D scene reconstruction with NeRFs. To this end, we learn the patch
distribution of a scene using an adversarial discriminator, which provides
feedback to the radiance field reconstruction, thus improving realism in a
3D-consistent fashion. Thereby, rendering artifacts are repaired directly in
the underlying 3D representation by imposing multi-view patch rendering
constraints. In addition, we condition a generator with multi-resolution NeRF
renderings which is adversarially trained to further improve rendering quality.
We demonstrate that our approach significantly improves rendering quality,
e.g., nearly halving LPIPS scores compared to Nerfacto while at the same time
improving PSNR by 1.4 dB on the advanced indoor scenes of Tanks and Temples.
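The adversarial feedback described above hinges on a discriminator scoring rendered patches against real image patches. A minimal sketch of the non-saturating GAN losses such a setup might use (NumPy only; the function names and toy logit values are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def softplus(x):
    # numerically stable log(1 + exp(x))
    return np.logaddexp(0.0, x)

def gan_patch_losses(real_logits, fake_logits):
    """Non-saturating GAN losses over batches of patch logits.

    real_logits: discriminator scores for patches from training images
    fake_logits: discriminator scores for NeRF-rendered patches
    """
    # Discriminator: push real logits up, rendered logits down.
    d_loss = softplus(-real_logits).mean() + softplus(fake_logits).mean()
    # Generator side (here: the radiance field): make renderings look real.
    g_loss = softplus(-fake_logits).mean()
    return d_loss, g_loss

real = np.array([4.0, 5.0, 3.5])     # confidently "real" patches
fake = np.array([-4.0, -3.0, -5.0])  # confidently "fake" renderings
d_loss, g_loss = gan_patch_losses(real, fake)
```

With the discriminator already separating the two sets, its loss is near zero while the generator loss stays large, which is the gradient signal fed back to the radiance field.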
Related papers
- Drantal-NeRF: Diffusion-Based Restoration for Anti-aliasing Neural Radiance Field [10.225323718645022]
Aliasing artifacts in renderings produced by Neural Radiance Fields (NeRF) are a long-standing but complex issue.
We present a Diffusion-based restoration method for anti-aliasing Neural Radiance Field (Drantal-NeRF)
arXiv Detail & Related papers (2024-07-10T08:32:13Z)
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
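Casting reflection rays from points along a camera ray starts from the standard mirror-reflection formula, r = d - 2(d·n)n; a small sketch (NumPy; the function name is illustrative, not NeRF-Casting's API):

```python
import numpy as np

def reflect(d, n):
    """Reflect a direction d about a unit surface normal n: r = d - 2(d.n)n."""
    return d - 2.0 * np.dot(d, n) * n

# A ray travelling straight down hits a horizontal surface (normal +z).
d = np.array([0.0, 0.0, -1.0])
n = np.array([0.0, 0.0, 1.0])
r = reflect(d, n)  # reflected direction, to be traced through the scene representation
```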
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- RaFE: Generative Radiance Fields Restoration [38.602849644666165]
NeRF (Neural Radiance Fields) has demonstrated tremendous potential in novel view synthesis and 3D reconstruction.
Previous methods for NeRF restoration are tailored to specific degradation types and thus lack generality.
We propose a generic radiance fields restoration pipeline, named RaFE, which applies to various types of degradations.
arXiv Detail & Related papers (2024-04-04T17:59:50Z)
- Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation [51.346733271166926]
Mesh2NeRF is an approach to derive ground-truth radiance fields from textured meshes for 3D generation tasks.
We validate the effectiveness of Mesh2NeRF across various tasks.
arXiv Detail & Related papers (2024-03-28T11:22:53Z)
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
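The volume rendering step that the inferred point properties feed into is the standard NeRF quadrature; a minimal sketch (NumPy; the densities and colors are toy values, not the paper's outputs):

```python
import numpy as np

def render_ray(sigmas, deltas, colors):
    """Composite samples along one ray via the NeRF quadrature.

    sigmas: (N,) densities, deltas: (N,) inter-sample distances,
    colors: (N, 3) per-sample RGB.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]   # transmittance to each sample
    weights = trans * alphas                                         # compositing weights
    return (weights[:, None] * colors).sum(axis=0), weights

sigmas = np.array([0.0, 50.0, 0.0])  # one nearly opaque sample in the middle
deltas = np.array([0.1, 0.1, 0.1])
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
rgb, weights = render_ray(sigmas, deltas, colors)
```

The opaque middle sample captures almost all of the weight, so the ray color is dominated by its green contribution; the weights always sum to at most one.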
arXiv Detail & Related papers (2024-03-25T15:56:17Z)
- RoGUENeRF: A Robust Geometry-Consistent Universal Enhancer for NeRF [1.828790674925926]
2D enhancers can be pre-trained to recover some detail but are agnostic to scene geometry.
Existing 3D enhancers are able to transfer detail from nearby training images in a generalizable manner.
We propose a neural rendering enhancer, RoGUENeRF, which exploits the best of both paradigms.
arXiv Detail & Related papers (2024-03-18T16:11:42Z)
- Reconstructive Latent-Space Neural Radiance Fields for Efficient 3D Scene Representations [34.836151514152746]
In this work, we investigate combining an autoencoder with a NeRF, in which latent features are rendered and then convolutionally decoded.
The resulting latent-space NeRF can produce novel views with higher quality than standard colour-space NeRFs.
We can control the tradeoff between efficiency and image quality by shrinking the AE architecture, achieving over 13 times faster rendering with only a small drop in performance.
arXiv Detail & Related papers (2023-10-27T03:52:08Z)
- PDRF: Progressively Deblurring Radiance Field for Fast and Robust Scene Reconstruction from Blurry Images [75.87721926918874]
We present Progressively Deblurring Radiance Field (PDRF)
PDRF is a novel approach to efficiently reconstruct high quality radiance fields from blurry images.
We show that PDRF is 15x faster than previous state-of-the-art scene reconstruction methods.
arXiv Detail & Related papers (2022-08-17T03:42:29Z)
- NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction [50.54946139497575]
We propose NeRFusion, a method that combines the advantages of NeRF and TSDF-based fusion techniques to achieve efficient large-scale reconstruction and photo-realistic rendering.
We demonstrate that NeRFusion achieves state-of-the-art quality on both large-scale indoor and small-scale object scenes, with substantially faster reconstruction than NeRF and other recent methods.
arXiv Detail & Related papers (2022-03-21T18:56:35Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.