Reconstructive Latent-Space Neural Radiance Fields for Efficient 3D
Scene Representations
- URL: http://arxiv.org/abs/2310.17880v1
- Date: Fri, 27 Oct 2023 03:52:08 GMT
- Authors: Tristan Aumentado-Armstrong, Ashkan Mirzaei, Marcus A. Brubaker,
Jonathan Kelly, Alex Levinshtein, Konstantinos G. Derpanis, Igor
Gilitschenski
- Abstract summary: In this work, we investigate combining an autoencoder with a NeRF, in which latent features are rendered and then convolutionally decoded.
The resulting latent-space NeRF can produce novel views with higher quality than standard colour-space NeRFs.
We can control the tradeoff between efficiency and image quality by shrinking the AE architecture, achieving over 13 times faster rendering with only a small drop in performance.
- Score: 34.836151514152746
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Neural Radiance Fields (NeRFs) have proven to be powerful 3D representations,
capable of high quality novel view synthesis of complex scenes. While NeRFs
have been applied to graphics, vision, and robotics, problems with slow
rendering speed and characteristic visual artifacts prevent adoption in many
use cases. In this work, we investigate combining an autoencoder (AE) with a
NeRF, in which latent features (instead of colours) are rendered and then
convolutionally decoded. The resulting latent-space NeRF can produce novel
views with higher quality than standard colour-space NeRFs, as the AE can
correct certain visual artifacts, while rendering over three times faster. Our
work is orthogonal to other techniques for improving NeRF efficiency. Further,
we can control the tradeoff between efficiency and image quality by shrinking
the AE architecture, achieving over 13 times faster rendering with only a small
drop in performance. We hope that our approach can form the basis of an
efficient, yet high-fidelity, 3D scene representation for downstream tasks,
especially when retaining differentiability is useful, as in many robotics
scenarios requiring continual learning.
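The core idea of the abstract can be illustrated with a small sketch: alpha-composite latent features (rather than colours) along each ray, then decode the low-resolution feature map to a higher-resolution RGB image. All sizes, the random stand-ins for the NeRF MLP outputs, and the linear "decoder" below are illustrative assumptions, not the paper's actual architecture (which uses a learned convolutional decoder).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; illustration choices, not the paper's settings.
N_RAYS = 16 * 16     # latent feature map is rendered at 16x16
N_SAMPLES = 32       # samples per ray
D_LATENT = 8         # latent feature channels (instead of 3 RGB channels)

# Stand-ins for the NeRF MLP outputs at each sample along each ray:
# a density and a D-dimensional latent feature (rather than a colour).
sigma = rng.uniform(0.0, 2.0, size=(N_RAYS, N_SAMPLES))
feats = rng.normal(size=(N_RAYS, N_SAMPLES, D_LATENT))
deltas = np.full((N_RAYS, N_SAMPLES), 0.05)  # distance between samples

def composite(sigma, feats, deltas):
    """Standard NeRF alpha compositing, applied here to latent features."""
    alpha = 1.0 - np.exp(-sigma * deltas)                       # per-sample opacity
    trans = np.cumprod(1.0 - alpha + 1e-10, axis=1)             # transmittance
    trans = np.concatenate([np.ones_like(trans[:, :1]), trans[:, :-1]], axis=1)
    weights = alpha * trans                                     # compositing weights
    return (weights[..., None] * feats).sum(axis=1), weights

latent_map, weights = composite(sigma, feats, deltas)
latent_map = latent_map.reshape(16, 16, D_LATENT)

# Stand-in for the convolutional decoder: a random linear map to RGB,
# upsampled 4x by nearest neighbour. The real decoder is a learned conv net.
W_dec = rng.normal(size=(D_LATENT, 3)) * 0.1
rgb = latent_map @ W_dec
rgb = rgb.repeat(4, axis=0).repeat(4, axis=1)   # 16x16 -> 64x64

print(rgb.shape)  # (64, 64, 3)
```

The efficiency gain in this scheme comes from rendering far fewer rays (here 16x16 instead of 64x64) and letting the decoder upsample, which is where the reported speedups plausibly originate.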
Related papers
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
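The reflection-ray idea above can be sketched as follows: reflect the view direction about the estimated surface normal at a point, then march the reflected ray through a stand-in field to accumulate a feature. The point, normal, and toy field are hypothetical; the actual method traces through a trained NeRF representation.

```python
import numpy as np

def reflect(d, n):
    """Reflect view direction d about surface normal n (both unit vectors)."""
    return d - 2.0 * np.dot(d, n) * n

# Hypothetical scene point on a camera ray, with an estimated normal there.
p = np.array([0.0, 0.0, 1.0])                  # 3D point along the camera ray
d = np.array([0.0, 0.0, 1.0])                  # incoming view direction (unit)
n = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)   # surface normal (unit)

r = reflect(d, n)  # reflected direction to trace

# Trace the reflected ray through a stand-in field: here, a toy function
# of position; the real method queries the NeRF for feature vectors.
def toy_field(x):
    return np.array([np.sin(x).sum(), np.cos(x).sum()])  # 2-D feature

ts = np.linspace(0.1, 2.0, 8)                  # sample depths along the ray
samples = p[None, :] + ts[:, None] * r[None, :]
feature = np.stack([toy_field(x) for x in samples]).mean(axis=0)

print(np.allclose(np.linalg.norm(r), 1.0))  # True: reflection preserves unit length
```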
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- GANeRF: Leveraging Discriminators to Optimize Neural Radiance Fields [12.92658687936068]
We take advantage of generative adversarial networks (GANs) to produce realistic images and use them to enhance realism in 3D scene reconstruction with NeRFs.
We learn the patch distribution of a scene using an adversarial discriminator, which provides feedback to the radiance field reconstruction.
Rendering artifacts are repaired directly in the underlying 3D representation by imposing multi-view path rendering constraints.
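A minimal sketch of patch-level adversarial feedback, under the assumption of a toy linear discriminator (the actual method uses a learned convolutional discriminator and backpropagates into the radiance field):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_patches(image, patch, count, rng):
    """Crop random square patches from an HxWx3 image."""
    h, w, _ = image.shape
    ys = rng.integers(0, h - patch, size=count)
    xs = rng.integers(0, w - patch, size=count)
    return np.stack([image[y:y+patch, x:x+patch] for y, x in zip(ys, xs)])

def disc_logit(patch, w):
    """Toy linear 'discriminator'; a real one would be a conv net."""
    return float((patch * w).sum())

def bce_with_logit(logit, label):
    # numerically stable binary cross-entropy on a raw logit
    return np.logaddexp(0.0, -logit) if label == 1 else np.logaddexp(0.0, logit)

# Stand-ins for a NeRF rendering and the corresponding real photo.
rendered = rng.uniform(size=(32, 32, 3))
real = rng.uniform(size=(32, 32, 3))
w = rng.normal(size=(8, 8, 3)) * 0.01  # toy discriminator weights

fake_patches = sample_patches(rendered, 8, 4, rng)
real_patches = sample_patches(real, 8, 4, rng)

# Discriminator loss: real patches -> label 1, rendered patches -> label 0.
d_loss = np.mean([bce_with_logit(disc_logit(p, w), 1) for p in real_patches]
                 + [bce_with_logit(disc_logit(p, w), 0) for p in fake_patches])

# Generator-side feedback: the radiance field is trained so its rendered
# patches are scored as real, i.e. minimise BCE with label 1 on them.
g_loss = np.mean([bce_with_logit(disc_logit(p, w), 1) for p in fake_patches])

print(d_loss > 0 and g_loss > 0)  # True
```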
arXiv Detail & Related papers (2023-06-09T17:12:35Z)
- Enhance-NeRF: Multiple Performance Evaluation for Neural Radiance Fields [2.5432277893532116]
Neural Radiance Fields (NeRF) can generate realistic images from any viewpoint.
NeRF-based models are susceptible to interference issues caused by colored "fog" noise.
Our approach, coined Enhance-NeRF, adopts a joint color scheme to balance the rendering of low- and high-reflectivity objects.
arXiv Detail & Related papers (2023-06-08T15:49:30Z)
- NeRFMeshing: Distilling Neural Radiance Fields into Geometrically-Accurate 3D Meshes [56.31855837632735]
We propose a compact and flexible architecture that enables easy 3D surface reconstruction from any NeRF-driven approach.
Our final 3D mesh is physically accurate and can be rendered in real time on an array of devices.
arXiv Detail & Related papers (2023-03-16T16:06:03Z)
- AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training [100.33713282611448]
We conduct the first pilot study on training NeRF with high-resolution data.
We propose the corresponding solutions, including marrying the multilayer perceptron with convolutional layers.
Our approach is nearly free without introducing obvious training/testing costs.
arXiv Detail & Related papers (2022-11-17T17:22:28Z)
- NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed Neural Radiance Fields [99.57774680640581]
We present an efficient framework capable of fast reconstruction, compact modeling, and streamable rendering.
We propose to decompose the 4D space according to temporal characteristics. Points in the 4D space are associated with probabilities belonging to three categories: static, deforming, and new areas.
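The three-way decomposition above can be illustrated with a softmax over per-point category logits; the sub-field outputs and sizes below are invented for illustration, not the paper's implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(2)

# Hypothetical: a small field head emits three logits per 4D sample point
# (x, y, z, t), one for each category: static, deforming, and new areas.
N_POINTS = 5
logits = rng.normal(size=(N_POINTS, 3))
probs = softmax(logits)  # (static, deforming, new) probabilities per point

# Each category could be handled by its own sub-field; the final output is
# then the probability-weighted blend of the three sub-field outputs.
static_out = rng.normal(size=(N_POINTS,))
deform_out = rng.normal(size=(N_POINTS,))
new_out = rng.normal(size=(N_POINTS,))
blended = (probs * np.stack([static_out, deform_out, new_out], axis=1)).sum(axis=1)

print(np.allclose(probs.sum(axis=1), 1.0))  # True: categories partition each point
```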
arXiv Detail & Related papers (2022-10-28T07:11:05Z)
- NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction [50.54946139497575]
We propose NeRFusion, a method that combines the advantages of NeRF and TSDF-based fusion techniques to achieve efficient large-scale reconstruction and photo-realistic rendering.
We demonstrate that NeRFusion achieves state-of-the-art quality on both large-scale indoor and small-scale object scenes, with substantially faster reconstruction than NeRF and other recent methods.
arXiv Detail & Related papers (2022-03-21T18:56:35Z)
- HVTR: Hybrid Volumetric-Textural Rendering for Human Avatars [65.82222842213577]
We propose a novel neural rendering pipeline, which synthesizes virtual human avatars from arbitrary poses efficiently and at high quality.
First, we learn to encode articulated human motions on a dense UV manifold of the human body surface.
We then leverage the encoded information on the UV manifold to construct a 3D volumetric representation.
arXiv Detail & Related papers (2021-12-19T17:34:15Z)
- PlenOctrees for Real-time Rendering of Neural Radiance Fields [35.58442869498845]
We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation.
Our method can render 800x800 images at more than 150 FPS, which is over 3000 times faster than conventional NeRFs.
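An octree traversal like the one underlying PlenOctrees can be sketched as a per-level child-index computation; this is a generic octree lookup in the unit cube, not the paper's actual data structure or storage layout.

```python
import numpy as np

def octree_path(point, depth):
    """Child indices (0..7) from the root down to the given depth for a
    point in the unit cube; bit k of each index selects the half along axis k."""
    lo, hi = np.zeros(3), np.ones(3)
    path = []
    for _ in range(depth):
        mid = 0.5 * (lo + hi)
        bits = (point >= mid).astype(int)  # which half along x, y, z
        path.append(int(bits[0] | (bits[1] << 1) | (bits[2] << 2)))
        # shrink the bounding box to the chosen child octant
        lo = np.where(bits == 1, mid, lo)
        hi = np.where(bits == 1, hi, mid)
    return path

p = np.array([0.7, 0.2, 0.9])
print(octree_path(p, 3))  # [5, 4, 7]
```

Real-time rendering follows because each ray sample becomes a handful of cheap index computations into precomputed leaf values instead of a full MLP evaluation.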
arXiv Detail & Related papers (2021-03-25T17:59:06Z)
- FastNeRF: High-Fidelity Neural Rendering at 200FPS [17.722927021159393]
We propose FastNeRF, a system capable of rendering high fidelity images at 200Hz on a high-end consumer GPU.
The proposed method is 3000 times faster than the original NeRF algorithm and at least an order of magnitude faster than existing work on accelerating NeRF.
arXiv Detail & Related papers (2021-03-18T17:09:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.