DeRF: Decomposed Radiance Fields
- URL: http://arxiv.org/abs/2011.12490v1
- Date: Wed, 25 Nov 2020 02:47:16 GMT
- Title: DeRF: Decomposed Radiance Fields
- Authors: Daniel Rebain, Wei Jiang, Soroosh Yazdani, Ke Li, Kwang Moo Yi, Andrea Tagliasacchi
- Abstract summary: In this paper, we propose a technique based on spatial decomposition that mitigates the high computational cost of NeRF rendering.
We show that a Voronoi spatial decomposition is preferable for this purpose, as it is provably compatible with the Painter's Algorithm.
Our experiments show that for real-world scenes, our method provides up to 3x more efficient inference than NeRF.
- Score: 30.784481193893345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the advent of Neural Radiance Fields (NeRF), neural networks can now
render novel views of a 3D scene with quality that fools the human eye. Yet,
generating these images is very computationally intensive, limiting their
applicability in practical scenarios. In this paper, we propose a technique
based on spatial decomposition capable of mitigating this issue. Our key
observation is that there are diminishing returns in employing larger (deeper
and/or wider) networks. Hence, we propose to spatially decompose a scene and
dedicate smaller networks for each decomposed part. When working together,
these networks can render the whole scene. This allows us to achieve
near-constant inference time regardless of the number of decomposed parts. Moreover, we show
that a Voronoi spatial decomposition is preferable for this purpose, as it is
provably compatible with the Painter's Algorithm for efficient and GPU-friendly
rendering. Our experiments show that for real-world scenes, our method provides
up to 3x more efficient inference than NeRF (with the same rendering quality),
or an improvement of up to 1.0 dB in PSNR (for the same inference cost).
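To make the decomposition concrete, here is a minimal sketch in NumPy of routing samples to small per-cell networks by nearest Voronoi head, plus the back-to-front cell ordering that makes the decomposition compatible with the Painter's Algorithm. The head positions, the toy two-layer networks, and the routing loop are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class TinyField:
    """Stand-in for one of DeRF's small per-cell networks (assumed shape)."""
    def __init__(self, seed):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(size=(3, 32)) * 0.5
        self.w2 = rng.normal(size=(32, 4)) * 0.5  # outputs RGB + density

    def __call__(self, x):
        return np.tanh(x @ self.w1) @ self.w2

heads = np.array([[0.0, 0.0, 0.0],   # Voronoi sites ("heads")
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
fields = [TinyField(i) for i in range(len(heads))]

def query(points):
    """Route each 3D sample to the field owning its Voronoi cell."""
    d2 = ((points[:, None, :] - heads[None, :, :]) ** 2).sum(-1)  # (N, K)
    cell = d2.argmin(axis=1)
    out = np.empty((len(points), 4))
    for k in range(len(heads)):          # one batched call per cell
        mask = cell == k
        if mask.any():
            out[mask] = fields[k](points[mask])
    return out

# Painter's Algorithm: Voronoi cells admit a correct back-to-front
# compositing order, obtained by sorting heads by distance to the camera.
camera = np.array([2.0, 2.0, 2.0])
order = np.argsort(-((heads - camera) ** 2).sum(-1))  # farthest first

samples = np.random.default_rng(0).uniform(-1, 1, size=(128, 3))
rgb_sigma = query(samples)   # each sample is touched by exactly one small net
```

Because each sample is evaluated by exactly one small network, the total work per ray stays roughly constant as more cells are added, which is the source of the near-constant inference time noted in the abstract.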
Related papers
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z) - TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering [6.142272540492937]
We present TRIPS (Trilinear Point Splatting), an approach that combines ideas from both Gaussian Splatting and ADOP.
Our evaluation demonstrates that TRIPS surpasses existing state-of-the-art methods in terms of rendering quality.
This performance extends to challenging scenarios, such as scenes featuring intricate geometry, expansive landscapes, and auto-exposed footage.
arXiv Detail & Related papers (2024-01-11T16:06:36Z) - Feature 3DGS: Supercharging 3D Gaussian Splatting to Enable Distilled Feature Fields [54.482261428543985]
Methods that use Neural Radiance Fields are versatile for traditional tasks such as novel view synthesis.
3D Gaussian splatting has shown state-of-the-art performance on real-time radiance field rendering.
We propose architectural and training changes to efficiently avert this problem.
arXiv Detail & Related papers (2023-12-06T00:46:30Z) - Reconstructive Latent-Space Neural Radiance Fields for Efficient 3D
Scene Representations [34.836151514152746]
In this work, we investigate combining an autoencoder with a NeRF, in which latent features are rendered and then convolutionally decoded.
The resulting latent-space NeRF can produce novel views with higher quality than standard colour-space NeRFs.
We can control the tradeoff between efficiency and image quality by shrinking the AE architecture, achieving over 13 times faster rendering with only a small drop in performance.
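As a rough illustration of this pipeline: the NeRF branch renders a low-resolution feature image, which a convolutional decoder maps back to full-resolution colour. The latent resolution, channel count, and toy decoder below are assumptions; the paper trains a real convolutional decoder jointly with the NeRF.

```python
import numpy as np

rng = np.random.default_rng(0)

# The NeRF branch renders a *low-resolution feature image* instead of RGB;
# a random array stands in for those volume-rendered latent features here.
H, W, C = 32, 32, 16
latent = rng.normal(size=(H, W, C))

def decode(feat, scale=4):
    """Toy decoder: nearest-neighbour upsample, then a 1x1 projection to RGB.
    A real decoder would be a learned convolutional network."""
    up = feat.repeat(scale, axis=0).repeat(scale, axis=1)   # (sH, sW, C)
    w = rng.normal(size=(C, 3)) / np.sqrt(C)                # 1x1 conv weights
    return np.tanh(up @ w)                                  # (sH, sW, 3)

image = decode(latent)   # full-resolution RGB from a render with 16x fewer rays
print(image.shape)       # (128, 128, 3)
```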
arXiv Detail & Related papers (2023-10-27T03:52:08Z) - 3D Reconstruction with Generalizable Neural Fields using Scene Priors [71.37871576124789]
We introduce a method for training generalizable Neural Fields incorporating scene Priors (NFPs).
The NFP network maps any single-view RGB-D image into signed distance and radiance values.
A complete scene can be reconstructed by merging individual frames in the volumetric space without a fusion module.
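A loose sketch of that fusion-free merging idea follows; the grid size, visibility masks, and plain per-voxel averaging are assumptions for illustration, and random arrays stand in for the signed-distance values the NFP network would predict per frame.

```python
import numpy as np

rng = np.random.default_rng(2)
grid = np.zeros((64, 64, 64))          # accumulated signed distances
count = np.zeros_like(grid)            # observations per voxel

for frame in range(3):
    sdf = rng.normal(size=grid.shape)      # stand-in per-frame prediction
    seen = rng.random(grid.shape) > 0.6    # voxels this frame observes
    grid[seen] += sdf[seen]
    count[seen] += 1

# Merging is just per-voxel averaging in the shared volume --
# no learned fusion module is involved.
fused = np.where(count > 0, grid / np.maximum(count, 1), 0.0)
```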
arXiv Detail & Related papers (2023-09-26T18:01:02Z) - X-NeRF: Explicit Neural Radiance Field for Multi-Scene 360$^{\circ} $
Insufficient RGB-D Views [49.55319833743988]
This paper focuses on a rarely discussed but important setting: can we train one model that can represent multiple scenes?
By insufficient views, we mean a few extremely sparse and almost non-overlapping views.
We propose X-NeRF, a fully explicit approach that learns a general scene completion process instead of a coordinate-based mapping.
arXiv Detail & Related papers (2022-10-11T04:29:26Z) - NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction [50.54946139497575]
We propose NeRFusion, a method that combines the advantages of NeRF and TSDF-based fusion techniques to achieve efficient large-scale reconstruction and photo-realistic rendering.
We demonstrate that NeRFusion achieves state-of-the-art quality on both large-scale indoor and small-scale object scenes, with substantially faster reconstruction than NeRF and other recent methods.
arXiv Detail & Related papers (2022-03-21T18:56:35Z) - Efficient Neural Radiance Fields with Learned Depth-Guided Sampling [43.79307270743013]
We present a hybrid scene representation which combines the best of implicit radiance fields and explicit depth maps for efficient rendering.
Experiments show that the proposed approach exhibits state-of-the-art performance on the DTU, Real Forward-facing and NeRF Synthetic datasets.
We also demonstrate the capability of our method to synthesize free-viewpoint videos of dynamic human performers in real-time.
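A minimal sketch of the depth-guided idea: query the radiance field only near the explicit depth estimate rather than throughout the volume. The margin and sample count are illustrative assumptions, not the paper's learned sampling.

```python
import numpy as np

def depth_guided_samples(depth, margin=0.1, n=4):
    """Query the radiance field only in a thin shell around the explicit
    depth estimate, instead of marching through the whole volume."""
    return np.linspace(depth - margin, depth + margin, n)

# e.g. a depth map says this pixel's surface is ~3.2 units away:
print(depth_guided_samples(3.2))   # 4 samples clustered near the surface
```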
arXiv Detail & Related papers (2021-12-02T18:59:32Z) - TermiNeRF: Ray Termination Prediction for Efficient Neural Rendering [18.254077751772005]
Volume rendering using neural fields has shown great promise in capturing and synthesizing novel views of 3D scenes.
This type of approach requires querying the volume network at multiple points along each viewing ray in order to render an image, resulting in very slow rendering times.
We present a method that overcomes this limitation by learning a direct mapping from camera rays to locations along the ray that are most likely to influence the pixel's final appearance.
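One way such a ray-to-samples mapping can be realized is inverse-CDF sampling over predicted depth bins. In this sketch a random distribution stands in for the learned network's output; the bin count, near/far bounds, and sample count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_termination(ray_o, ray_d, n_bins=32):
    """Stand-in for the learned network: a probability over depth bins
    saying where along the ray the pixel's appearance is decided."""
    logits = rng.normal(size=n_bins)
    w = np.exp(logits - logits.max())
    return w / w.sum()

def sample_depths(weights, near=2.0, far=6.0, n_samples=8):
    """Inverse-CDF sampling: concentrate the few network queries where
    the predicted termination probability is high, not uniformly."""
    edges = np.linspace(near, far, len(weights) + 1)
    cdf = np.concatenate([[0.0], np.cumsum(weights)])
    u = (np.arange(n_samples) + 0.5) / n_samples
    return np.interp(u, cdf, edges)

w = predict_termination(np.zeros(3), np.array([0.0, 0.0, 1.0]))
depths = sample_depths(w)   # 8 well-placed queries instead of hundreds
```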
arXiv Detail & Related papers (2021-11-05T17:50:44Z) - MVSNeRF: Fast Generalizable Radiance Field Reconstruction from
Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.