RGB-D Neural Radiance Fields: Local Sampling for Faster Training
- URL: http://arxiv.org/abs/2203.15587v1
- Date: Sat, 26 Mar 2022 11:31:35 GMT
- Title: RGB-D Neural Radiance Fields: Local Sampling for Faster Training
- Authors: Arnab Dey and Andrew I. Comport
- Abstract summary: Recent advances in implicit neural representation from images using neural radiance fields (NeRF) have shown promising results.
Some of the limitations of previous NeRF-based methods include long training times and inaccurate underlying geometry.
This paper proposes a depth-guided local sampling strategy and a smaller neural network architecture to achieve faster training without compromising quality.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning a 3D representation of a scene has been a challenging problem for decades in computer vision. Recent advances in implicit neural representation from images using neural radiance fields (NeRF) have shown promising results. Some of the limitations of previous NeRF-based methods include long training times and inaccurate underlying geometry. The proposed method takes advantage of RGB-D data to reduce training time by leveraging depth sensing to improve local sampling. This paper proposes a depth-guided local sampling strategy and a smaller neural network architecture to achieve faster training without compromising quality.
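Before the related work, a concrete picture of the idea may help. Below is a minimal sketch of depth-guided local sampling, assuming the strategy amounts to concentrating ray samples in a narrow band around each pixel's sensor depth and falling back to uniform sampling where depth is missing; the function name, the band half-width `band`, and the fallback behavior are illustrative assumptions, not details taken from the paper.
```python
import numpy as np

def local_depth_guided_samples(depth, n_samples=16, band=0.05, near=0.1, far=10.0):
    """Sample distances along one ray, concentrated around a sensor depth.

    Hypothetical sketch: stratified samples inside [depth - band, depth + band]
    when a valid depth reading exists, uniform samples over [near, far] otherwise.
    """
    if depth <= 0.0:
        # No valid depth for this pixel: fall back to plain uniform sampling.
        return np.sort(np.random.uniform(near, far, n_samples))
    lo, hi = max(near, depth - band), min(far, depth + band)
    edges = np.linspace(lo, hi, n_samples + 1)          # bin edges of the band
    jitter = np.random.uniform(size=n_samples)          # one sample per bin
    return edges[:-1] + jitter * (edges[1:] - edges[:-1])

t_vals = local_depth_guided_samples(depth=2.3)          # distances near the surface
```
Since every sample lands near the observed surface, far fewer samples per ray are needed for the volume-rendering integral, which is plausibly where the reported training-time savings come from.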
Related papers
- N-BVH: Neural ray queries with bounding volume hierarchies
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
- In Search of a Data Transformation That Accelerates Neural Field Training
We focus on how permuting pixel locations affects the convergence speed of SGD.
Counter-intuitively, we find that randomly permuting the pixel locations can considerably accelerate training.
Our analyses suggest that random pixel permutations remove the easy-to-fit patterns, which facilitate easy optimization in the early stage but hinder capturing the fine details of the signal.
arXiv Detail & Related papers (2023-11-28T06:17:49Z)
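As a rough illustration of the permutation idea above, here is a minimal sketch that shuffles the image's (coordinate, value) pairs before forming SGD minibatches for a neural field. Reading the transformation as a batching scheme is an assumption on my part, and `permuted_batches` and its parameters are purely illustrative.
```python
import numpy as np

def permuted_batches(image, batch_size=1024, seed=0):
    """Yield (coords, values) minibatches in a random pixel order."""
    h, w, c = image.shape
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), -1)
    coords = coords.reshape(-1, 2).astype(np.float32)        # (H*W, 2) locations
    values = image.reshape(-1, c)                            # (H*W, C) values
    order = np.random.default_rng(seed).permutation(h * w)   # random pixel order
    for start in range(0, h * w, batch_size):
        idx = order[start:start + batch_size]
        yield coords[idx], values[idx]

# One optimization step per batch in a real training loop.
for xy, rgb in permuted_batches(np.zeros((8, 8, 3), np.float32), batch_size=16):
    pass
```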
- Leveraging Neural Radiance Fields for Uncertainty-Aware Visual Localization
We propose to leverage Neural Radiance Fields (NeRF) to generate training samples for scene coordinate regression.
Despite NeRF's efficiency in rendering, many of the rendered images are polluted by artifacts or contain only minimal information gain.
arXiv Detail & Related papers (2023-10-10T20:11:13Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
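A hedged sketch of the decoupling CLONeR describes: one MLP for occupancy, to be supervised with LiDAR, and a separate MLP for color, to be supervised with camera images. The layer sizes, activations, and absence of positional encoding are simplifications, not the paper's architecture.
```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64, depth=3):
    """Small fully-connected network; sizes are illustrative, not CLONeR's."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

occupancy_mlp = mlp(3, 1)   # xyz -> occupancy logit, fit against LiDAR returns
color_mlp = mlp(6, 3)       # xyz + view direction -> RGB, fit against camera pixels

pts = torch.rand(1024, 3)                                        # sample points
dirs = nn.functional.normalize(torch.rand(1024, 3), dim=-1)      # view directions
occ = torch.sigmoid(occupancy_mlp(pts))                          # (1024, 1)
rgb = torch.sigmoid(color_mlp(torch.cat([pts, dirs], dim=-1)))   # (1024, 3)
```
Training the two heads against different sensors is what lets geometry come from metric LiDAR while appearance comes from the camera.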
- EfficientNeRF: Efficient Neural Radiance Fields
We present EfficientNeRF as an efficient NeRF-based method to represent 3D scenes and synthesize novel-view images.
Our method can reduce training time by over 88% and reach a rendering speed of over 200 FPS while still achieving competitive accuracy.
arXiv Detail & Related papers (2022-06-02T05:36:44Z)
- Mip-NeRF RGB-D: Depth Assisted Fast Neural Radiance Fields
Neural scene representations, such as neural radiance fields (NeRF), are based on training a multilayer perceptron (MLP) using a set of color images with known poses.
An increasing number of devices now produce RGB-D information, which has been shown to be very important for a wide range of tasks.
This paper investigates what improvements can be made to these promising implicit representations by incorporating depth information with the color images.
arXiv Detail & Related papers (2022-05-19T07:11:42Z)
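Both the entry above and the main paper revolve around injecting depth into NeRF training. A common generic form is a depth-supervised objective; the sketch below adds an L2 depth term to the photometric loss, where the weight `lam`, the L2 form, and the validity mask are assumptions rather than either paper's exact loss.
```python
import numpy as np

def rgbd_loss(pred_rgb, gt_rgb, weights, t_vals, gt_depth, lam=0.1):
    """Photometric loss plus a penalty tying rendered depth to sensor depth."""
    color_loss = np.mean((pred_rgb - gt_rgb) ** 2)
    # Depth expected under the volume-rendering weights along each ray.
    expected_depth = np.sum(weights * t_vals, axis=-1)
    valid = gt_depth > 0                      # ignore pixels with no depth reading
    if not np.any(valid):
        return color_loss
    depth_loss = np.mean((expected_depth[valid] - gt_depth[valid]) ** 2)
    return color_loss + lam * depth_loss

w = np.array([[0.1, 0.7, 0.2]])               # rendering weights for one ray
t = np.array([[1.0, 2.0, 3.0]])               # sample distances for that ray
loss = rgbd_loss(np.zeros((1, 3)), np.zeros((1, 3)), w, t, np.array([2.1]))
```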
- DDNeRF: Depth Distribution Neural Radiance Fields
Depth distribution neural radiance field (DDNeRF) is a new method that significantly increases sampling efficiency along rays during training.
We train a coarse model to predict the internal distribution of the transparency of an input volume in addition to the volume's total density.
This finer distribution then guides the sampling procedure of the fine model.
arXiv Detail & Related papers (2022-03-30T19:21:07Z)
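DDNeRF's coarse-to-fine guidance can be illustrated compactly. The sketch below assumes, purely for simplicity, that the coarse pass summarizes where the density mass lies as one Gaussian per ray (a mean and standard deviation) and that the fine pass samples from it; the paper's actual distribution model is richer than this.
```python
import numpy as np

def fine_samples(pred_mean, pred_std, n_samples=32, near=0.1, far=10.0, rng=None):
    """Draw fine-pass sample distances around the coarse pass's predicted mass."""
    if rng is None:
        rng = np.random.default_rng()
    t = rng.normal(pred_mean, pred_std, n_samples)  # concentrate near the surface
    return np.sort(np.clip(t, near, far))           # keep samples inside the frustum

t_fine = fine_samples(pred_mean=2.0, pred_std=0.15)
```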
- Neural Adaptive SCEne Tracing
We present NAScenT, the first neural rendering method based on directly training a hybrid explicit-implicit neural representation.
NAScenT is capable of reconstructing challenging scenes, including large, sparsely populated volumes such as UAV-captured outdoor environments.
arXiv Detail & Related papers (2022-02-28T10:27:23Z)
- Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering
Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence.
We propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field.
Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to hundreds of evaluations per ray for ray-marching or volumetric rendering.
arXiv Detail & Related papers (2021-06-04T17:54:49Z)
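The single-evaluation property above is the key contrast with ray-marching, and it is easy to show in miniature: encode a ray once, here with Plücker coordinates, and map it to a color with one forward pass. The two-layer `tiny_net` with random weights is purely a stand-in for a trained LFN.
```python
import numpy as np

def plucker(origin, direction):
    """6-D Plücker encoding of a ray: (unit direction, moment)."""
    d = direction / np.linalg.norm(direction)
    return np.concatenate([d, np.cross(origin, d)])

rng = np.random.default_rng(0)
W0 = rng.normal(size=(6, 32))                 # untrained stand-in weights
W1 = rng.normal(size=(32, 3))

def tiny_net(ray6):
    """One forward pass == one rendered ray (no per-ray integration)."""
    h = np.maximum(ray6 @ W0, 0.0)            # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W1)))    # RGB in (0, 1)

color = tiny_net(plucker(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])))
```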
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.