Efficient Ray Sampling for Radiance Fields Reconstruction
- URL: http://arxiv.org/abs/2308.15547v1
- Date: Tue, 29 Aug 2023 18:11:32 GMT
- Title: Efficient Ray Sampling for Radiance Fields Reconstruction
- Authors: Shilei Sun, Ming Liu, Zhongyi Fan, Yuxue Liu, Chengwei Lv, Liquan
Dong, Lingqin Kong (Beijing Institute of Technology, China)
- Abstract summary: The ray sampling strategy profoundly impacts network convergence.
We propose a novel ray sampling approach for neural radiance fields.
Our method significantly outperforms state-of-the-art techniques on public benchmark datasets.
- Score: 4.004168836949491
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accelerating neural radiance fields training is of substantial practical
value, as the ray sampling strategy profoundly impacts network convergence.
More efficient ray sampling can thus directly enhance existing NeRF models'
training efficiency. We therefore propose a novel ray sampling approach for
neural radiance fields that improves training efficiency while retaining
photorealistic rendering results. First, we analyze the relationship between
the pixel loss distribution of sampled rays and rendering quality. This reveals
redundancy in the original NeRF's uniform ray sampling. Guided by this finding,
we develop a sampling method leveraging pixel regions and depth boundaries. Our
main idea is to sample fewer rays in training views, yet with each ray more
informative for scene fitting. Sampling probability increases in pixel areas
exhibiting significant color and depth variation, greatly reducing wasteful
rays from other regions without sacrificing precision. Through this method, not
only can the convergence of the network be accelerated, but the spatial
geometry of a scene can also be perceived more accurately. Rendering outputs
are enhanced, especially for texture-complex regions. Experiments demonstrate
that our method significantly outperforms state-of-the-art techniques on public
benchmark datasets.
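The sampling rule described above is simple enough to sketch. The following NumPy fragment is a minimal, hypothetical illustration of variation-guided ray sampling, not the authors' implementation: the gradient-based score, the `eps` floor, and all function names are our own stand-ins for the paper's pixel-loss and depth-boundary criteria.

```python
import numpy as np

def sampling_probabilities(rgb, depth, eps=0.01):
    """Per-pixel ray sampling probabilities from color/depth variation.

    Pixels whose color or depth differs strongly from their neighbours
    receive proportionally more rays; `eps` keeps a floor so smooth
    regions are never starved entirely.
    """
    gy, gx = np.gradient(rgb.mean(axis=-1))      # rgb: (H, W, 3) in [0, 1]
    color_var = np.hypot(gx, gy)
    dy, dx = np.gradient(depth)                  # depth: (H, W)
    depth_var = np.hypot(dx, dy)
    score = (color_var / (color_var.max() + 1e-8)
             + depth_var / (depth_var.max() + 1e-8) + eps)
    return score / score.sum()

def sample_rays(rgb, depth, n_rays, rng=np.random.default_rng(0)):
    """Draw `n_rays` pixel indices with probability proportional to score."""
    p = sampling_probabilities(rgb, depth)
    flat = rng.choice(p.size, size=n_rays, replace=False, p=p.ravel())
    return np.unravel_index(flat, p.shape)       # (rows, cols) of chosen rays

# Toy 64x64 frame: 512 informative rays instead of 4096 uniform ones.
rgb, depth = np.random.rand(64, 64, 3), np.random.rand(64, 64)
rows, cols = sample_rays(rgb, depth, n_rays=512)
```

In a real trainer the score map would be refreshed from per-ray loss statistics as training progresses, rather than computed once from image gradients.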
Related papers
- ProNeRF: Learning Efficient Projection-Aware Ray Sampling for
Fine-Grained Implicit Neural Radiance Fields [27.008124938806944]
We propose ProNeRF, which provides an optimal trade-off between memory footprint (similar to NeRF), speed (faster than HyperReel), and quality (better than K-Planes).
Our ProNeRF yields state-of-the-art metrics, being 15-23x faster with 0.65dB higher PSNR than NeRF and yielding 0.95dB higher PSNR than the best published sampler-based method, HyperReel.
arXiv Detail & Related papers (2023-12-13T13:37:32Z)
- Adaptive Shells for Efficient Neural Radiance Field Rendering [92.18962730460842]
We propose a neural radiance formulation that smoothly transitions between volumetric and surface-based rendering.
Our approach enables efficient rendering at very high fidelity.
We also demonstrate that the extracted envelope enables downstream applications such as animation and simulation.
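As a rough illustration of the shell idea, the sketch below confines ray samples to a band around an estimated surface depth; the uniform placement, the fixed `width`, and all names are our assumptions, not the paper's formulation.

```python
import numpy as np

def shell_samples(t_surface, width, n_samples, t_near=0.0, t_far=1.0):
    """Place ray samples only inside a narrow shell around the surface.

    `t_surface` is a per-ray estimate of the surface depth and `width`
    the shell half-width. Tight shells behave like surface rendering;
    wide shells fall back to volume-style sampling over [t_near, t_far].
    """
    lo = np.clip(t_surface - width, t_near, t_far)
    hi = np.clip(t_surface + width, t_near, t_far)
    u = np.linspace(0.0, 1.0, n_samples)                   # fractions in [0, 1]
    return lo[:, None] + (hi - lo)[:, None] * u[None, :]   # (n_rays, n_samples)

# Example: 4 rays with tight shells need few effective sample locations.
t = shell_samples(t_surface=np.array([0.4, 0.5, 0.6, 0.7]), width=0.05,
                  n_samples=8)
```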
arXiv Detail & Related papers (2023-11-16T18:58:55Z)
- Differentiable Rendering with Reparameterized Volume Sampling [2.717399369766309]
In view synthesis, a neural radiance field approximates the underlying density and radiance fields of a scene from a sparse set of images.
This rendering algorithm is fully differentiable and facilitates gradient-based optimization of the fields.
We propose a simple end-to-end differentiable sampling algorithm based on inverse transform sampling.
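For context, the primitive being reparameterized is ordinary inverse transform sampling of a ray's weight distribution; the plain NumPy sketch below is our illustration of that underlying mechanism and, unlike the paper's estimator, is not itself differentiable.

```python
import numpy as np

def inverse_transform_samples(bin_edges, weights, u):
    """Draw sample depths along a ray by inverting the weight CDF.

    `bin_edges` has length K+1, `weights` length K, and `u` holds
    uniform draws in [0, 1). Depths land preferentially in bins with
    large weight, i.e. where the ray distribution has most of its mass.
    """
    pdf = weights / weights.sum()
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])
    idx = np.clip(np.searchsorted(cdf, u, side="right") - 1,
                  0, len(weights) - 1)
    # Linear interpolation inside the selected bin.
    frac = (u - cdf[idx]) / np.maximum(cdf[idx + 1] - cdf[idx], 1e-12)
    return bin_edges[idx] + frac * (bin_edges[idx + 1] - bin_edges[idx])

edges = np.linspace(0.0, 1.0, 9)          # 8 bins along the ray
w = np.array([0.01, 0.02, 0.05, 0.5, 0.3, 0.1, 0.01, 0.01])
t = inverse_transform_samples(edges, w, np.random.default_rng(0).random(16))
```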
arXiv Detail & Related papers (2023-02-21T19:56:50Z)
- AdaNeRF: Adaptive Sampling for Real-time Rendering of Neural Radiance Fields [8.214695794896127]
Novel view synthesis has recently been revolutionized by learning neural radiance fields directly from sparse observations.
However, rendering images with this new paradigm is slow, because an accurate quadrature of the volume rendering equation requires a large number of samples for each ray.
We propose a novel dual-network architecture that takes an orthogonal direction, learning how to best reduce the number of required sample points.
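A toy version of such a dual-network split might look as follows; the layer sizes, the top-k selection (standing in for the paper's learned sparsity scheme), and all names are our illustrative assumptions, not the AdaNeRF architecture.

```python
import torch
import torch.nn as nn

class DualSampler(nn.Module):
    """Toy dual-network layout: a cheap sampling net scores candidate depths
    per ray, and the costly shading net runs only on the retained depths."""

    def __init__(self, n_candidates=32, feat=64):
        super().__init__()
        self.sample_net = nn.Sequential(          # (origin, dir) -> scores
            nn.Linear(6, feat), nn.ReLU(), nn.Linear(feat, n_candidates))
        self.shade_net = nn.Sequential(           # 3D point -> (rgb, sigma)
            nn.Linear(3, feat), nn.ReLU(), nn.Linear(feat, 4))

    def forward(self, rays_o, rays_d, t_candidates, keep=8):
        scores = self.sample_net(torch.cat([rays_o, rays_d], dim=-1))
        top = scores.topk(keep, dim=-1).indices.sort(dim=-1).values
        t = t_candidates.gather(-1, top)          # (n_rays, keep), sorted
        pts = rays_o[:, None] + t[..., None] * rays_d[:, None]
        return self.shade_net(pts)                # (n_rays, keep, 4)

rays_o, rays_d = torch.zeros(4, 3), torch.randn(4, 3)
t_cand = torch.linspace(0.0, 1.0, 32).repeat(4, 1)
out = DualSampler()(rays_o, rays_d, t_cand, keep=8)
```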
arXiv Detail & Related papers (2022-07-21T05:59:13Z)
- R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis [76.07010495581535]
Rendering a single pixel requires querying the Neural Radiance Field network hundreds of times.
NeLF presents a more straightforward representation than NeRF for novel view synthesis.
We show the key to successfully learning a deep NeLF network is to have sufficient data.
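The distillation recipe implied here fits in a few lines; below is a hedged sketch in which `teacher_render`, the plain MLP, and the random ray parameterization are placeholders for R2L's pretrained NeRF teacher, residual MLP, and ray encoding.

```python
import torch
import torch.nn as nn

nelf = nn.Sequential(              # whole ray -> one RGB: a single query per
    nn.Linear(6, 256), nn.ReLU(),  # pixel instead of hundreds
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3), nn.Sigmoid())

def teacher_render(rays):                # stand-in for a pretrained NeRF used
    return torch.rand(rays.shape[0], 3)  # to synthesize abundant pseudo data

opt = torch.optim.Adam(nelf.parameters(), lr=1e-4)
for step in range(100):
    rays = torch.randn(1024, 6)          # (origin, direction) per ray
    loss = nn.functional.mse_loss(nelf(rays), teacher_render(rays))
    opt.zero_grad(); loss.backward(); opt.step()
```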
arXiv Detail & Related papers (2022-03-31T17:57:05Z)
- DDNeRF: Depth Distribution Neural Radiance Fields [12.283891012446647]
Depth distribution neural radiance field (DDNeRF) is a new method that significantly increases sampling efficiency along rays during training.
We train a coarse model to predict the internal distribution of the transparency of an input volume in addition to the volume's total density.
This finer distribution then guides the sampling procedure of the fine model.
arXiv Detail & Related papers (2022-03-30T19:21:07Z)
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and is not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
- Photon-Driven Neural Path Guiding [102.12596782286607]
We present a novel neural path guiding approach that can reconstruct high-quality sampling distributions for path guiding from a sparse set of samples.
We leverage photons traced from light sources as the input for sampling density reconstruction, which is highly effective for challenging scenes with strong global illumination.
Our approach achieves significantly better rendering results of testing scenes than previous state-of-the-art path guiding methods.
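To make the guiding idea concrete, here is a small kernel-density sketch (our stand-in for the paper's neural reconstruction, with all names hypothetical): traced photons vote for incoming directions, and the smoothed density serves as an importance distribution for path guiding.

```python
import numpy as np

def guiding_pdf(photon_dirs, query_dirs, bandwidth=0.2):
    """Reconstruct a directional sampling density from traced photons.

    Photons arriving at a shading point indicate where light actually
    comes from; a kernel density estimate over their unit directions
    gives a discrete guiding distribution over `query_dirs`.
    """
    cos = query_dirs @ photon_dirs.T               # (Q, P) cosine similarities
    kernel = np.exp((cos - 1.0) / bandwidth**2)    # peaked where cos = 1
    density = kernel.sum(axis=1)
    return density / density.sum()

rng = np.random.default_rng(0)
photons = rng.normal(size=(200, 3))
photons /= np.linalg.norm(photons, axis=1, keepdims=True)
queries = rng.normal(size=(32, 3))
queries /= np.linalg.norm(queries, axis=1, keepdims=True)
p = guiding_pdf(photons, queries)                  # discrete guiding pmf
```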
arXiv Detail & Related papers (2020-10-05T04:54:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and accepts no responsibility for any consequences arising from its use.