FisherRF: Active View Selection and Uncertainty Quantification for
Radiance Fields using Fisher Information
- URL: http://arxiv.org/abs/2311.17874v1
- Date: Wed, 29 Nov 2023 18:20:16 GMT
- Title: FisherRF: Active View Selection and Uncertainty Quantification for
Radiance Fields using Fisher Information
- Authors: Wen Jiang, Boshu Lei, Kostas Daniilidis
- Abstract summary: This study addresses the problem of active view selection and uncertainty quantification within the domain of Radiance Fields.
Neural Radiance Fields (NeRF) have greatly advanced image rendering and reconstruction, but the limited availability of 2D images introduces uncertainty.
By leveraging Fisher Information, we efficiently quantify observed information within Radiance Fields without ground truth data.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This study addresses the challenging problem of active view selection and
uncertainty quantification within the domain of Radiance Fields. Neural
Radiance Fields (NeRF) have greatly advanced image rendering and
reconstruction, but the limited availability of 2D images poses uncertainties
stemming from occlusions, depth ambiguities, and imaging errors. Efficiently
selecting informative views becomes crucial, and quantifying NeRF model
uncertainty presents intricate challenges. Existing approaches either depend on
model architecture or are based on assumptions regarding density distributions
that are not generally applicable. By leveraging Fisher Information, we
efficiently quantify observed information within Radiance Fields without ground
truth data. This can be used for the next best view selection and pixel-wise
uncertainty quantification. Our method overcomes existing limitations on model
architecture and effectiveness, achieving state-of-the-art results in both view
selection and uncertainty quantification, demonstrating its potential to
advance the field of Radiance Fields. Our method with the 3D Gaussian Splatting
backend can perform view selection at 70 fps.
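As a sketch of how Fisher Information can drive view selection without ground truth: under a Gaussian pixel likelihood, the Fisher matrix reduces to J^T J for the rendering Jacobian J, so its diagonal can be accumulated from per-pixel gradients of the model's own renderings. The PyTorch sketch below illustrates this idea; `render(params, view)`, the pixel subsampling, and the `lam` regularizer are hypothetical stand-ins, not the paper's actual implementation.

```python
import torch

def diag_fisher(render, params, view, n_pixels=256):
    """Diagonal Fisher approximation: sum of squared Jacobian entries of
    the rendered pixels w.r.t. the model parameters. No ground-truth image
    is needed, since the Fisher depends only on the model's own gradients.
    `params` must be a tensor with requires_grad=True."""
    pred = render(params, view).reshape(-1)        # flattened rendering
    idx = torch.randperm(pred.numel())[:n_pixels]  # subsample pixels for speed
    fisher = torch.zeros_like(params)
    for i in idx:
        (g,) = torch.autograd.grad(pred[i], params, retain_graph=True)
        fisher += g ** 2                           # diagonal of J^T J
    return fisher

def next_best_view(render, params, candidates, fisher_train, lam=1e-6):
    """Score each candidate view by the information it would add relative
    to what the training views already observed (fisher_train is the
    accumulated diagonal Fisher of the training views); return the best
    candidate's index. The diagonal ratio is a simplification of
    trace(H_cand (H_train + lam*I)^-1)."""
    scores = [
        (diag_fisher(render, params, v) / (fisher_train + lam)).sum().item()
        for v in candidates
    ]
    return max(range(len(candidates)), key=scores.__getitem__)
```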
Related papers
- Manifold Sampling for Differentiable Uncertainty in Radiance Fields [82.17927517146929]
We propose a versatile approach for learning Gaussian radiance fields with explicit and fine-grained uncertainty estimates.
We demonstrate state-of-the-art performance on next-best-view planning tasks.
arXiv Detail & Related papers (2024-09-19T11:22:20Z)
- Sparse-DeRF: Deblurred Neural Radiance Fields from Sparse View [17.214047499850487]
This paper focuses on constructing deblurred neural radiance fields (DeRF) from sparse views for more pragmatic real-world scenarios.
Sparse-DeRF successfully regularizes the complicated joint optimization, mitigating overfitting artifacts and enhancing the quality of the radiance fields.
We demonstrate the effectiveness of the Sparse-DeRF with extensive quantitative and qualitative experimental results by training DeRF from 2-view, 4-view, and 6-view blurry images.
arXiv Detail & Related papers (2024-07-09T07:36:54Z)
- ExtraNeRF: Visibility-Aware View Extrapolation of Neural Radiance Fields with Diffusion Models [60.48305533224092]
ExtraNeRF is a novel method for extrapolating the range of views handled by a Neural Radiance Field (NeRF).
Our main idea is to leverage NeRFs to model scene-specific, fine-grained details, while capitalizing on diffusion models to extrapolate beyond our observed data.
arXiv Detail & Related papers (2024-06-10T09:44:06Z)
- Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling [70.34875558830241]
We present a way of learning a spatio-temporal (4D) embedding, based on semantic gears, to allow for stratified modeling of the dynamic regions of the scene.
At the same time, almost for free, our approach enables free-viewpoint tracking of objects of interest, a functionality not yet achieved by existing NeRF-based methods.
arXiv Detail & Related papers (2024-06-06T03:37:39Z)
- NeISF: Neural Incident Stokes Field for Geometry and Material Estimation [50.588983686271284]
Multi-view inverse rendering is the problem of estimating scene parameters such as shapes, materials, or illumination from a sequence of images captured under different viewpoints.
We propose Neural Incident Stokes Fields (NeISF), a multi-view inverse rendering framework that reduces ambiguities using polarization cues.
arXiv Detail & Related papers (2023-11-22T06:28:30Z)
- Estimating 3D Uncertainty Field: Quantifying Uncertainty for Neural Radiance Fields [25.300284510832974]
We propose a novel approach to estimate a 3D Uncertainty Field based on the learned incomplete scene geometry.
By considering the accumulated transmittance along each camera ray, our Uncertainty Field infers 2D pixel-wise uncertainty.
Our experiments demonstrate that our approach is the only one that can explicitly reason about high uncertainty both in unseen 3D regions and in the 2D pixels rendered from them.
arXiv Detail & Related papers (2023-11-03T09:47:53Z)
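The transmittance-based accumulation described in the entry above maps directly onto the standard NeRF volume-rendering weights. A minimal single-ray PyTorch sketch follows; `sigma_3d` (per-sample 3D uncertainty), `density`, and `deltas` (segment lengths) are hypothetical inputs, not the paper's API.

```python
import torch

def pixel_uncertainty(sigma_3d, density, deltas):
    """Roll per-sample 3D uncertainty along one camera ray into a single
    2D pixel uncertainty using accumulated transmittance, exactly as
    color is composited in NeRF-style volume rendering."""
    alpha = 1.0 - torch.exp(-density * deltas)  # opacity per ray segment
    # Transmittance up to each sample: product of (1 - alpha) over all
    # earlier samples; the leading 1 means the first sample is unoccluded.
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0
    )
    weights = trans * alpha                     # standard NeRF ray weights
    return (weights * sigma_3d).sum()           # composited 2D uncertainty
```

For example, `pixel_uncertainty(torch.rand(64), torch.rand(64), torch.full((64,), 0.01))` returns a scalar uncertainty for a ray with 64 samples.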
- Leveraging Neural Radiance Fields for Uncertainty-Aware Visual Localization [56.95046107046027]
We propose to leverage Neural Radiance Fields (NeRF) to generate training samples for scene coordinate regression.
Despite NeRF's efficiency in rendering, much of the rendered data is polluted by artifacts or contains only minimal information gain.
arXiv Detail & Related papers (2023-10-10T20:11:13Z)
- Density-aware NeRF Ensembles: Quantifying Predictive Uncertainty in Neural Radiance Fields [7.380217868660371]
We show that ensembling effectively quantifies model uncertainty in Neural Radiance Fields (NeRFs).
We demonstrate that NeRF uncertainty can be utilised for next-best view selection and model refinement.
arXiv Detail & Related papers (2022-09-19T02:28:33Z)
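A minimal sketch of the ensembling idea in the entry above: render the same view with several independently trained NeRFs and take the cross-member variance as predictive uncertainty. The density-aware weighting the paper adds is omitted here, and all names are illustrative.

```python
import torch

def ensemble_uncertainty(renders):
    """Given a list of (H, W, 3) renders of one view from M independently
    trained NeRFs, return the mean image and a per-pixel uncertainty map
    computed as the variance across ensemble members."""
    stack = torch.stack(renders)           # (M, H, W, 3)
    mean = stack.mean(dim=0)               # ensemble prediction
    var = stack.var(dim=0).mean(dim=-1)    # (H, W), averaged over RGB
    return mean, var
```

A high-variance pixel marks a region the ensemble members disagree on, which is what makes such maps usable for next-best-view selection and model refinement.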
- NeRF++: Analyzing and Improving Neural Radiance Fields [117.73411181186088]
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings.
NeRF fits multi-layer perceptrons representing view-invariant opacity and view-dependent color volumes to a set of training images.
We address a parametrization issue involved in applying NeRF to 360° captures of objects within large-scale, unbounded 3D scenes.
arXiv Detail & Related papers (2020-10-15T03:24:14Z)
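The parametrization issue above concerns unbounded backgrounds, which NeRF++ handles with an inverted-sphere reparametrization. A minimal sketch of that mapping follows, as we read the method, not the authors' code.

```python
import torch

def inverted_sphere(x):
    """Map a point outside the unit sphere at radius r to the bounded
    4D coordinate (x/r, y/r, z/r, 1/r), so the infinitely far background
    is compressed into a finite volume for the outer NeRF."""
    r = x.norm(dim=-1, keepdim=True)  # distance from the scene origin
    return torch.cat([x / r, 1.0 / r], dim=-1)
```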