FisherRF: Active View Selection and Uncertainty Quantification for
Radiance Fields using Fisher Information
- URL: http://arxiv.org/abs/2311.17874v1
- Date: Wed, 29 Nov 2023 18:20:16 GMT
- Title: FisherRF: Active View Selection and Uncertainty Quantification for
Radiance Fields using Fisher Information
- Authors: Wen Jiang, Boshu Lei, Kostas Daniilidis
- Abstract summary: This study addresses the problem of active view selection and uncertainty quantification within the domain of Radiance Fields.
NeRFs have greatly advanced image rendering and reconstruction, but the limited availability of 2D images poses uncertainties.
By leveraging Fisher Information, we efficiently quantify observed information within Radiance Fields without ground truth data.
- Score: 32.66184501415286
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This study addresses the challenging problem of active view selection and
uncertainty quantification within the domain of Radiance Fields. Neural
Radiance Fields (NeRF) have greatly advanced image rendering and
reconstruction, but the limited availability of 2D images poses uncertainties
stemming from occlusions, depth ambiguities, and imaging errors. Efficiently
selecting informative views becomes crucial, and quantifying NeRF model
uncertainty presents intricate challenges. Existing approaches either depend on
model architecture or are based on assumptions regarding density distributions
that are not generally applicable. By leveraging Fisher Information, we
efficiently quantify observed information within Radiance Fields without ground
truth data. This can be used for the next best view selection and pixel-wise
uncertainty quantification. Our method overcomes existing limitations on model
architecture and effectiveness, achieving state-of-the-art results in both view
selection and uncertainty quantification, demonstrating its potential to
advance the field of Radiance Fields. Our method with the 3D Gaussian Splatting
backend can perform view selection at 70 fps.
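The core mechanism described above, scoring candidate views by the Fisher Information their pixels would contribute (computed from gradients of the rendering with respect to the model parameters, with no ground-truth images required), can be illustrated with a short sketch. The following Python/PyTorch snippet is a minimal, illustrative sketch under a diagonal (Gauss-Newton) approximation of the Fisher Information; `render_fn`, `params`, and the camera lists are hypothetical placeholders, and this is not the authors' released implementation.

```python
import torch

def diagonal_fisher(render_fn, params, cameras):
    """Diagonal (Gauss-Newton) Fisher approximation: accumulate squared gradients
    of every rendered pixel with respect to the model parameters.
    Hypothetical interface: render_fn(params, camera) -> image tensor."""
    fisher = torch.zeros_like(params)
    for cam in cameras:
        image = render_fn(params, cam)               # differentiable rendering
        for value in image.flatten():                # naive per-pixel loop, for clarity only
            (grad,) = torch.autograd.grad(value, params, retain_graph=True)
            fisher += grad.pow(2)
    return fisher

def select_next_view(render_fn, params, train_cams, candidate_cams, damping=1e-1):
    """Pick the candidate view with the largest expected information gain,
    i.e. the view whose Fisher diagonal is largest relative to the information
    already observed from the training views."""
    observed = diagonal_fisher(render_fn, params, train_cams) + damping
    best_cam, best_score = None, float("-inf")
    for cam in candidate_cams:
        candidate = diagonal_fisher(render_fn, params, [cam])
        # ~ tr(H_candidate H_observed^-1) under the diagonal approximation
        score = (candidate / observed).sum().item()
        if score > best_score:
            best_cam, best_score = cam, score
    return best_cam
```

In practice the per-pixel gradient loop would be batched; the 70 fps figure quoted above comes from the efficient 3D Gaussian Splatting backend, not from a naive loop like this.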
Related papers
- Manifold Sampling for Differentiable Uncertainty in Radiance Fields [82.17927517146929]
We propose a versatile approach for learning Gaussian radiance fields with explicit and fine-grained uncertainty estimates.
We demonstrate state-of-the-art performance on next-best-view planning tasks.
arXiv Detail & Related papers (2024-09-19T11:22:20Z) - ExtraNeRF: Visibility-Aware View Extrapolation of Neural Radiance Fields with Diffusion Models [60.48305533224092]
ExtraNeRF is a novel method for extrapolating the range of views handled by a Neural Radiance Field (NeRF)
Our main idea is to leverage NeRFs to model scene-specific, fine-grained details, while capitalizing on diffusion models to extrapolate beyond our observed data.
arXiv Detail & Related papers (2024-06-10T09:44:06Z) - Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling [70.34875558830241]
We present a way of learning a spatio-temporal (4D) embedding, based on semantic gears, to allow for stratified modeling of dynamic regions of the scene.
At the same time, almost for free, our approach enables free-viewpoint tracking of objects of interest - a functionality not yet achieved by existing NeRF-based methods.
arXiv Detail & Related papers (2024-06-06T03:37:39Z) - Simple-RF: Regularizing Sparse Input Radiance Fields with Simpler Solutions [5.699788926464751]
Neural Radiance Fields (NeRF) show impressive performance in photo-realistic free-view rendering of scenes.
Recent improvements on NeRF, such as TensoRF and ZipNeRF, employ explicit models for faster optimization and rendering.
We show that supervising the depth estimated by a radiance field helps train it effectively with fewer views.
arXiv Detail & Related papers (2024-04-29T18:00:25Z) - ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Field [52.09661042881063]
We propose an approach that models the provenance of each point of a NeRF -- i.e., the locations where it is likely visible -- as a stochastic field.
We show that modeling per-point provenance during NeRF optimization enriches the model with information that leads to improvements in novel view synthesis and uncertainty estimation.
arXiv Detail & Related papers (2024-01-16T06:19:18Z) - Leveraging Neural Radiance Fields for Uncertainty-Aware Visual
Localization [56.95046107046027]
We propose to leverage Neural Radiance Fields (NeRF) to generate training samples for scene coordinate regression.
Despite NeRF's efficiency in rendering, much of the rendered data is polluted by artifacts or contains only minimal information gain.
arXiv Detail & Related papers (2023-10-10T20:11:13Z) - PANet: Perspective-Aware Network with Dynamic Receptive Fields and
Self-Distilling Supervision for Crowd Counting [63.84828478688975]
We propose a novel perspective-aware approach called PANet to address the perspective problem.
Based on the observation that the size of the objects varies greatly in one image due to the perspective effect, we propose the dynamic receptive fields (DRF) framework.
The framework is able to adjust the receptive field by the dilated convolution parameters according to the input image, which helps the model to extract more discriminative features for each local region.
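One common way to realize such input-dependent receptive fields is to run parallel convolutions with different dilation rates and gate their outputs with weights predicted from the input. The PyTorch sketch below illustrates this idea; it is an assumed interpretation, not the PANet implementation, and the module name, channel layout, and dilation rates are placeholders.

```python
import torch
import torch.nn as nn

class DynamicReceptiveField(nn.Module):
    """Illustrative sketch: parallel dilated 3x3 convolutions whose outputs are
    gated by per-branch weights predicted from a global descriptor of the input."""
    def __init__(self, channels, dilations=(1, 2, 3, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.gate = nn.Sequential(                 # predicts one weight per dilation branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, len(dilations), kernel_size=1),
            nn.Softmax(dim=1),
        )

    def forward(self, x):
        weights = self.gate(x)                     # (N, num_branches, 1, 1)
        outs = torch.stack([branch(x) for branch in self.branches], dim=1)  # (N, B, C, H, W)
        return (weights.unsqueeze(2) * outs).sum(dim=1)
```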
arXiv Detail & Related papers (2021-10-31T04:43:05Z) - Image Completion via Inference in Deep Generative Models [16.99337751292915]
We consider image completion from the perspective of amortized inference in an image generative model.
We demonstrate superior sample quality and diversity compared to prior art on the CIFAR-10 and FFHQ-256 datasets.
arXiv Detail & Related papers (2021-02-24T02:59:43Z)