ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Process
- URL: http://arxiv.org/abs/2401.08140v2
- Date: Thu, 18 Jan 2024 07:01:15 GMT
- Title: ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Process
- Authors: Kiyohiro Nakayama, Mikaela Angelina Uy, Yang You, Ke Li, Leonidas
Guibas
- Abstract summary: Neural radiance fields (NeRFs) have gained popularity across various applications.
They face challenges in the sparse view setting, lacking sufficient constraints from volume rendering.
We introduce ProvNeRF, a model that enriches a traditional NeRF representation by incorporating per-point provenance.
- Score: 12.534255228953741
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural radiance fields (NeRFs) have gained popularity across various
applications. However, they face challenges in the sparse view setting, lacking
sufficient constraints from volume rendering. Reconstructing and understanding
a 3D scene from sparse and unconstrained cameras is a long-standing problem in
classical computer vision with diverse applications. While recent works have
explored NeRFs in sparse, unconstrained view scenarios, their focus has been
primarily on enhancing reconstruction and novel view synthesis. Our approach
takes a broader perspective by posing the question: "from where has each point
been seen?" -- which gates how well we can understand and reconstruct it. In
other words, we aim to determine the origin or provenance of each 3D point and
its associated information under sparse, unconstrained views. We introduce
ProvNeRF, a model that enriches a traditional NeRF representation by
incorporating per-point provenance, modeling likely source locations for each
point. We achieve this by extending implicit maximum likelihood estimation
(IMLE) for stochastic processes. Notably, our method is compatible with any
pre-trained NeRF model and the associated training camera poses. We demonstrate
that modeling per-point provenance offers several advantages, including
uncertainty estimation, criteria-based view selection, and improved novel view
synthesis, compared to state-of-the-art methods. Please visit our project page
at https://provnerf.github.io
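To make the IMLE extension concrete, below is a minimal training sketch of the implicit maximum likelihood estimation objective the abstract references, simplified to a single observed source location per 3D point; the `ProvenanceField` module and all names are hypothetical illustrations, not the authors' implementation.

```python
# Minimal IMLE sketch for per-point provenance (hypothetical names).
# For each 3D point with a known source (e.g., a training camera that saw it),
# IMLE pulls the nearest of m latent-conditioned samples toward the observation.
import torch
import torch.nn as nn

class ProvenanceField(nn.Module):
    """Maps (3D point, latent code) -> a candidate source location."""
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # predicted source location
        )

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=-1))

def imle_loss(model, points, observed_sources, m=10, latent_dim=64):
    # points: (B, 3); observed_sources: (B, 3), one source per point
    B = points.shape[0]
    z = torch.randn(B, m, latent_dim)               # m latent draws per point
    x = points.unsqueeze(1).expand(B, m, 3)
    samples = model(x, z)                           # (B, m, 3) candidates
    d = (samples - observed_sources.unsqueeze(1)).norm(dim=-1)  # (B, m)
    return d.min(dim=1).values.mean()               # train only nearest sample
```

The key IMLE property is visible in the last line: only the latent sample nearest to each observation receives gradient, so the learned distribution covers the data rather than collapsing to a single mode.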
Related papers
- OPONeRF: One-Point-One NeRF for Robust Neural Rendering [70.56874833759241]
We propose a One-Point-One NeRF (OPONeRF) framework for robust scene rendering.
Small but unpredictable perturbations such as object movements, light changes and data contaminations broadly exist in real-life 3D scenes.
Experimental results show that our OPONeRF outperforms state-of-the-art NeRFs on various evaluation metrics.
arXiv Detail & Related papers (2024-09-30T07:49:30Z)
- IOVS4NeRF: Incremental Optimal View Selection for Large-Scale NeRFs [3.9248546555042365]
This paper introduces an innovative incremental optimal view selection framework, IOVS4NeRF, designed to model a 3D scene within a restricted input budget.
By selecting views that offer the highest information gain, the quality of novel view synthesis can be enhanced with minimal additional resources.
arXiv Detail & Related papers (2024-07-26T09:11:25Z)
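A hedged sketch of the incremental loop such a framework implies: greedily add the candidate view with the highest information gain until the input budget is exhausted. The `info_gain` and `update_model` callables are placeholders for the paper's uncertainty scoring and NeRF update, not its actual API.

```python
# Greedy budgeted view selection: repeatedly pick the candidate with the
# highest (hypothetical) information gain and update the model incrementally.
def select_views(candidates, budget, info_gain, update_model):
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < budget:
        best = max(remaining, key=info_gain)   # highest expected gain first
        selected.append(best)
        remaining.remove(best)
        update_model(best)                     # incremental NeRF update
    return selected
```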
- SG-NeRF: Neural Surface Reconstruction with Scene Graph Optimization [16.460851701725392]
We present a novel approach that optimizes radiance fields with scene graphs to mitigate the influence of outlier poses.
Our method incorporates an adaptive inlier-outlier confidence estimation scheme based on scene graphs.
We also introduce an effective intersection-over-union (IoU) loss to optimize the camera pose and surface geometry.
arXiv Detail & Related papers (2024-07-17T15:50:17Z)
- Invertible Neural Warp for NeRF [29.00183106905031]
This paper tackles the simultaneous optimization of camera poses and Neural Radiance Fields (NeRF).
We propose a novel overparameterized representation that models camera poses as learnable rigid warp functions.
We present results on synthetic and real-world datasets, and demonstrate that our approach outperforms existing baselines in terms of pose estimation and high-fidelity reconstruction.
arXiv Detail & Related papers (2024-07-17T07:14:08Z)
- InterNeRF: Scaling Radiance Fields via Parameter Interpolation [36.014610797521605]
We propose InterNeRF, a novel architecture for rendering a target view using a subset of the model's parameters.
We demonstrate significant improvements in multi-room scenes while remaining competitive on standard benchmarks.
arXiv Detail & Related papers (2024-06-17T16:55:22Z)
- Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models [54.132297393662654]
We introduce a hybrid method that fine-tunes cutting-edge diffusion models by optimizing reward models through RL.
We demonstrate the capability of our approach to outperform the best designs in offline data, leveraging the extrapolation capabilities of reward models.
arXiv Detail & Related papers (2024-05-30T03:57:29Z)
- 3D Visibility-aware Generalizable Neural Radiance Fields for Interacting Hands [51.305421495638434]
Neural radiance fields (NeRFs) are promising 3D representations for scenes, objects, and humans.
This paper proposes a generalizable visibility-aware NeRF framework for interacting hands.
Experiments on the InterHand2.6M dataset demonstrate that our proposed VA-NeRF outperforms conventional NeRFs significantly.
arXiv Detail & Related papers (2024-01-02T00:42:06Z)
- PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields [54.8553158441296]
We propose PNeRFLoc, a novel visual localization framework based on a unified point-based representation.
On the one hand, PNeRFLoc supports the initial pose estimation by matching 2D and 3D feature points.
On the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization.
arXiv Detail & Related papers (2023-12-17T08:30:00Z)
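The rendering-based pose refinement mentioned in the PNeRFLoc summary is, in general form, gradient descent on photometric error with respect to a pose update. A minimal sketch, assuming a differentiable `render` function (hypothetical) and an axis-angle pose increment:

```python
# Render-and-compare refinement: starting from a matched-feature pose estimate,
# minimize photometric error between a differentiable render and the query
# image w.r.t. a small pose update. `render` is a placeholder, not the paper's API.
import torch

def refine_pose(render, query_image, pose_init, steps=100, lr=1e-3):
    # pose update parameterized as a 6-vector (axis-angle rotation + translation)
    delta = torch.zeros(6, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        rendered = render(pose_init, delta)           # differentiable renderer
        loss = (rendered - query_image).abs().mean()  # photometric L1 error
        loss.backward()
        opt.step()
    return pose_init, delta.detach()
```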
- LU-NeRF: Scene and Pose Estimation by Synchronizing Local Unposed NeRFs [56.050550636941836]
A critical obstacle preventing NeRF models from being deployed broadly in the wild is their reliance on accurate camera poses.
We propose a novel approach, LU-NeRF, that jointly estimates camera poses and neural fields with relaxed assumptions on pose configuration.
We show our LU-NeRF pipeline outperforms prior attempts at unposed NeRF without making restrictive assumptions on the pose prior.
arXiv Detail & Related papers (2023-06-08T17:56:22Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
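A sketch of the decoupling CLONeR's summary describes: one MLP for occupancy (LiDAR-supervised in the paper) and a separate MLP for color (camera-supervised). The wiring below is illustrative only, not the authors' architecture:

```python
# Decoupled geometry/appearance heads: occupancy and color are learned by
# separate MLPs so each can be supervised by the appropriate sensor.
import torch.nn as nn

def mlp(d_in, d_out, hidden=256, depth=4):
    layers, d = [], d_in
    for _ in range(depth - 1):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers += [nn.Linear(d, d_out)]
    return nn.Sequential(*layers)

occupancy_mlp = mlp(3, 1)      # xyz -> occupancy logit (LiDAR-supervised)
color_mlp     = mlp(3 + 3, 3)  # xyz + view dir -> RGB (camera-supervised)
```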
- NeRF, meet differential geometry! [10.269997499911668]
We show how differential geometry can provide regularization tools for robustly training NeRF-like models.
We show how these tools yield a direct mathematical formalism of previously proposed NeRF variants aimed at improving the performance in challenging conditions.
arXiv Detail & Related papers (2022-06-29T22:45:34Z)
- Stochastic Neural Radiance Fields: Quantifying Uncertainty in Implicit 3D Representations [19.6329380710514]
Uncertainty quantification is a long-standing problem in Machine Learning.
We propose Stochastic Neural Radiance Fields (S-NeRF), a generalization of standard NeRF that learns a probability distribution over all the possible radiance fields modeling the scene.
S-NeRF is able to provide more reliable predictions and confidence values than generic approaches previously proposed for uncertainty estimation in other domains.
arXiv Detail & Related papers (2021-09-05T16:56:43Z)
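The general recipe behind such distribution-over-fields uncertainty estimates can be sketched as Monte Carlo rendering: sample several field realizations, render each, and read per-pixel variance as confidence. `sample_field` and `render` are placeholders, not the paper's API:

```python
# Monte Carlo uncertainty sketch: render several sampled radiance fields and
# use the per-pixel variance across renders as a confidence map.
import torch

def render_with_uncertainty(sample_field, render, pose, n_samples=16):
    renders = torch.stack([render(sample_field(), pose)
                           for _ in range(n_samples)])   # (n, H, W, 3)
    mean = renders.mean(dim=0)                           # expected image
    var = renders.var(dim=0).mean(dim=-1)                # per-pixel variance
    return mean, var
```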
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.