Where and How: Mitigating Confusion in Neural Radiance Fields from
Sparse Inputs
- URL: http://arxiv.org/abs/2308.02908v1
- Date: Sat, 5 Aug 2023 15:59:15 GMT
- Title: Where and How: Mitigating Confusion in Neural Radiance Fields from
Sparse Inputs
- Authors: Yanqi Bao, Yuxin Li, Jing Huo, Tianyu Ding, Xinyue Liang, Wenbin Li
and Yang Gao
- Abstract summary: We present a novel learning framework, WaH-NeRF, which effectively mitigates confusion by tackling the following challenges.
We propose a Semi-Supervised NeRF learning Paradigm based on pose perturbation and a Pixel-Patch Correspondence Loss to alleviate prediction confusion.
- Score: 22.9859132310377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields from Sparse inputs (NeRF-S) have shown great potential
in synthesizing novel views with a limited number of observed viewpoints.
However, due to the inherent limitations of sparse inputs and the gap between
non-adjacent views, rendering results often suffer from over-fitting and foggy
surfaces, a phenomenon we refer to as "CONFUSION" during volume rendering. In
this paper, we analyze the root cause of this confusion and attribute it to two
fundamental questions: "WHERE" and "HOW". To this end, we present a novel
learning framework, WaH-NeRF, which effectively mitigates confusion by tackling
the following challenges: (i)"WHERE" to Sample? in NeRF-S -- we introduce a
Deformable Sampling strategy and a Weight-based Mutual Information Loss to
address sample-position confusion arising from the limited number of
viewpoints; and (ii) "HOW" to Predict? in NeRF-S -- we propose a
Semi-Supervised NeRF learning Paradigm based on pose perturbation and a
Pixel-Patch Correspondence Loss to alleviate prediction confusion caused by the
disparity between training and testing viewpoints. By integrating our proposed
modules and loss functions, WaH-NeRF outperforms previous methods under the
NeRF-S setting. Code is available at https://github.com/bbbbby-99/WaH-NeRF.
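To make the pose-perturbation idea concrete, here is a minimal sketch of how a pseudo-view pose might be generated by jittering a known training pose. This illustrates the general idea only, not the authors' exact scheme; the perturbation magnitudes are assumptions.

```python
# Illustrative sketch (not the authors' exact scheme): generate a pseudo-view
# pose by slightly perturbing a known training pose, as in the semi-supervised
# paradigm based on pose perturbation.
import torch

def perturb_pose(c2w: torch.Tensor,
                 rot_std: float = 0.02,
                 trans_std: float = 0.02) -> torch.Tensor:
    """Apply a small random rotation/translation to a 4x4 camera-to-world pose.

    rot_std and trans_std are hypothetical magnitudes, not values from the paper.
    """
    # Random axis-angle rotation, converted to a matrix via Rodrigues' formula.
    axis_angle = torch.randn(3) * rot_std
    theta = axis_angle.norm()
    kx, ky, kz = (axis_angle / (theta + 1e-8)).tolist()
    K = torch.tensor([[0.0, -kz,  ky],
                      [kz,  0.0, -kx],
                      [-ky, kx,  0.0]])
    R = torch.eye(3) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

    out = c2w.clone()
    out[:3, :3] = R @ c2w[:3, :3]              # perturb orientation
    out[:3, 3] += torch.randn(3) * trans_std   # perturb position
    return out
```

Pseudo-views rendered from such perturbed poses can then be supervised against nearby real views, e.g. with the Pixel-Patch Correspondence Loss described in the abstract.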
Related papers
- Few-shot NeRF by Adaptive Rendering Loss Regularization [78.50710219013301]
Novel view synthesis with sparse inputs poses great challenges to Neural Radiance Field (NeRF)
Recent works demonstrate that frequency regularization of positional encoding can achieve promising results for few-shot NeRF.
We propose Adaptive Rendering loss regularization for few-shot NeRF, dubbed AR-NeRF.
arXiv Detail & Related papers (2024-10-23T13:05:26Z) - OPONeRF: One-Point-One NeRF for Robust Neural Rendering [70.56874833759241]
We propose a One-Point-One NeRF (OPONeRF) framework for robust scene rendering.
Small but unpredictable perturbations such as object movements, light changes and data contaminations broadly exist in real-life 3D scenes.
Experimental results show that our OPONeRF outperforms state-of-the-art NeRFs on various evaluation metrics.
arXiv Detail & Related papers (2024-09-30T07:49:30Z) - NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry
Scaffolds [60.1382112938132]
We present NeRFVS, a novel neural radiance fields (NeRF) based method to enable free navigation in a room.
NeRF achieves impressive performance in rendering images for novel views similar to the input views, but suffers on novel views that differ significantly from the training views.
arXiv Detail & Related papers (2023-04-13T06:40:08Z) - FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency
Regularization [32.1581416980828]
We present Frequency regularized NeRF (FreeNeRF), a surprisingly simple baseline that outperforms previous methods.
We analyze the key challenges in few-shot neural rendering and find that frequency plays an important role in NeRF's training.
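FreeNeRF's core idea is a schedule that reveals positional-encoding frequencies gradually, so the model fits coarse structure before high-frequency detail. A minimal sketch of such a linear frequency mask, simplified relative to the paper:

```python
# Sketch of FreeNeRF-style frequency regularization: positional-encoding
# frequencies are revealed gradually over training. The linear schedule
# follows the paper's general idea; exact details may differ.
import torch

def freq_mask(num_freqs: int, step: int, total_steps: int) -> torch.Tensor:
    """Per-frequency mask in [0, 1]; low frequencies open first."""
    t = num_freqs * step / total_steps          # how many bands are "open"
    idx = torch.arange(num_freqs, dtype=torch.float32)
    # Fully open bands get 1, the frontier band gets the fractional part.
    return (t - idx).clamp(0.0, 1.0)

def masked_positional_encoding(x: torch.Tensor, num_freqs: int,
                               step: int, total_steps: int) -> torch.Tensor:
    """x: (..., 3) coordinates -> masked sin/cos features."""
    mask = freq_mask(num_freqs, step, total_steps)   # (num_freqs,)
    freqs = 2.0 ** torch.arange(num_freqs)           # (num_freqs,)
    xb = x[..., None] * freqs                        # (..., 3, num_freqs)
    feats = torch.cat([torch.sin(xb), torch.cos(xb)], dim=-1)
    return (feats * mask.repeat(2)).flatten(start_dim=-2)
```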
arXiv Detail & Related papers (2023-03-13T18:59:03Z) - Self-NeRF: A Self-Training Pipeline for Few-Shot Neural Radiance Fields [17.725937326348994]
We propose Self-NeRF, a self-evolved NeRF that iteratively refines the radiance fields with very few input views.
In each iteration, we label unseen views with the predicted colors or warped pixels generated by the model from the preceding iteration.
These expanded pseudo-views are afflicted by imprecision in color and warping artifacts, which degrade the performance of NeRF.
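The iteration described above amounts to a standard self-training loop. A hedged outline, where train_nerf and render_views are caller-supplied placeholders rather than the paper's API:

```python
# Hypothetical outline of the self-training loop described above; the
# train_nerf and render_views callables are placeholders, not the paper's API.
from typing import Callable, List

def self_training(train_nerf: Callable, render_views: Callable,
                  images: List, poses: List, unseen_poses: List,
                  num_rounds: int = 3):
    model = train_nerf(images, poses)          # initial fit on real views
    for _ in range(num_rounds):
        # Label unseen viewpoints with the previous model's predictions.
        pseudo_images = render_views(model, unseen_poses)
        # Retrain on real views plus the (noisy) pseudo-views.
        model = train_nerf(images + pseudo_images, poses + unseen_poses)
    return model
```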
arXiv Detail & Related papers (2023-03-10T08:22:36Z) - Exact-NeRF: An Exploration of a Precise Volumetric Parameterization for
Neural Radiance Fields [16.870604081967866]
This paper contributes the first approach to offer a precise analytical solution to the mip-NeRF approximation.
We show that such an exact formulation Exact-NeRF matches the accuracy of mip-NeRF and furthermore provides a natural extension to more challenging scenarios without further modification.
Our contribution aims both to address the hitherto unexplored issue of frustum approximation in earlier NeRF work and to provide insight into the potential use of analytical solutions in future NeRF extensions.
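For context, the mip-NeRF approximation in question encodes each conical frustum as a Gaussian and relies on the standard closed-form expectation of sinusoids under a Gaussian, which Exact-NeRF replaces with an exact frustum integral:

```latex
% For a 1-D coordinate x ~ N(mu, sigma^2), the expected positional-encoding
% features used by mip-NeRF's integrated positional encoding are:
\mathbb{E}_{x \sim \mathcal{N}(\mu,\sigma^{2})}\!\left[\sin(x)\right]
  = \sin(\mu)\, e^{-\sigma^{2}/2},
\qquad
\mathbb{E}\!\left[\cos(x)\right] = \cos(\mu)\, e^{-\sigma^{2}/2}.
```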
arXiv Detail & Related papers (2022-11-22T13:56:33Z) - ActiveNeRF: Learning where to See with Uncertainty Estimation [36.209200774203005]
Recently, Neural Radiance Fields (NeRF) has shown promising performance on reconstructing 3D scenes and synthesizing novel views from a sparse set of 2D images.
We present a novel learning framework, ActiveNeRF, aiming to model a 3D scene with a constrained input budget.
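A common way to realize such uncertainty estimation, in the spirit of ActiveNeRF (the paper's exact parameterization may differ), is a variance head trained with a Gaussian negative log-likelihood:

```python
# Minimal sketch of per-pixel uncertainty via a Gaussian negative
# log-likelihood; the exact parameterization in ActiveNeRF may differ.
import torch

def gaussian_nll(pred_rgb: torch.Tensor,   # (N, 3) predicted mean color
                 pred_var: torch.Tensor,   # (N, 1) predicted variance > 0
                 gt_rgb: torch.Tensor) -> torch.Tensor:
    # Confident (low-variance) pixels are penalized hard for errors;
    # uncertain pixels less so, at the cost of the log-variance term.
    return (((pred_rgb - gt_rgb) ** 2) / (2 * pred_var)
            + 0.5 * torch.log(pred_var)).mean()
```

Under a constrained input budget, candidate views whose renderings carry the highest predicted variance are the most informative to capture next.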
arXiv Detail & Related papers (2022-09-18T12:09:15Z) - Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level
Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentations into regularizing the NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
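As a rough illustration of a worst-case perturbation at the input-coordinate level (one of the three levels; the single-step scheme and epsilon below are assumptions, not the paper's settings):

```python
# Illustrative one-step worst-case (PGD-style) perturbation of input
# coordinates, sketching the kind of min-max augmentation Aug-NeRF applies;
# eps and the single-step scheme are assumptions.
import torch

def worst_case_coord_perturb(model, coords: torch.Tensor,
                             target: torch.Tensor, eps: float = 1e-2):
    delta = torch.zeros_like(coords, requires_grad=True)
    loss = torch.nn.functional.mse_loss(model(coords + delta), target)
    loss.backward()
    # Ascend the loss within an L-infinity ball of radius eps.
    return (coords + eps * delta.grad.sign()).detach()
```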
arXiv Detail & Related papers (2022-07-04T02:27:07Z) - Sampling-free obstacle gradients and reactive planning in Neural
Radiance Fields (NeRF) [43.33810082237658]
We show that by adding the capacity to infer occupancy in a radius to a pre-trained NeRF, we are effectively learning an approximation to a Euclidean Signed Distance Field (ESDF).
Our findings allow for very fast sampling-free obstacle avoidance planning in the implicit representation.
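Once an (approximate) ESDF is available, obstacle gradients for reactive planning fall out of autograd directly. A minimal sketch, where `esdf` is a hypothetical callable mapping (N, 3) points to (N,) distances:

```python
# Sketch: obstacle gradients from a learned distance field via autograd.
# `esdf` is a hypothetical differentiable callable, not the paper's API.
import torch

def obstacle_gradient(esdf, points: torch.Tensor) -> torch.Tensor:
    pts = points.detach().requires_grad_(True)
    dist = esdf(pts).sum()               # summed distances; grad is per-point
    (grad,) = torch.autograd.grad(dist, pts)
    return grad                          # points "uphill", i.e. away from obstacles
```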
arXiv Detail & Related papers (2022-05-03T09:32:02Z) - InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency arising from insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
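The regularizer's core is the entropy of the volume-rendering weight distribution along each ray. A simplified sketch (the paper additionally masks low-opacity rays and regularizes rays from unseen poses):

```python
# Minimal sketch of InfoNeRF-style ray entropy: normalize the volume-rendering
# weights along each ray into a distribution and penalize its entropy, pushing
# density to concentrate on surfaces. Details are simplified.
import torch

def ray_entropy_loss(weights: torch.Tensor, eps: float = 1e-10) -> torch.Tensor:
    """weights: (num_rays, num_samples) volume-rendering weights per ray."""
    p = weights / (weights.sum(dim=-1, keepdim=True) + eps)
    return -(p * torch.log(p + eps)).sum(dim=-1).mean()
```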
arXiv Detail & Related papers (2021-12-31T11:56:01Z) - NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and is not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
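A hypothetical sketch of such a differentiable proposer, mapping coarse-pass weights to fine sample depths; the architecture below is invented for illustration, not the paper's:

```python
# Hypothetical learned proposer in the spirit of "NeRF in detail": a small MLP
# maps the coarse pass's weights to fine sample depths, replacing heuristic
# inverse-transform resampling and staying differentiable end to end.
import torch
import torch.nn as nn

class SampleProposer(nn.Module):
    def __init__(self, n_coarse: int = 64, n_fine: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_coarse, 256), nn.ReLU(),
            nn.Linear(256, n_fine), nn.Sigmoid(),  # normalized depths in [0, 1]
        )

    def forward(self, coarse_weights, near, far):
        """coarse_weights: (num_rays, n_coarse) -> fine depths (num_rays, n_fine)."""
        t = self.net(coarse_weights)
        # Sort so depths are monotone along each ray before rendering.
        return near + (far - near) * torch.sort(t, dim=-1).values
```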
arXiv Detail & Related papers (2021-06-09T17:59:10Z)