OPONeRF: One-Point-One NeRF for Robust Neural Rendering
- URL: http://arxiv.org/abs/2409.20043v2
- Date: Thu, 10 Oct 2024 07:45:59 GMT
- Title: OPONeRF: One-Point-One NeRF for Robust Neural Rendering
- Authors: Yu Zheng, Yueqi Duan, Kangfu Zheng, Hongru Yan, Jiwen Lu, Jie Zhou
- Abstract summary: We propose a One-Point-One NeRF (OPONeRF) framework for robust scene rendering.
Small but unpredictable perturbations such as object movements, light changes and data contaminations broadly exist in real-life 3D scenes.
Experimental results show that our OPONeRF outperforms state-of-the-art NeRFs on various evaluation metrics.
- Score: 70.56874833759241
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a One-Point-One NeRF (OPONeRF) framework for robust scene rendering. Existing NeRFs are designed under the key assumption that the target scene remains unchanged between training and test time. However, small but unpredictable perturbations such as object movements, light changes and data contaminations broadly exist in real-life 3D scenes, leading to significantly defective or failed rendering even for recent state-of-the-art generalizable methods. To address this, we propose a divide-and-conquer framework in OPONeRF that adaptively responds to local scene variations by personalizing appropriate point-wise parameters, instead of fitting a single set of NeRF parameters that is insensitive to unseen test-time changes. Moreover, to explicitly capture the local uncertainty, we decompose the point representation into deterministic mapping and probabilistic inference. In this way, OPONeRF learns the sharable invariance and models the unexpected scene variations between training and testing scenes without supervision. To validate the effectiveness of the proposed method, we construct benchmarks from both realistic and synthetic data with diverse test-time perturbations, including foreground motions, illumination variations and multi-modal noise, which are more challenging than conventional generalization and temporal-reconstruction benchmarks. Experimental results show that OPONeRF outperforms state-of-the-art NeRFs on various evaluation metrics in benchmark experiments and cross-scene evaluations. We further demonstrate the efficacy of the proposed method on other existing generalization benchmarks and by incorporating the One-Point-One NeRF idea into other advanced baseline methods.
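The One-Point-One mechanism can be pictured with a short sketch. The PyTorch-style module below is a minimal illustration, not the authors' implementation: the names (`OnePointOneNeRFSketch`, `param_gen`) and the tiny encoder are assumptions, and only the shape of the computation is kept. Each point's representation is split into a deterministic mapping plus a sampled probabilistic residual, and a personalized set of head parameters is generated per point by a hypernetwork.

```python
import torch
import torch.nn as nn

class OnePointOneNeRFSketch(nn.Module):
    """Minimal sketch of the One-Point-One idea (hypothetical, not the paper's
    code): each 3D point gets its own personalized head parameters, and its
    representation splits into a deterministic and a probabilistic part."""

    def __init__(self, feat_dim=64, out_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                     nn.Linear(feat_dim, feat_dim))
        # Probabilistic branch: a Gaussian over a residual feature.
        self.mu = nn.Linear(feat_dim, feat_dim)
        self.logvar = nn.Linear(feat_dim, feat_dim)
        # Hypernetwork personalizing per-point head weights and bias.
        self.param_gen = nn.Linear(feat_dim, out_dim * feat_dim + out_dim)
        self.feat_dim, self.out_dim = feat_dim, out_dim

    def forward(self, xyz):                       # xyz: (N, 3)
        h = self.encoder(xyz)                     # deterministic mapping
        eps = torch.randn_like(h)
        z = self.mu(h) + eps * (0.5 * self.logvar(h)).exp()  # probabilistic inference
        feat = h + z                              # fused point representation
        p = self.param_gen(h)                     # point-wise personalized parameters
        W = p[:, : self.out_dim * self.feat_dim].view(-1, self.out_dim, self.feat_dim)
        b = p[:, self.out_dim * self.feat_dim:]
        return torch.einsum('nof,nf->no', W, feat) + b  # per-point (rgb, sigma)

pts = torch.rand(1024, 3)
print(OnePointOneNeRFSketch()(pts).shape)  # torch.Size([1024, 4])
```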
Related papers
- Few-shot NeRF by Adaptive Rendering Loss Regularization [78.50710219013301]
Novel view synthesis with sparse inputs poses great challenges to Neural Radiance Field (NeRF)
Recent works demonstrate that the frequency regularization of Positional Encoding (PE) can achieve promising results for few-shot NeRF.
We propose Adaptive Rendering loss regularization for few-shot NeRF, dubbed AR-NeRF.
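As a rough illustration of frequency regularization on positional encoding: high-frequency bands are masked early in training and released gradually. The linear schedule below is an assumption for the sketch, not necessarily AR-NeRF's exact rule.

```python
import torch

def positional_encoding(x, num_freqs=10):
    """Standard NeRF positional encoding: (N, D) -> (N, D * 2 * num_freqs)."""
    freqs = 2.0 ** torch.arange(num_freqs, dtype=x.dtype)
    angles = x[..., None] * freqs                            # (N, D, F)
    return torch.cat([angles.sin(), angles.cos()], -1).flatten(-2)

def frequency_mask(step, total_steps, num_dims=3, num_freqs=10):
    """Release higher frequency bands linearly over training
    (the schedule here is an assumption, not AR-NeRF's rule)."""
    visible = num_freqs * min(step / total_steps, 1.0)
    w = (visible - torch.arange(num_freqs, dtype=torch.float32)).clamp(0, 1)
    return w.repeat(2).repeat(num_dims)          # matches the flatten order above

x = torch.rand(4, 3)
enc = positional_encoding(x) * frequency_mask(step=100, total_steps=1000)
print(enc.shape)  # torch.Size([4, 60]); only the lowest band is active at 10%
```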
arXiv Detail & Related papers (2024-10-23T13:05:26Z)
- Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling [70.34875558830241]
We present a way for learning a spatio-temporal (4D) semantic embedding, based on which the concept of gears allows for stratified modeling of dynamic regions of the scene.
At the same time, almost for free, our approach enables free-viewpoint tracking of objects of interest - a functionality not yet achieved by existing NeRF-based methods.
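A toy sketch of the gear idea, with all specifics assumed (Gear-NeRF derives its assignments from semantic segmentation, not the quantile rule used here): regions are bucketed into gears by motion magnitude, and higher gears receive a denser spatio-temporal sampling budget.

```python
import torch

def assign_gears(motion_magnitude, num_gears=3):
    """Bucket scene regions into 'gears' by how much they move
    (a hypothetical stand-in for Gear-NeRF's semantics-driven assignment)."""
    qs = torch.quantile(motion_magnitude,
                        torch.linspace(0, 1, num_gears + 1)[1:-1])
    return torch.bucketize(motion_magnitude, qs)   # (R,) integer gear ids

def samples_per_region(gears, base=32):
    """Stratified budget: higher gears (more motion) get denser sampling."""
    return base * (2 ** gears)

motion = torch.rand(8)                 # per-region motion score (assumed given)
gears = assign_gears(motion)
print(gears, samples_per_region(gears))
```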
arXiv Detail & Related papers (2024-06-06T03:37:39Z)
- ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Field [52.09661042881063]
We propose an approach that models the provenance for each point -- i.e., the locations where it is likely visible -- of NeRFs as a stochastic field.
We show that modeling per-point provenance during the NeRF optimization enriches the model with information on triangulation, leading to improvements in novel view synthesis and uncertainty estimation.
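A minimal sketch of a stochastic provenance field, under the assumption that the distribution is represented implicitly by pushing noise through a point-conditioned network; the `ProvenanceField` module and its layout are hypothetical.

```python
import torch
import torch.nn as nn

class ProvenanceField(nn.Module):
    """Sketch of a stochastic provenance field (illustrative, not ProvNeRF's
    code): for a 3D point x, draw samples of camera locations from which x is
    likely visible by transforming noise with a point-conditioned network."""

    def __init__(self, hidden=64, noise_dim=8):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(3 + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),              # one sampled visible-from location
        )

    def sample(self, x, num_samples=16):       # x: (N, 3)
        z = torch.randn(x.shape[0], num_samples, self.noise_dim)
        x = x[:, None, :].expand(-1, num_samples, -1)
        return self.net(torch.cat([x, z], dim=-1))  # (N, S, 3)

locs = ProvenanceField().sample(torch.rand(5, 3))
print(locs.shape)  # torch.Size([5, 16, 3])
```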
arXiv Detail & Related papers (2024-01-16T06:19:18Z)
- RANRAC: Robust Neural Scene Representations via Random Ray Consensus [12.161889666145127]
RANdom RAy Consensus (RANRAC) is an efficient approach to eliminate the effect of inconsistent data.
We formulate a fuzzy adaption of the RANSAC paradigm, enabling its application to large scale models.
Results indicate significant improvements compared to state-of-the-art robust methods for novel-view synthesis.
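A simplified sketch of the fuzzy-consensus loop: random ray subsets produce model hypotheses, which are scored by a soft consensus weight instead of a hard inlier count. The `fit_fn`/`render_fn` callables are hypothetical placeholders for the light-weight scene model.

```python
import numpy as np

def ranrac_sketch(rays, colors, fit_fn, render_fn,
                  iters=20, subset=0.3, sigma=0.1, rng=None):
    """Fuzzy RANSAC over rays (a simplified sketch of the RANRAC idea)."""
    rng = rng or np.random.default_rng(0)
    best_model, best_score = None, -np.inf
    n = len(rays)
    for _ in range(iters):
        idx = rng.choice(n, size=int(subset * n), replace=False)
        model = fit_fn(rays[idx], colors[idx])          # hypothesis from subset
        resid = np.linalg.norm(render_fn(model, rays) - colors, axis=-1)
        score = np.exp(-(resid / sigma) ** 2).sum()     # fuzzy consensus weight
        if score > best_score:
            best_model, best_score = model, score
    return best_model

# toy usage: the 'model' is just the mean color of the sampled rays
rays, colors = np.random.rand(100, 6), np.random.rand(100, 3)
model = ranrac_sketch(rays, colors,
                      fit_fn=lambda r, c: c.mean(0),
                      render_fn=lambda m, r: np.broadcast_to(m, (len(r), 3)))
```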
arXiv Detail & Related papers (2023-12-15T13:33:09Z)
- Towards a Robust Framework for NeRF Evaluation [11.348562090906576]
We propose a new test framework which isolates the neural rendering network from the Neural Radiance Field (NeRF) pipeline.
We then perform a parametric evaluation by training and evaluating the NeRF on an explicit radiance field representation.
Our approach offers the potential to create a comparative objective evaluation framework for NeRF methods.
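One way to picture such a parametric evaluation, with the grid and protocol assumed purely for illustration: the ground truth is an explicit voxel radiance field, so the neural network can be scored directly on field values rather than on full renders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative setup (not the benchmark's actual grid or protocol): the
# explicit (rgb, sigma) voxel grid serves as ground truth for the network.
grid = torch.rand(1, 4, 16, 16, 16)
net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 4))

def query_grid(pts):                              # pts in [-1, 1]^3, shape (N, 3)
    g = pts.view(1, -1, 1, 1, 3)                  # grid_sample wants (N, D, H, W, 3)
    out = F.grid_sample(grid, g, align_corners=True)  # (1, 4, N, 1, 1)
    return out.view(4, -1).t()                    # (N, 4)

pts = torch.rand(2048, 3) * 2 - 1
field_mse = F.mse_loss(net(pts), query_grid(pts))  # score against the known field
print(field_mse.item())
```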
arXiv Detail & Related papers (2023-05-29T13:30:26Z)
- Mask-Based Modeling for Neural Radiance Fields [20.728248301818912]
In this work, we unveil that 3D implicit representation learning can be significantly improved by mask-based modeling.
We propose MRVM-NeRF, a self-supervised pretraining target that predicts complete scene representations from partially masked features along each ray.
With this pretraining target, MRVM-NeRF enables better use of correlations across different points and views as the geometry priors.
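A sketch of mask-based pretraining along a ray, assuming raw features as the reconstruction targets (the paper distills richer latent targets); the module name and layer choices are illustrative only.

```python
import torch
import torch.nn as nn

class MaskedRayPretrainSketch(nn.Module):
    """Sketch of mask-based pretraining along a ray (illustrative, not the
    MRVM-NeRF implementation): randomly mask per-point features sampled along
    each ray and train a predictor to reconstruct them from the visible ones."""

    def __init__(self, feat_dim=32):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(feat_dim))
        self.predictor = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4,
                                       batch_first=True), num_layers=2)

    def forward(self, ray_feats, mask_ratio=0.5):  # ray_feats: (B, P, C)
        B, P, _ = ray_feats.shape
        mask = torch.rand(B, P, device=ray_feats.device) < mask_ratio
        corrupted = torch.where(mask[..., None], self.mask_token, ray_feats)
        pred = self.predictor(corrupted)
        # reconstruct only the masked points; raw features stand in for the
        # paper's latent targets
        return ((pred - ray_feats) ** 2)[mask].mean()

loss = MaskedRayPretrainSketch()(torch.randn(8, 64, 32))
print(loss.item())
```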
arXiv Detail & Related papers (2023-04-11T04:12:31Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
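The Nadaraya-Watson estimator at the core of the method is compact enough to write out; this is a simplified NumPy version of the estimator itself, not NUQ's full procedure, and the bandwidth and kernel are assumptions.

```python
import numpy as np

def nadaraya_watson_probs(x, X_train, y_train, num_classes, h=0.5):
    """Nadaraya-Watson estimate of p(y | x): a kernel-weighted average of
    training labels (the estimator NUQ builds on, simplified)."""
    d2 = ((X_train - x) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * h ** 2))                 # Gaussian kernel weights
    onehot = np.eye(num_classes)[y_train]
    probs = (w[:, None] * onehot).sum(0) / (w.sum() + 1e-12)
    return probs, w.sum()

X = np.random.randn(500, 2)
y = (X[:, 0] > 0).astype(int)
probs, density = nadaraya_watson_probs(np.array([0.1, -0.3]), X, y, num_classes=2)
# low kernel mass (density) signals the point is far from the data, while
# flat probs signal label ambiguity -- the split NUQ formalizes
print(probs, density)
```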
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
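A structural sketch of the predict-and-refine framework, with untrained stand-in convolutions and a textbook DDPM-style sampler; the paper's networks, conditioning and noise schedule differ, so this only shows the control flow.

```python
import torch
import torch.nn as nn

# Stand-in networks (hypothetical): a deterministic deblurring predictor and
# a noise predictor conditioned on the blurry input.
predictor = nn.Conv2d(3, 3, 3, padding=1)
eps_model = nn.Conv2d(6, 3, 3, padding=1)

def refine(blurry, steps=50):
    init = predictor(blurry)                     # deterministic initial estimate
    x = torch.randn_like(init)                   # residual, sampled by diffusion
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas_bar = torch.cumprod(1 - betas, 0)
    for t in reversed(range(steps)):             # ancestral DDPM-style steps
        eps = eps_model(torch.cat([x, blurry], 1))
        x = (x - betas[t] / (1 - alphas_bar[t]).sqrt() * eps) / (1 - betas[t]).sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return init + x                              # stochastic refinement

sharp = refine(torch.rand(1, 3, 32, 32))
print(sharp.shape)  # torch.Size([1, 3, 32, 32])
```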
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- Stochastic Neural Radiance Fields: Quantifying Uncertainty in Implicit 3D Representations [19.6329380710514]
Uncertainty quantification is a long-standing problem in Machine Learning.
We propose Stochastic Neural Radiance Fields (S-NeRF), a generalization of standard NeRF that learns a probability distribution over all the possible radiance fields modeling the scene.
S-NeRF is able to provide more reliable predictions and confidence values than generic approaches previously proposed for uncertainty estimation in other domains.
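A minimal sketch of a distribution over fields, assuming a simple Gaussian posterior over a global latent code (S-NeRF's variational treatment is richer): repeated sampling yields a predictive mean and a per-point variance usable as a confidence value.

```python
import torch
import torch.nn as nn

class StochasticFieldSketch(nn.Module):
    """Illustrative distribution over radiance fields: a latent code with a
    learned Gaussian posterior conditions the field, so each draw is one
    plausible field and the spread across draws quantifies uncertainty."""

    def __init__(self, latent=16, hidden=64):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(latent))
        self.logvar = nn.Parameter(torch.zeros(latent))
        self.field = nn.Sequential(nn.Linear(3 + latent, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 4))

    def forward(self, xyz, num_samples=8):       # xyz: (N, 3)
        std = (0.5 * self.logvar).exp()
        outs = []
        for _ in range(num_samples):             # one sampled field per draw
            z = (self.mu + std * torch.randn_like(std)).expand(xyz.shape[0], -1)
            outs.append(self.field(torch.cat([xyz, z], -1)))
        outs = torch.stack(outs)                 # (S, N, 4)
        return outs.mean(0), outs.var(0)         # prediction and uncertainty

mean, var = StochasticFieldSketch()(torch.rand(100, 3))
print(mean.shape, var.shape)
```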
arXiv Detail & Related papers (2021-09-05T16:56:43Z)