Towards a Robust Framework for NeRF Evaluation
- URL: http://arxiv.org/abs/2305.18079v3
- Date: Wed, 31 May 2023 18:52:22 GMT
- Title: Towards a Robust Framework for NeRF Evaluation
- Authors: Adrian Azzarelli, Nantheera Anantrasirichai, David R Bull
- Abstract summary: We propose a new test framework which isolates the neural rendering network from the Neural Radiance Field (NeRF) pipeline.
We then perform a parametric evaluation by training and evaluating the NeRF on an explicit radiance field representation.
Our approach offers the potential to create a comparative objective evaluation framework for NeRF methods.
- Score: 11.348562090906576
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Neural Radiance Field (NeRF) research has attracted significant attention
recently, with 3D modelling, virtual/augmented reality, and visual effects
driving its application. While current NeRF implementations can produce high
quality visual results, there is a conspicuous lack of reliable methods for
evaluating them. Conventional image quality assessment methods and analytical
metrics (e.g. PSNR, SSIM, LPIPS etc.) only provide approximate indicators of
performance since they generalise the ability of the entire NeRF pipeline.
Hence, in this paper, we propose a new test framework which isolates the neural
rendering network from the NeRF pipeline and then performs a parametric
evaluation by training and evaluating the NeRF on an explicit radiance field
representation. We also introduce a configurable approach for generating
representations specifically for evaluation purposes. This employs ray-casting
to transform mesh models into explicit NeRF samples, as well as to "shade"
these representations. Combining these two approaches, we demonstrate how
different "tasks" (scenes with different visual effects or learning strategies)
and types of networks (NeRFs and depth-wise implicit neural representations
(INRs)) can be evaluated within this framework. Additionally, we propose a
novel metric to measure task complexity of the framework which accounts for the
visual parameters and the distribution of the spatial data. Our approach offers
the potential to create a comparative objective evaluation framework for NeRF
methods.
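As a rough illustration of the representation-generation step described above, the sketch below casts rays against a triangle mesh and records explicit (position, colour, density) samples, with a simple Lambertian term standing in for the "shading" of the representation. This is a minimal sketch under stated assumptions, not the paper's implementation: the helper names, the Lambertian shading model and the fixed surface density are illustrative choices.
```python
import numpy as np

def ray_triangle_intersect(orig, direc, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore intersection; returns the hit distance t, or None on a miss."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direc, e2)
    det = e1.dot(p)
    if abs(det) < eps:                     # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = orig - v0
    u = s.dot(p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = direc.dot(q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) * inv_det
    return t if t > eps else None

def mesh_to_explicit_samples(triangles, rays_o, rays_d, light_dir, albedo=np.ones(3)):
    """Cast rays against a triangle soup and emit explicit (position, colour, density) samples."""
    light_dir = light_dir / np.linalg.norm(light_dir)
    samples = []
    for o, d in zip(rays_o, rays_d):
        d = d / np.linalg.norm(d)
        best_t, best_tri = None, None
        for tri in triangles:              # brute force; a BVH would be used in practice
            t = ray_triangle_intersect(o, d, *tri)
            if t is not None and (best_t is None or t < best_t):
                best_t, best_tri = t, tri
        if best_t is None:
            continue                       # ray misses the mesh: no sample emitted
        x = o + best_t * d                 # explicit sample location on the surface
        n = np.cross(best_tri[1] - best_tri[0], best_tri[2] - best_tri[0])
        n = n / np.linalg.norm(n)
        if np.dot(n, d) > 0:               # orient the normal towards the camera
            n = -n
        shade = max(float(np.dot(n, -light_dir)), 0.0)   # placeholder Lambertian "shading"
        samples.append((x, albedo * shade, 1.0))         # density fixed to 1.0 at the surface
    return samples
```
In the proposed framework, samples of this kind would be used to train and probe the neural rendering network directly, with the rest of the NeRF pipeline held out of the loop.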
Related papers
- OPONeRF: One-Point-One NeRF for Robust Neural Rendering [70.56874833759241]
We propose a One-Point-One NeRF (OPONeRF) framework for robust scene rendering.
Small but unpredictable perturbations such as object movements, light changes and data contaminations broadly exist in real-life 3D scenes.
Experimental results show that our OPONeRF outperforms state-of-the-art NeRFs on various evaluation metrics.
arXiv Detail & Related papers (2024-09-30T07:49:30Z)
- NeRF-VPT: Learning Novel View Representations with Neural Radiance Fields via View Prompt Tuning [63.39461847093663]
We propose NeRF-VPT, an innovative method for novel view synthesis to address these challenges.
Our proposed NeRF-VPT employs a cascading view prompt tuning paradigm, wherein RGB information gained from preceding rendering outcomes serves as instructive visual prompts for subsequent rendering stages.
NeRF-VPT only requires sampling RGB data from previous stage renderings as priors at each training stage, without relying on extra guidance or complex techniques.
arXiv Detail & Related papers (2024-03-02T22:08:10Z)
- 3D Visibility-aware Generalizable Neural Radiance Fields for Interacting Hands [51.305421495638434]
Neural radiance fields (NeRFs) are promising 3D representations for scenes, objects, and humans.
This paper proposes a generalizable visibility-aware NeRF framework for interacting hands.
Experiments on the Interhand2.6M dataset demonstrate that our proposed VA-NeRF outperforms conventional NeRFs significantly.
arXiv Detail & Related papers (2024-01-02T00:42:06Z)
- PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields [54.8553158441296]
We propose PNeRFLoc, a novel visual localization framework based on a unified point-based representation.
On the one hand, PNeRFLoc supports the initial pose estimation by matching 2D and 3D feature points.
On the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization.
arXiv Detail & Related papers (2023-12-17T08:30:00Z)
- Analyzing the Internals of Neural Radiance Fields [4.681790910494339]
We analyze large, trained ReLU-MLPs used in coarse-to-fine sampling.
We show how these large MLPs can be accelerated by transforming intermediate activations into a weight estimate.
arXiv Detail & Related papers (2023-06-01T14:06:48Z)
- Mask-Based Modeling for Neural Radiance Fields [20.728248301818912]
In this work, we unveil that 3D implicit representation learning can be significantly improved by mask-based modeling.
We propose MRVM-NeRF, a self-supervised pretraining target that predicts complete scene representations from partially masked features along each ray.
With this pretraining target, MRVM-NeRF enables better use of correlations across different points and views as the geometry priors.
arXiv Detail & Related papers (2023-04-11T04:12:31Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it relies on a heuristic and is not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
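To make the "inversion" idea in the iNeRF entry concrete, here is a minimal, hypothetical sketch of gradient-based pose refinement: a 6-DoF update is optimised so that a differentiable NeRF render matches the observed image. The render_fn renderer, the whole-image photometric loss and the optimisation schedule are assumptions for illustration, not the authors' implementation.
```python
import torch

def hat(w):
    """Skew-symmetric matrix of a 3-vector (used in Rodrigues' formula)."""
    zero = w.new_zeros(())
    return torch.stack([
        torch.stack([zero, -w[2],  w[1]]),
        torch.stack([ w[2], zero, -w[0]]),
        torch.stack([-w[1],  w[0], zero]),
    ])

def axis_angle_to_matrix(w):
    """Differentiable axis-angle -> rotation matrix via Rodrigues' formula."""
    theta = torch.sqrt((w * w).sum() + 1e-12)
    K = hat(w / theta)
    eye = torch.eye(3, dtype=w.dtype)
    return eye + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)

def pose_delta(w, t):
    """Assemble a 4x4 rigid transform from an axis-angle rotation and a translation."""
    top = torch.cat([axis_angle_to_matrix(w), t[:, None]], dim=1)   # (3, 4)
    bottom = torch.tensor([[0.0, 0.0, 0.0, 1.0]], dtype=w.dtype)    # (1, 4)
    return torch.cat([top, bottom], dim=0)                          # (4, 4)

def invert_nerf_pose(render_fn, target_image, init_pose, steps=300, lr=1e-2):
    """Refine a (4, 4) camera-to-world pose so that render_fn(pose) matches target_image.

    render_fn is an assumed differentiable renderer: pose -> (H, W, 3) image tensor.
    """
    w = torch.zeros(3, requires_grad=True)      # rotation update (axis-angle)
    t = torch.zeros(3, requires_grad=True)      # translation update
    opt = torch.optim.Adam([w, t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pose = pose_delta(w, t) @ init_pose     # apply the current update to the initial guess
        loss = torch.mean((render_fn(pose) - target_image) ** 2)    # photometric error
        loss.backward()
        opt.step()
    with torch.no_grad():
        return pose_delta(w, t) @ init_pose
```
iNeRF additionally studies ray-sampling strategies (e.g. around interest points) so that only a fraction of pixels needs to be rendered per gradient step.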