ActiveNeRF: Learning where to See with Uncertainty Estimation
- URL: http://arxiv.org/abs/2209.08546v1
- Date: Sun, 18 Sep 2022 12:09:15 GMT
- Title: ActiveNeRF: Learning where to See with Uncertainty Estimation
- Authors: Xuran Pan, Zihang Lai, Shiji Song, and Gao Huang
- Abstract summary: Recently, Neural Radiance Fields (NeRF) have shown promising performance in reconstructing 3D scenes and synthesizing novel views from a sparse set of 2D images.
We present a novel learning framework, ActiveNeRF, aiming to model a 3D scene with a constrained input budget.
- Score: 36.209200774203005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Neural Radiance Fields (NeRF) have shown promising performance in
reconstructing 3D scenes and synthesizing novel views from a sparse set of 2D
images. Albeit effective, the performance of NeRF is highly influenced by the
quality of training samples. With limited posed images from the scene, NeRF
fails to generalize well to novel views and may collapse to trivial solutions
in unobserved regions. This makes NeRF impractical under resource-constrained
scenarios. In this paper, we present a novel learning framework, ActiveNeRF,
aiming to model a 3D scene with a constrained input budget. Specifically, we
first incorporate uncertainty estimation into a NeRF model, which ensures
robustness under few observations and provides an interpretation of how NeRF
understands the scene. On this basis, we propose to supplement the existing
training set with newly captured samples based on an active learning scheme. By
evaluating the reduction of uncertainty given new inputs, we select the samples
that bring the most information gain. In this way, the quality of novel view
synthesis can be improved with minimal additional resources. Extensive
experiments validate the performance of our model on both realistic and
synthetic scenes, especially with scarcer training data. Code will be released
at https://github.com/LeapLabTHU/ActiveNeRF.
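To make the two ingredients concrete, here is a minimal PyTorch sketch of the kind of uncertainty-aware rendering the abstract describes: the network predicts a per-point variance alongside color, both are alpha-composited along the ray, and training minimizes a Gaussian negative log-likelihood. All names, shapes, and activation choices are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UncertainNeRFHead(nn.Module):
    """Predicts per-point density, mean color, and color variance from features."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.sigma = nn.Linear(feat_dim, 1)    # volume density
        self.rgb_mu = nn.Linear(feat_dim, 3)   # mean radiance
        self.rgb_var = nn.Linear(feat_dim, 3)  # radiance variance (pre-activation)

    def forward(self, feat):
        sigma = F.relu(self.sigma(feat)).squeeze(-1)   # (rays, samples)
        mu = torch.sigmoid(self.rgb_mu(feat))          # (rays, samples, 3)
        var = F.softplus(self.rgb_var(feat)) + 1e-6    # keep variance positive
        return sigma, mu, var

def render_with_uncertainty(sigma, mu, var, deltas):
    """Alpha-composite both the mean and the variance along each ray."""
    alpha = 1.0 - torch.exp(-sigma * deltas)           # (rays, samples)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], -1), -1
    )[:, :-1]
    w = alpha * trans                                  # compositing weights
    pixel_mu = (w[..., None] * mu).sum(1)              # (rays, 3)
    pixel_var = (w[..., None] ** 2 * var).sum(1)       # independent-sample assumption
    return pixel_mu, pixel_var

def nll_loss(pixel_mu, pixel_var, target):
    """Gaussian NLL: uncertain pixels contribute less to the photometric error."""
    return (((target - pixel_mu) ** 2) / (2 * pixel_var)
            + 0.5 * torch.log(pixel_var)).mean()
```

Under this reading, the rendered variance doubles as the acquisition signal: candidate views whose pixels carry high predictive variance are the ones expected to reduce uncertainty most when captured.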
Related papers
- SSNeRF: Sparse View Semi-supervised Neural Radiance Fields with Augmentation [21.454340647455236]
SSNeRF is a sparse-view, semi-supervised NeRF method based on a teacher-student framework.
Our key idea is to challenge the NeRF module with progressively severe sparse-view degradation.
In this approach, the teacher NeRF generates novel views along with confidence scores, while the student NeRF, perturbed by the augmented input, learns from the high-confidence pseudo labels.
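As a rough illustration of the confidence-filtered distillation this summary describes (the threshold, tensor names, and loss form are invented for the sketch, not taken from the paper):

```python
import torch

def pseudo_label_loss(student_rgb, teacher_rgb, teacher_conf, conf_thresh=0.8):
    """Supervise the student only where the teacher is confident.

    student_rgb, teacher_rgb: (N, 3) rendered colors for the same rays;
    teacher_conf: (N,) per-ray confidence in [0, 1].
    """
    mask = (teacher_conf > conf_thresh).float()              # keep confident rays
    per_ray = ((student_rgb - teacher_rgb.detach()) ** 2).mean(-1)
    return (mask * per_ray).sum() / mask.sum().clamp(min=1.0)
```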
arXiv Detail & Related papers (2024-08-17T09:00:37Z)
- IOVS4NeRF: Incremental Optimal View Selection for Large-Scale NeRFs [3.9248546555042365]
This paper introduces an innovative incremental optimal view selection framework, IOVS4NeRF, designed to model a 3D scene within a restricted input budget.
By selecting views that offer the highest information gain, the quality of novel view synthesis can be enhanced with minimal additional resources.
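A hypothetical sketch of greedy uncertainty-driven view selection, in the spirit of both ActiveNeRF and IOVS4NeRF; the scoring rule (mean predictive variance) and the model interface are assumptions:

```python
import torch

def select_next_views(model, candidate_rays, k=4):
    """Pick the k candidate views whose renderings are most uncertain.

    candidate_rays: list of (rays_o, rays_d) tensors, one per candidate view.
    model(rays_o, rays_d) is assumed to return (pixel_mu, pixel_var).
    """
    scores = []
    with torch.no_grad():
        for rays_o, rays_d in candidate_rays:
            _, pixel_var = model(rays_o, rays_d)
            scores.append(pixel_var.mean().item())  # mean predictive variance
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order[:k]                                # indices of views to capture
```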
arXiv Detail & Related papers (2024-07-26T09:11:25Z)
- RustNeRF: Robust Neural Radiance Field with Low-Quality Images [29.289408956815727]
We present RustNeRF for real-world, high-quality Neural Radiance Fields (NeRF).
To improve NeRF's robustness under real-world inputs, we train a 3D-aware preprocessing network that incorporates real-world degradation modeling.
We propose a novel implicit multi-view guidance to address information loss during image degradation and restoration.
arXiv Detail & Related papers (2024-01-06T16:54:02Z) - Re-Nerfing: Improving Novel View Synthesis through Novel View Synthesis [80.3686833921072]
Recent neural rendering and reconstruction techniques, such as NeRFs or Gaussian Splatting, have shown remarkable novel view synthesis capabilities.
With fewer images available, these methods start to fail since they can no longer correctly triangulate the underlying 3D geometry.
We propose Re-Nerfing, a simple and general add-on approach that leverages novel view synthesis itself to tackle this problem.
arXiv Detail & Related papers (2023-12-04T18:56:08Z) - Self-Evolving Neural Radiance Fields [31.124406548504794]
We propose a novel framework, dubbed Self-Evolving Neural Radiance Fields (SE-NeRF), that applies self-training to neural radiance fields (NeRF).
We formulate few-shot NeRF into a teacher-student framework to guide the network to learn a more robust representation of the scene.
We show that applying our self-training framework to existing models improves the quality of the rendered images and achieves state-of-the-art performance in multiple settings.
arXiv Detail & Related papers (2023-12-02T02:28:07Z) - SimpleNeRF: Regularizing Sparse Input Neural Radiance Fields with
Simpler Solutions [6.9980855647933655]
Supervising the depth estimated by the NeRF helps train it effectively with fewer views.
We design augmented models that encourage simpler solutions by exploring the role of positional encoding and view-dependent radiance.
We achieve state-of-the-art view-synthesis performance on two popular datasets by employing the above regularizations.
arXiv Detail & Related papers (2023-09-07T18:02:57Z) - Clean-NeRF: Reformulating NeRF to account for View-Dependent
Observations [67.54358911994967]
This paper proposes Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex scenes.
Clean-NeRF can be implemented as a plug-in that can immediately benefit existing NeRF-based methods without additional input.
arXiv Detail & Related papers (2023-03-26T12:24:31Z) - StegaNeRF: Embedding Invisible Information within Neural Radiance Fields [61.653702733061785]
We present StegaNeRF, a method for steganographic information embedding in NeRF renderings.
We design an optimization framework that allows accurate extraction of hidden information from images rendered by NeRF.
StegaNeRF signifies an initial exploration into the novel problem of instilling customizable, imperceptible, and recoverable information into NeRF renderings.
arXiv Detail & Related papers (2022-12-03T12:14:19Z) - AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware
Training [100.33713282611448]
We conduct the first pilot study on training NeRF with high-resolution data.
We propose the corresponding solutions, including marrying the multilayer perceptron with convolutional layers.
Our approach is nearly free, introducing no obvious training/testing costs.
arXiv Detail & Related papers (2022-11-17T17:22:28Z) - CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural
Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
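A rough sketch of the decoupled occupancy/color design this summary describes: one MLP for occupancy (supervised by LiDAR) and one for color (supervised by camera images), fused at render time. Module names, input dimension, and widths are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DecoupledFields(nn.Module):
    def __init__(self, in_dim=63, hidden=256):
        super().__init__()
        self.occupancy = nn.Sequential(  # trained against LiDAR depth
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.color = nn.Sequential(      # trained against camera RGB
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, x):
        # x: positionally encoded 3D points, shape (..., in_dim).
        sigma = torch.relu(self.occupancy(x)).squeeze(-1)  # geometry branch
        rgb = torch.sigmoid(self.color(x))                 # appearance branch
        return sigma, rgb
```

Keeping the two branches separate lets each sensor supervise only the quantity it actually measures, which is the stated motivation for the split.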
arXiv Detail & Related papers (2022-09-02T17:44:50Z) - NeuSample: Neural Sample Field for Efficient View Synthesis [129.10351459066501]
We propose a lightweight module named a neural sample field.
The proposed sample field maps rays into sample distributions, which can be transformed into point coordinates and fed into radiance fields for volume rendering.
We show that NeuSample achieves better rendering quality than NeRF while enjoying a faster inference speed.
arXiv Detail & Related papers (2021-11-30T16:43:49Z)
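As a speculative sketch of what such a sample field might look like, a small network maps each ray directly to sorted sample depths, sidestepping hierarchical importance sampling; the architecture and near/far bounds here are assumptions:

```python
import torch
import torch.nn as nn

class SampleField(nn.Module):
    def __init__(self, n_samples=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),   # input: ray origin + direction
            nn.Linear(hidden, n_samples))

    def forward(self, rays_o, rays_d, near=2.0, far=6.0):
        # Predict normalized offsets, sort them, and map into [near, far].
        t = torch.sigmoid(self.net(torch.cat([rays_o, rays_d], -1)))
        t, _ = torch.sort(t, dim=-1)
        depths = near + (far - near) * t                # (rays, n_samples)
        # Sample points to feed the radiance field for volume rendering.
        pts = rays_o[..., None, :] + depths[..., None] * rays_d[..., None, :]
        return pts, depths
```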
This list is automatically generated from the titles and abstracts of the papers on this site.