Self-Evolving Neural Radiance Fields
- URL: http://arxiv.org/abs/2312.01003v2
- Date: Tue, 5 Dec 2023 12:26:16 GMT
- Title: Self-Evolving Neural Radiance Fields
- Authors: Jaewoo Jung, Jisang Han, Jiwon Kang, Seongchan Kim, Min-Seop Kwak,
Seungryong Kim
- Abstract summary: We propose a novel framework, dubbed Self-Evolving Neural Radiance Fields (SE-NeRF), that applies self-training to neural radiance fields (NeRF).
We formulate few-shot NeRF as a teacher-student framework that guides the network to learn a more robust representation of the scene.
We show that applying our self-training framework to existing models improves the quality of the rendered images and achieves state-of-the-art performance in multiple settings.
- Score: 31.124406548504794
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, neural radiance field (NeRF) has shown remarkable performance in
novel view synthesis and 3D reconstruction. However, it still requires abundant
high-quality images, limiting its applicability in real-world scenarios. To
overcome this limitation, recent works have focused on training NeRF only with
sparse viewpoints by giving additional regularizations, often called few-shot
NeRF. We observe that due to the under-constrained nature of the task, solely
using additional regularization is not enough to prevent the model from
overfitting to sparse viewpoints. In this paper, we propose a novel framework,
dubbed Self-Evolving Neural Radiance Fields (SE-NeRF), that applies a
self-training framework to NeRF to address these problems. We formulate
few-shot NeRF as a teacher-student framework that guides the network to learn a
more robust representation of the scene by training the student with additional
pseudo labels generated by the teacher. By distilling ray-level pseudo labels
using distinct distillation schemes for reliable and unreliable rays obtained
with our novel reliability estimation method, we enable NeRF to learn a more
accurate and robust geometry of the 3D scene. We show that applying our
self-training framework to existing models improves the quality of the
rendered images and achieves state-of-the-art performance in multiple
settings.
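
To make the ray-level distillation concrete, below is a minimal PyTorch-style sketch of one training step as the abstract describes it. The model interfaces, the reliability threshold `tau`, and the loss weights are illustrative assumptions, not the authors' released code.

```python
import torch

def distill_step(teacher, student, rays, gt_rays, gt_colors,
                 tau=0.5, w_reliable=1.0, w_unreliable=0.1):
    """One self-training step: supervise the student with the sparse
    ground-truth rays plus pseudo labels rendered by a frozen teacher.
    Assumes each model maps a batch of rays to (rgb, reliability)."""
    with torch.no_grad():
        pseudo_rgb, reliability = teacher(rays)  # per-ray color + confidence

    pred_rgb, _ = student(rays)
    per_ray = ((pred_rgb - pseudo_rgb) ** 2).mean(dim=-1)

    # Distinct schemes for reliable vs. unreliable rays: trust reliable
    # pseudo labels fully, down-weight the unreliable ones.
    reliable = reliability > tau
    loss_pseudo = (w_reliable * per_ray[reliable].sum()
                   + w_unreliable * per_ray[~reliable].sum()) / rays.shape[0]

    # Standard photometric loss on the real (sparse) training views.
    student_rgb, _ = student(gt_rays)
    loss_gt = ((student_rgb - gt_colors) ** 2).mean()
    return loss_gt + loss_pseudo
```

In the self-evolving loop, the converged student would then take over as the teacher for the next round.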
Related papers
- Few-shot NeRF by Adaptive Rendering Loss Regularization [78.50710219013301]
Novel view synthesis with sparse inputs poses great challenges to Neural Radiance Fields (NeRF).
Recent works demonstrate that frequency regularization of positional encoding can achieve promising results for few-shot NeRF (see the sketch after this entry).
We propose Adaptive Rendering loss regularization for few-shot NeRF, dubbed AR-NeRF.
arXiv Detail & Related papers (2024-10-23T13:05:26Z)
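
The frequency regularization mentioned above is commonly implemented by masking the higher-frequency positional-encoding bands early in training and revealing them gradually; here is a sketch under that assumption, with a linear schedule chosen purely for illustration.

```python
import torch

def annealed_positional_encoding(x, num_bands, step, total_steps):
    """Frequency-regularized positional encoding: high-frequency bands
    are masked early in training and gradually revealed. The linear
    annealing schedule here is an illustrative assumption."""
    alpha = num_bands * step / total_steps  # number of "open" bands
    feats = []
    for k in range(num_bands):
        # Soft mask: 1 for fully open bands, 0 for still-closed ones.
        weight = min(max(alpha - k, 0.0), 1.0)
        feats.append(weight * torch.sin((2.0 ** k) * x))
        feats.append(weight * torch.cos((2.0 ** k) * x))
    return torch.cat(feats, dim=-1)
```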
- NeRF-VPT: Learning Novel View Representations with Neural Radiance Fields via View Prompt Tuning [63.39461847093663]
We propose NeRF-VPT, an innovative method for novel view synthesis.
Our proposed NeRF-VPT employs a cascading view prompt tuning paradigm, wherein RGB information gained from preceding rendering outcomes serves as instructive visual prompts for subsequent rendering stages.
NeRF-VPT only requires sampling RGB data from previous stage renderings as priors at each training stage, without relying on extra guidance or complex techniques.
arXiv Detail & Related papers (2024-03-02T22:08:10Z)
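
The cascading scheme summarized above can be outlined as a simple loop in which each stage's rendering becomes the prompt for the next; the stage interface below is a hypothetical placeholder, not the paper's actual API.

```python
def cascade_render(stages, pose, initial_prompt=None):
    """Cascaded view prompt tuning, outlined: each stage renders the
    target pose conditioned on the RGB output of the previous stage.
    `stages` is a list of callables (pose, prompt) -> rgb; hypothetical."""
    prompt = initial_prompt
    rgb = None
    for stage in stages:
        rgb = stage(pose, prompt)  # render conditioned on previous output
        prompt = rgb               # previous rendering becomes next prompt
    return rgb
```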
- RustNeRF: Robust Neural Radiance Field with Low-Quality Images [29.289408956815727]
We present RustNeRF for real-world high-quality Neural Radiance Fields (NeRF).
To improve NeRF's robustness under real-world inputs, we train a 3D-aware preprocessing network that incorporates real-world degradation modeling.
We propose a novel implicit multi-view guidance to address information loss during image degradation and restoration.
arXiv Detail & Related papers (2024-01-06T16:54:02Z)
- From NeRFLiX to NeRFLiX++: A General NeRF-Agnostic Restorer Paradigm [57.73868344064043]
We propose NeRFLiX, a general NeRF-agnostic restorer paradigm that learns a degradation-driven inter-viewpoint mixer.
We also present NeRFLiX++ with a stronger two-stage NeRF degradation simulator and a faster inter-viewpoint mixer.
NeRFLiX++ is capable of restoring photo-realistic ultra-high-resolution outputs from noisy low-resolution NeRF-rendered views.
arXiv Detail & Related papers (2023-06-10T09:19:19Z)
- Clean-NeRF: Reformulating NeRF to account for View-Dependent Observations [67.54358911994967]
This paper proposes Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex scenes.
Clean-NeRF can be implemented as a plug-in that can immediately benefit existing NeRF-based methods without additional input.
arXiv Detail & Related papers (2023-03-26T12:24:31Z)
- NeRFLiX: High-Quality Neural View Synthesis by Learning a Degradation-Driven Inter-viewpoint MiXer [44.220611552133036]
We propose NeRFLiX, a general NeRF-agnostic restorer paradigm by learning a degradation-driven inter-viewpoint mixer.
We also propose an inter-viewpoint aggregation framework that is able to fuse highly related high-quality training images.
arXiv Detail & Related papers (2023-03-13T08:36:30Z)
- Self-NeRF: A Self-Training Pipeline for Few-Shot Neural Radiance Fields [17.725937326348994]
We propose Self-NeRF, a self-evolved NeRF that iteratively refines the radiance fields from a very small number of input views (the loop is outlined after this entry).
In each iteration, we label unseen views with the predicted colors or warped pixels generated by the model from the preceding iteration.
These expanded pseudo-views are afflicted by imprecision in color and warping artifacts, which degrades the performance of NeRF.
arXiv Detail & Related papers (2023-03-10T08:22:36Z)
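
The iterative self-training loop this summary describes can be outlined as follows; `train_nerf` and `render` are user-supplied callables and the fixed round count is an assumption.

```python
from typing import Callable, List, Tuple

def self_train(train_nerf: Callable, render: Callable,
               train_views: List[Tuple], unseen_poses: List, rounds: int = 3):
    """Outline of iterative self-training: each round labels unseen poses
    with the previous model's renders (pseudo-views) and retrains on the
    expanded set. Interfaces here are illustrative placeholders."""
    model = train_nerf(train_views)  # initial fit on the few real views
    for _ in range(rounds):
        pseudo_views = [(pose, render(model, pose)) for pose in unseen_poses]
        model = train_nerf(train_views + pseudo_views)
    return model
```

The color imprecision and warping artifacts noted above are the failure mode that SE-NeRF's reliability estimation (described earlier) aims to filter.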
- GeCoNeRF: Few-shot Neural Radiance Fields via Geometric Consistency [31.22435282922934]
We present a novel framework to regularize Neural Radiance Field (NeRF) in a few-shot setting with a geometry-aware consistency regularization.
We show that our model achieves competitive results compared to state-of-the-art few-shot NeRF models.
arXiv Detail & Related papers (2023-01-26T05:14:12Z)
- ActiveNeRF: Learning where to See with Uncertainty Estimation [36.209200774203005]
Recently, Neural Radiance Fields (NeRF) have shown promising performance in reconstructing 3D scenes and synthesizing novel views from a sparse set of 2D images.
We present a novel learning framework, ActiveNeRF, aiming to model a 3D scene with a constrained input budget.
arXiv Detail & Related papers (2022-09-18T12:09:15Z)
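
A constrained input budget naturally suggests greedy uncertainty-driven view selection; the sketch below assumes a NeRF whose forward pass also returns a per-ray predictive variance, which is an assumption about the interface, not ActiveNeRF's actual code.

```python
import torch

def select_next_view(model, candidate_poses, rays_for_pose):
    """Greedy active view selection: render each candidate pose and pick
    the one with the highest mean predicted uncertainty. The
    (rgb, variance) model interface is an illustrative assumption."""
    scores = []
    with torch.no_grad():
        for pose in candidate_poses:
            rays = rays_for_pose(pose)   # user-supplied ray generator
            _, variance = model(rays)    # per-ray predictive variance
            scores.append(variance.mean().item())
    return max(range(len(candidate_poses)), key=scores.__getitem__)
```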
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
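
The decoupling described above can be sketched as two independent MLP heads: one for occupancy (supervisable with LiDAR rays) and one for color (supervised with camera rays). Layer sizes and encoding dimensions below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DecoupledNeRF(nn.Module):
    """Sketch of occupancy/color decoupling: separate MLPs so that LiDAR
    can supervise geometry while camera images supervise appearance.
    Widths and input encodings are illustrative assumptions."""
    def __init__(self, pos_dim=63, dir_dim=27, hidden=128):
        super().__init__()
        self.occupancy = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))                        # occupancy / sigma
        self.color = nn.Sequential(
            nn.Linear(pos_dim + dir_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())          # RGB in [0, 1]

    def forward(self, x_enc, d_enc):
        sigma = self.occupancy(x_enc)                    # trained w/ LiDAR rays
        rgb = self.color(torch.cat([x_enc, d_enc], -1))  # trained w/ camera rays
        return rgb, sigma
```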
- BARF: Bundle-Adjusting Neural Radiance Fields [104.97810696435766]
We propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect camera poses.
BARF can effectively optimize the neural scene representations and resolve large camera pose misalignment at the same time.
This enables view synthesis and localization of video sequences from unknown camera poses, opening up new avenues for visual localization systems.
arXiv Detail & Related papers (2021-04-13T17:59:51Z)
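
BARF's key idea, optimizing camera poses jointly with the radiance field, can be sketched by making per-camera pose residuals learnable parameters. The 6-vector parameterization and the `apply_residual` helper are simplifications (the paper's coarse-to-fine positional-encoding schedule is omitted here).

```python
import torch

def make_trainable_poses(num_cameras):
    """One learnable 6-DoF residual (rotation + translation) per camera,
    initialized at zero, i.e., starting from the imperfect input poses."""
    return torch.nn.Parameter(torch.zeros(num_cameras, 6))

def barf_style_step(nerf, pose_residuals, optimizer, batch, apply_residual):
    """One joint step: gradients of the photometric loss flow into both
    the NeRF weights and the pose residuals. `apply_residual` (which
    perturbs rays by a 6-DoF residual) is a user-supplied placeholder."""
    rays_o, rays_d, target, cam_idx = batch
    rays_o, rays_d = apply_residual(rays_o, rays_d, pose_residuals[cam_idx])
    loss = ((nerf(rays_o, rays_d) - target) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()  # updates NeRF and camera poses simultaneously
    optimizer.step()
    return loss.item()
```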