From NeRFLiX to NeRFLiX++: A General NeRF-Agnostic Restorer Paradigm
- URL: http://arxiv.org/abs/2306.06388v3
- Date: Wed, 13 Dec 2023 07:31:59 GMT
- Title: From NeRFLiX to NeRFLiX++: A General NeRF-Agnostic Restorer Paradigm
- Authors: Kun Zhou, Wenbo Li, Nianjuan Jiang, Xiaoguang Han, Jiangbo Lu
- Abstract summary: We propose NeRFLiX, a general NeRF-agnostic restorer paradigm that learns a degradation-driven inter-viewpoint mixer.
We also present NeRFLiX++ with a stronger two-stage NeRF degradation simulator and a faster inter-viewpoint mixer.
NeRFLiX++ is capable of restoring photo-realistic ultra-high-resolution outputs from noisy low-resolution NeRF-rendered views.
- Score: 57.73868344064043
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural radiance fields (NeRF) have shown great success in novel view
synthesis. However, recovering high-quality details from real-world scenes is
still challenging for the existing NeRF-based approaches, due to the potential
imperfect calibration information and scene representation inaccuracy. Even
with high-quality training frames, the synthetic novel views produced by NeRF
models still suffer from notable rendering artifacts, such as noise and blur.
To address this, we propose NeRFLiX, a general NeRF-agnostic restorer paradigm
that learns a degradation-driven inter-viewpoint mixer. Specifically, we design a
NeRF-style degradation modeling approach and construct large-scale training
data, enabling the possibility of effectively removing NeRF-native rendering
artifacts for deep neural networks. Moreover, beyond the degradation removal,
we propose an inter-viewpoint aggregation framework that fuses highly related
high-quality training images, pushing the performance of cutting-edge NeRF
models to entirely new levels and producing highly photo-realistic synthetic
views. Based on this paradigm, we further present NeRFLiX++ with a stronger
two-stage NeRF degradation simulator and a faster inter-viewpoint mixer,
achieving superior performance with significantly improved computational
efficiency. Notably, NeRFLiX++ is capable of restoring photo-realistic
ultra-high-resolution outputs from noisy low-resolution NeRF-rendered views.
Extensive experiments demonstrate the excellent restoration ability of
NeRFLiX++ on various novel view synthesis benchmarks.
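To make the paradigm concrete, the sketch below illustrates the two ingredients the abstract describes: building (degraded, clean) training pairs by simulating rendering artifacts on clean frames, and selecting a few highly related reference views for inter-viewpoint aggregation. This is a minimal illustration under assumed, simplified degradations (Gaussian blur plus additive noise) and a pose-distance view-selection heuristic; the paper's actual simulator models NeRF-specific artifacts (with a two-stage version in NeRFLiX++), and all function names here are hypothetical.

```python
# Hypothetical sketch: build (degraded, clean) training pairs and pick the most
# related reference views. The degradations here (Gaussian blur + additive noise)
# are simplifications and NOT the paper's actual NeRF degradation simulator.
import numpy as np


def gaussian_blur(img: np.ndarray, sigma: float) -> np.ndarray:
    """Separable Gaussian blur as a stand-in for NeRF-style blur artifacts."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    # Blur along height, then width (applied per 1-D slice, per channel).
    out = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, img)
    out = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 1, out)
    return out


def simulate_nerf_degradation(clean_view: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Produce a pseudo 'NeRF-rendered' view from a clean training frame."""
    degraded = gaussian_blur(clean_view, sigma=rng.uniform(0.5, 2.0))
    degraded = degraded + rng.normal(0.0, rng.uniform(0.01, 0.05), size=degraded.shape)
    return np.clip(degraded, 0.0, 1.0)


def select_reference_views(target_pose: np.ndarray, train_poses: np.ndarray, k: int = 2) -> np.ndarray:
    """Pick the k training views whose camera centers lie closest to the target view.
    A simple proxy for view selection before inter-viewpoint aggregation."""
    dists = np.linalg.norm(train_poses[:, :3, 3] - target_pose[:3, 3], axis=1)
    return np.argsort(dists)[:k]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.uniform(size=(64, 64, 3))             # placeholder clean training frame
    degraded = simulate_nerf_degradation(clean, rng)  # pseudo NeRF-rendered input
    poses = np.tile(np.eye(4), (10, 1, 1))
    poses[:, :3, 3] = rng.normal(size=(10, 3))        # random camera centers
    refs = select_reference_views(poses[0], poses[1:], k=2)
    print(degraded.shape, refs)
```

In the actual method, the degraded view and the selected high-quality reference views would be fed to the learned inter-viewpoint mixer, which aggregates their details to restore the rendered frame.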
Related papers
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- NeRF-VPT: Learning Novel View Representations with Neural Radiance Fields via View Prompt Tuning [63.39461847093663]
We propose NeRF-VPT, an innovative method for novel view synthesis to address these challenges.
Our proposed NeRF-VPT employs a cascading view prompt tuning paradigm, wherein RGB information gained from preceding rendering outcomes serves as instructive visual prompts for subsequent rendering stages.
NeRF-VPT only requires sampling RGB data from previous stage renderings as priors at each training stage, without relying on extra guidance or complex techniques.
arXiv Detail & Related papers (2024-03-02T22:08:10Z)
- RustNeRF: Robust Neural Radiance Field with Low-Quality Images [29.289408956815727]
We present RustNeRF for real-world, high-quality Neural Radiance Fields (NeRF).
To improve NeRF's robustness under real-world inputs, we train a 3D-aware preprocessing network that incorporates real-world degradation modeling.
We propose a novel implicit multi-view guidance to address information loss during image degradation and restoration.
arXiv Detail & Related papers (2024-01-06T16:54:02Z)
- Hyb-NeRF: A Multiresolution Hybrid Encoding for Neural Radiance Fields [12.335934855851486]
We present Hyb-NeRF, a novel neural radiance field with a multi-resolution hybrid encoding.
We show that Hyb-NeRF achieves faster rendering speed with better rendering quality and an even lower memory footprint compared to previous methods.
arXiv Detail & Related papers (2023-11-21T10:01:08Z)
- Enhance-NeRF: Multiple Performance Evaluation for Neural Radiance Fields [2.5432277893532116]
Neural Radiance Fields (NeRF) can generate realistic images from any viewpoint.
NeRF-based models are susceptible to interference issues caused by colored "fog" noise.
Our approach, coined Enhance-NeRF, adopts a joint color scheme to balance the display of low- and high-reflectivity objects.
arXiv Detail & Related papers (2023-06-08T15:49:30Z)
- NeRFLiX: High-Quality Neural View Synthesis by Learning a Degradation-Driven Inter-viewpoint MiXer [44.220611552133036]
We propose NeRFLiX, a general NeRF-agnostic restorer paradigm by learning a degradation-driven inter-viewpoint mixer.
We also propose an inter-viewpoint aggregation framework that is able to fuse highly related high-quality training images.
arXiv Detail & Related papers (2023-03-13T08:36:30Z)
- Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentations into regularizing the NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
arXiv Detail & Related papers (2022-07-04T02:27:07Z)
- NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction [50.54946139497575]
We propose NeRFusion, a method that combines the advantages of NeRF and TSDF-based fusion techniques to achieve efficient large-scale reconstruction and photo-realistic rendering.
We demonstrate that NeRFusion achieves state-of-the-art quality on both large-scale indoor and small-scale object scenes, with substantially faster reconstruction than NeRF and other recent methods.
arXiv Detail & Related papers (2022-03-21T18:56:35Z)