NeRF-VPT: Learning Novel View Representations with Neural Radiance
Fields via View Prompt Tuning
- URL: http://arxiv.org/abs/2403.01325v1
- Date: Sat, 2 Mar 2024 22:08:10 GMT
- Title: NeRF-VPT: Learning Novel View Representations with Neural Radiance
Fields via View Prompt Tuning
- Authors: Linsheng Chen, Guangrun Wang, Liuchun Yuan, Keze Wang, Ken Deng,
Philip H.S. Torr
- Abstract summary: We propose NeRF-VPT, an innovative method that addresses open challenges in novel view synthesis.
Our proposed NeRF-VPT employs a cascading view prompt tuning paradigm, wherein RGB information gained from preceding rendering outcomes serves as instructive visual prompts for subsequent rendering stages.
NeRF-VPT only requires sampling RGB data from previous stage renderings as priors at each training stage, without relying on extra guidance or complex techniques.
- Score: 63.39461847093663
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRF) have garnered remarkable success in novel view
synthesis. Nonetheless, the task of generating high-quality images for novel
views persists as a critical challenge. While the existing efforts have
exhibited commendable progress, capturing intricate details, enhancing
textures, and achieving superior Peak Signal-to-Noise Ratio (PSNR) metrics
warrant further focused attention and advancement. In this work, we propose
NeRF-VPT, an innovative method for novel view synthesis to address these
challenges. Our proposed NeRF-VPT employs a cascading view prompt tuning
paradigm, wherein RGB information gained from preceding rendering outcomes
serves as instructive visual prompts for subsequent rendering stages, with the
aspiration that the prior knowledge embedded in the prompts can facilitate the
gradual enhancement of rendered image quality. NeRF-VPT only requires sampling
RGB data from previous stage renderings as priors at each training stage,
without relying on extra guidance or complex techniques. Thus, our NeRF-VPT is
plug-and-play and can be readily integrated into existing methods. By
conducting comparative analyses of our NeRF-VPT against several NeRF-based
approaches on demanding real-scene benchmarks, such as Realistic Synthetic 360,
Real Forward-Facing, Replica dataset, and a user-captured dataset, we
substantiate that our NeRF-VPT significantly elevates baseline performance and
generates higher-quality novel view images than all the
compared state-of-the-art methods. Furthermore, the cascading learning of
NeRF-VPT introduces adaptability to scenarios with sparse inputs, resulting in
a significant enhancement of accuracy for sparse-view novel view synthesis. The
source code and dataset are available at
\url{https://github.com/Freedomcls/NeRF-VPT}.
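As a concrete illustration of the cascading view prompt tuning paradigm described above, here is a minimal, hypothetical PyTorch sketch: each stage conditions on RGB sampled from the frozen previous stage's rendering, and the first stage receives an empty prompt. Volume rendering is collapsed into a per-ray MLP for brevity, and all names (PromptedNeRF, train_cascade) are placeholders rather than the authors' API; see the linked repository for the actual implementation.

```python
import torch
import torch.nn as nn

class PromptedNeRF(nn.Module):
    """Per-ray radiance MLP conditioned on an RGB view prompt (hypothetical)."""
    def __init__(self, feat_dim, prompt_dim=3, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + prompt_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # predicted RGB for the ray
        )

    def forward(self, ray_feat, prompt_rgb):
        # prompt_rgb: RGB sampled at this ray's pixel from the previous
        # stage's rendering -- the "view prompt".
        return torch.sigmoid(self.mlp(torch.cat([ray_feat, prompt_rgb], dim=-1)))

def train_cascade(ray_feat, gt_rgb, num_stages=2, steps=1000):
    """Train each stage on prompts rendered by the frozen previous stage."""
    prompt = torch.zeros_like(gt_rgb)          # stage 0: empty prompt
    stages = []
    for _ in range(num_stages):
        model = PromptedNeRF(ray_feat.shape[-1])
        opt = torch.optim.Adam(model.parameters(), lr=5e-4)
        for _ in range(steps):
            loss = ((model(ray_feat, prompt) - gt_rgb) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():                  # freeze this stage; its output
            prompt = model(ray_feat, prompt)   # becomes the next stage's prompt
        stages.append(model)
    return stages

# Toy usage: 1024 rays with 63-dim positional features and ground-truth RGB.
stages = train_cascade(torch.randn(1024, 63), torch.rand(1024, 3))
```

Since the prompt enters only as an extra input, any NeRF backbone that renders RGB could take the place of the per-ray MLP here, which is consistent with the abstract's plug-and-play claim.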
Related papers
- NeRF-DetS: Enhancing Multi-View 3D Object Detection with Sampling-adaptive Network of Continuous NeRF-based Representation [60.47114985993196]
NeRF-Det unifies the tasks of novel view synthesis and 3D perception.
We introduce a novel 3D perception network structure, NeRF-DetS.
NeRF-DetS outperforms the competitive NeRF-Det baseline on the ScanNetV2 dataset.
arXiv Detail & Related papers (2024-04-22T06:59:03Z)
- FlipNeRF: Flipped Reflection Rays for Few-shot Novel View Synthesis [30.25904672829623]
We propose FlipNeRF, a novel regularization method for few-shot novel view synthesis by utilizing our proposed flipped reflection rays.
FlipNeRF estimates more reliable outputs while effectively reducing floating artifacts across different scene structures.
arXiv Detail & Related papers (2023-06-30T15:11:00Z)
- From NeRFLiX to NeRFLiX++: A General NeRF-Agnostic Restorer Paradigm [57.73868344064043]
We propose NeRFLiX, a general NeRF-agnostic restorer paradigm that learns a degradation-driven inter-viewpoint mixer.
We also present NeRFLiX++ with a stronger two-stage NeRF degradation simulator and a faster inter-viewpoint mixer.
NeRFLiX++ is capable of restoring photo-realistic ultra-high-resolution outputs from noisy low-resolution NeRF-rendered views.
arXiv Detail & Related papers (2023-06-10T09:19:19Z)
- NeRFLiX: High-Quality Neural View Synthesis by Learning a Degradation-Driven Inter-viewpoint MiXer [44.220611552133036]
We propose NeRFLiX, a general NeRF-agnostic restorer paradigm by learning a degradation-driven inter-viewpoint mixer.
We also propose an inter-viewpoint aggregation framework that is able to fuse highly related high-quality training images.
arXiv Detail & Related papers (2023-03-13T08:36:30Z)
- PANeRF: Pseudo-view Augmentation for Improved Neural Radiance Fields Based on Few-shot Inputs [3.818285175392197]
Neural radiance fields (NeRF) have promising applications for synthesizing novel views of complex scenes.
NeRF requires dense input views, typically numbering in the hundreds, for generating high-quality images.
We propose pseudo-view augmentation for NeRF, a scheme that expands the training data by considering the geometry of few-shot inputs.
arXiv Detail & Related papers (2022-11-23T08:01:10Z)
- R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis [76.07010495581535]
Rendering a single pixel requires querying the Neural Radiance Field network hundreds of times.
NeLF presents a more straightforward representation than NeRF for novel view synthesis.
We show the key to successfully learning a deep NeLF network is to have sufficient data.
arXiv Detail & Related papers (2022-03-31T17:57:05Z)
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF) that predicts per-point density and color with a multi-layer perceptron.
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is heuristic and not trained end-to-end for the task at hand (this heuristic is sketched just below the list).
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
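For context on the last entry: the "vanilla coarse-to-fine approach" it critiques is standard NeRF hierarchical sampling, where fine samples are drawn by inverse-transform sampling from the coarse pass's compositing weights. Below is a minimal sketch of that heuristic, assuming per-ray depth-bin edges and coarse weights; it is illustrative background, not code from any of the listed papers.

```python
import torch

def hierarchical_resample(bins, weights, n_fine):
    """Draw fine samples along each ray from the coarse weights.

    bins:    [R, M+1] depth-bin edges per ray
    weights: [R, M]   compositing weights from the coarse pass
    """
    pdf = weights / (weights.sum(-1, keepdim=True) + 1e-5)
    cdf = torch.cumsum(pdf, -1)
    cdf = torch.cat([torch.zeros_like(cdf[..., :1]), cdf], -1)  # [R, M+1]
    u = torch.rand(*cdf.shape[:-1], n_fine)                     # uniform draws
    idx = torch.searchsorted(cdf, u, right=True).clamp(1, bins.shape[-1] - 1)
    lo, hi = bins.gather(-1, idx - 1), bins.gather(-1, idx)
    c_lo, c_hi = cdf.gather(-1, idx - 1), cdf.gather(-1, idx)
    t = (u - c_lo) / (c_hi - c_lo + 1e-5)    # position within each bin
    return lo + t * (hi - lo)                # [R, n_fine] fine sample depths
```

"NeRF in detail" replaces this fixed resampling rule with a differentiable proposal module trained end-to-end with the fine network.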