FlipNeRF: Flipped Reflection Rays for Few-shot Novel View Synthesis
- URL: http://arxiv.org/abs/2306.17723v4
- Date: Mon, 14 Aug 2023 13:41:48 GMT
- Title: FlipNeRF: Flipped Reflection Rays for Few-shot Novel View Synthesis
- Authors: Seunghyeon Seo, Yeonjin Chang, Nojun Kwak
- Abstract summary: We propose FlipNeRF, a novel regularization method for few-shot novel view synthesis that utilizes flipped reflection rays.
FlipNeRF estimates more reliable outputs while effectively reducing floating artifacts across different scene structures.
- Score: 30.25904672829623
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Radiance Fields (NeRF) have become a mainstream approach to
novel view synthesis thanks to their remarkable rendering quality and simple
architecture. Although NeRF has been extended in various directions that
continuously improve its performance, the need for a dense set of multi-view
images remains a stumbling block for practical applications. In this work, we
propose FlipNeRF, a novel regularization method for few-shot novel view
synthesis that utilizes flipped reflection rays. The flipped reflection rays
are derived explicitly from the input ray directions and the estimated normal
vectors; they serve as effective additional training rays while enabling the
model to estimate more accurate surface normals and learn the 3D geometry
effectively. Since the surface normal and the scene depth are both derived
from the estimated densities along a ray, an accurate surface normal leads to
more exact depth estimation, a key factor for few-shot novel view synthesis.
Furthermore, with our proposed Uncertainty-aware Emptiness Loss and Bottleneck
Feature Consistency Loss, FlipNeRF estimates more reliable outputs while
effectively reducing floating artifacts across different scene structures, and
enhances the feature-level consistency between pairs of rays cast toward
photo-consistent pixels without any additional feature extractor,
respectively. FlipNeRF achieves state-of-the-art performance on multiple
benchmarks across all scenarios.
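To make the core idea concrete, below is a minimal NumPy sketch of how a flipped reflection ray can be derived from an input ray direction and an estimated surface normal via the standard mirror-reflection formula. The function name, the origin convention, and the toy inputs are illustrative assumptions, not the paper's exact implementation.
```python
import numpy as np

def flipped_reflection_ray(ray_origin, ray_dir, normal, t_hit):
    """Derive a ray that targets the same surface point from the mirrored
    direction (a sketch; names and conventions are illustrative)."""
    d = ray_dir / np.linalg.norm(ray_dir)    # unit input ray direction
    n = normal / np.linalg.norm(normal)      # unit estimated surface normal
    p = ray_origin + t_hit * d               # estimated surface point on the ray
    r = d - 2.0 * np.dot(d, n) * n           # mirror reflection of d about n
    flip_origin = p + t_hit * r              # start t_hit away along the reflection...
    flip_dir = -r                            # ...and travel back toward p
    return flip_origin, flip_dir

# Head-on example: a ray looking down -z at a surface facing +z; for
# perpendicular incidence the flipped ray coincides with the original.
o, d, n = np.zeros(3), np.array([0., 0., -1.]), np.array([0., 0., 1.])
print(flipped_reflection_ray(o, d, n, t_hit=2.0))
```
Note that the normal itself is obtained from the estimated densities (commonly the negative normalized density gradient), which is why, as the abstract argues, more accurate normals and more exact depths reinforce each other.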
Related papers
- Few-shot NeRF by Adaptive Rendering Loss Regularization [78.50710219013301]
Novel view synthesis with sparse inputs poses great challenges to Neural Radiance Fields (NeRF).
Recent works demonstrate that frequency regularization of the positional encoding can achieve promising results for few-shot NeRF.
We propose Adaptive Rendering loss regularization for few-shot NeRF, dubbed AR-NeRF.
arXiv Detail & Related papers (2024-10-23T13:05:26Z)
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- NeRF-VPT: Learning Novel View Representations with Neural Radiance Fields via View Prompt Tuning [63.39461847093663]
We propose NeRF-VPT, an innovative method for novel view synthesis.
Our proposed NeRF-VPT employs a cascading view prompt tuning paradigm, wherein RGB information gained from preceding rendering outcomes serves as instructive visual prompts for subsequent rendering stages.
NeRF-VPT only requires sampling RGB data from previous stage renderings as priors at each training stage, without relying on extra guidance or complex techniques.
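As a toy illustration of this cascading idea, the sketch below feeds each stage's rendered RGB back in as the prompt for the next stage; `render_stage`, the blending rule, and the stage count are hypothetical stand-ins for the actual NeRF stages.
```python
import numpy as np

def render_stage(prompt_rgb):
    """Hypothetical stand-in for one NeRF-VPT rendering stage: a real stage
    is a NeRF conditioned on the prompt image; here a constant prediction is
    blended with the prompt purely to show the information flow."""
    pred = np.full_like(prompt_rgb, 0.5)      # pretend radiance prediction
    return 0.5 * pred + 0.5 * prompt_rgb      # prompt guides the stage output

rgb = np.zeros((64, 64, 3))                   # stage-0 prompt: blank image
for _ in range(3):                            # cascade: each output -> next prompt
    rgb = render_stage(rgb)
```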
arXiv Detail & Related papers (2024-03-02T22:08:10Z)
- Pano-NeRF: Synthesizing High Dynamic Range Novel Views with Geometry from Sparse Low Dynamic Range Panoramic Images [82.1477261107279]
We propose irradiance fields estimated from sparse LDR panoramic images, which increase the observation count for faithful geometry recovery.
Experiments demonstrate that the irradiance fields outperform state-of-the-art methods on both geometry recovery and HDR reconstruction.
arXiv Detail & Related papers (2023-12-26T08:10:22Z)
- Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Field (NeRF) methods suffer in the presence of reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
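A minimal sketch of the kind of composition this implies: render each parallel sub-space separately, then blend the results per pixel with softmax weights. The shapes, the softmax blending, and the variable names are assumptions for illustration, not MS-NeRF's exact decoder.
```python
import numpy as np

def compose_subspaces(sub_rgbs, sub_logits):
    """Blend K per-sub-space renderings with per-pixel softmax weights
    (illustrative composition over parallel sub-spaces)."""
    w = np.exp(sub_logits - sub_logits.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)              # softmax over the K sub-spaces
    return (w[..., None] * sub_rgbs).sum(axis=0)   # weighted sum -> final RGB

K, H, W = 4, 8, 8
rgbs = np.random.rand(K, H, W, 3)     # one rendering per sub-space
logits = np.random.randn(K, H, W)     # per-pixel sub-space weight logits
final = compose_subspaces(rgbs, logits)            # shape (H, W, 3)
```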
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency caused by insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
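A small sketch of what ray entropy minimization can look like: compute per-sample opacities from densities, normalize them into a distribution along the ray, and penalize its Shannon entropy so density concentrates on few samples. InfoNeRF's exact normalization may differ; this is an illustrative variant.
```python
import numpy as np

def ray_entropy(sigmas, deltas):
    """Shannon entropy of the normalized opacity distribution along a ray;
    adding this as a loss term discourages diffuse, smeared-out density."""
    alphas = 1.0 - np.exp(-sigmas * deltas)   # per-sample opacity
    p = alphas / (alphas.sum() + 1e-10)       # normalized distribution on the ray
    return -(p * np.log(p + 1e-10)).sum()

sigmas = np.array([0.0, 0.1, 5.0, 0.2, 0.0])  # density concentrated at one sample
deltas = np.full(5, 0.1)                      # spacing between ray samples
print(ray_entropy(sigmas, deltas))            # low entropy: mass on one surface
```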
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- NeRFReN: Neural Radiance Fields with Reflections [16.28256369376256]
We introduce NeRFReN, which is built upon NeRF to model scenes with reflections.
We propose to split a scene into transmitted and reflected components, and model the two components with separate neural radiance fields.
Experiments on various self-captured scenes show that our method achieves high-quality novel view synthesis and physically sound depth estimation results.
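As a sketch of the split described above, the final image can be formed by rendering the transmitted and reflected fields separately and blending them with a per-pixel reflection strength; the additive blend and the `beta` map here are illustrative assumptions.
```python
import numpy as np

def compose_trans_refl(rgb_trans, rgb_refl, beta):
    """Combine separately rendered transmitted and reflected components;
    beta is a per-pixel reflection strength (illustrative blending rule)."""
    return rgb_trans + beta[..., None] * rgb_refl

H, W = 8, 8
trans = np.random.rand(H, W, 3)   # transmitted radiance field rendering
refl = np.random.rand(H, W, 3)    # reflected radiance field rendering
beta = np.random.rand(H, W)       # learned reflection strength map
img = compose_trans_refl(trans, refl, beta)    # final composite image
```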
arXiv Detail & Related papers (2021-11-30T09:36:00Z)