Ray Priors through Reprojection: Improving Neural Radiance Fields for
Novel View Extrapolation
- URL: http://arxiv.org/abs/2205.05922v1
- Date: Thu, 12 May 2022 07:21:17 GMT
- Title: Ray Priors through Reprojection: Improving Neural Radiance Fields for
Novel View Extrapolation
- Authors: Jian Zhang, Yuanqing Zhang, Huan Fu, Xiaowei Zhou, Bowen Cai, Jinchi
Huang, Rongfei Jia, Binqiang Zhao, Xing Tang
- Abstract summary: We study the novel view extrapolation setting in which (1) the training images describe an object well, and (2) there is a notable discrepancy between the training and test viewpoint distributions.
We propose a random ray casting policy that allows training unseen views using seen views.
A ray atlas pre-computed from the observed rays' viewing directions could further enhance the rendering quality for extrapolated views.
- Score: 35.47411859184933
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Radiance Fields (NeRF) have emerged as a potent paradigm for
representing scenes and synthesizing photo-realistic images. A main limitation
of conventional NeRFs is that they often fail to produce high-quality
renderings under novel viewpoints that are significantly different from the
training viewpoints. In this paper, instead of addressing few-shot image
synthesis, we study the novel view extrapolation setting in which (1) the
training images describe an object well, and (2) there is a notable
discrepancy between the training and test viewpoint distributions. We present
RapNeRF
(RAy Priors) as a solution. Our insight is that the inherent appearances of a
3D surface's arbitrary visible projections should be consistent. We thus
propose a random ray casting policy that allows training unseen views using
seen views. Furthermore, we show that a ray atlas pre-computed from the
observed rays' viewing directions could further enhance the rendering quality
for extrapolated views. One limitation is that RapNeRF tends to suppress
strong view-dependent effects, because it leverages the multi-view
consistency property.
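To make the reprojection idea concrete, here is a minimal NumPy sketch of one random ray casting step: given a surface point recovered from a seen ray, a virtual ray with a random origin is aimed at that same point, so its rendered color can be supervised with the observed pixel color. The helper name, the sphere-sampling scheme, and the commented loss are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def random_ray_through_point(x_surf, radius=1.0, rng=np.random.default_rng()):
    """Hypothetical helper: cast a randomly oriented ray that still hits
    the surface point x_surf, so the unseen ray reprojects onto the same
    3D surface location as the observed ray."""
    # Random direction on the unit sphere (origin placed relative to the point).
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    origin = x_surf + radius * v           # virtual camera position
    direction = x_surf - origin
    direction /= np.linalg.norm(direction)
    return origin, direction

# Usage: by multi-view consistency, the color rendered along the random
# ray should match the color observed for the seen ray.
x_surf = np.array([0.1, -0.2, 0.5])        # surface point from a seen ray
o, d = random_ray_through_point(x_surf)
# loss = (render(o, d) - observed_rgb) ** 2   # pseudo-supervision (sketch)
```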
Related papers
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
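A rough NumPy sketch of this ray-tracing step, under assumptions: `field` is a stand-in for the NeRF representation (returning densities and feature vectors), and the reflected ray is marched with standard volume-rendering weights. This illustrates the idea, not the paper's implementation.

```python
import numpy as np

def reflect(d, n):
    """Mirror view direction d about surface normal n (both unit vectors)."""
    return d - 2.0 * np.dot(d, n) * n

def trace_reflection(point, view_dir, normal, field, n_samples=32, far=4.0):
    """Cast the reflected ray from a point and accumulate features along
    it with volume-rendering weights, instead of querying a large
    view-dependent network at the point itself."""
    r = reflect(view_dir, normal)
    t = np.linspace(1e-3, far, n_samples)
    pts = point + t[:, None] * r
    sigma, feat = field(pts)                          # user-supplied field
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    w = alpha * trans
    return (w[:, None] * feat).sum(axis=0)            # reflected feature

# Toy field: constant density, feature = sample position.
f = trace_reflection(np.zeros(3), np.array([0, 0, -1.0]), np.array([0, 0, 1.0]),
                     lambda p: (np.full(len(p), 0.5), p))
```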
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- Simple-RF: Regularizing Sparse Input Radiance Fields with Simpler Solutions [5.699788926464751]
Neural Radiance Fields (NeRF) show impressive performance in photo-realistic free-view rendering of scenes.
Recent improvements on the NeRF such as TensoRF and ZipNeRF employ explicit models for faster optimization and rendering.
We show that supervising the depth estimated by a radiance field helps train it effectively with fewer views.
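A minimal sketch of depth supervision in this spirit, assuming access to the per-ray expected depth of the main radiance field and a (hypothetical) depth estimate from a simpler model; the loss form is illustrative.

```python
import numpy as np

def expected_depth(sigma, t):
    """Expected ray-termination depth under volume-rendering weights."""
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    w = alpha * trans
    return (w * t).sum() / (w.sum() + 1e-10)

# Hypothetical depth-supervision loss: pull the main field's depth toward
# the depth predicted by a simpler, more robust model on sparse inputs.
t = np.linspace(0.1, 4.0, 64)
sigma_main = np.where(np.abs(t - 2.0) < 0.1, 20.0, 0.0)   # main model density
depth_simple = 2.2                                        # simpler model's depth
loss = (expected_depth(sigma_main, t) - depth_simple) ** 2
```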
arXiv Detail & Related papers (2024-04-29T18:00:25Z)
- Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields [65.96818069005145]
Vanilla NeRF is viewer-centred: it simplifies rendering to light emission from 3D locations along the viewing direction.
Inspired by the emission theory of ancient Greeks, we make slight modifications on vanilla NeRF to train on multiple views of low-light scenes.
We introduce a surrogate concept, Concealing Fields, that reduces the transport of light during the volume rendering stage.
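A hedged sketch of how a per-sample concealing factor could damp light transport inside standard volume rendering; where exactly the factor enters in Aleth-NeRF may differ from this assumption.

```python
import numpy as np

def render_with_concealing(rgb, sigma, conceal, t):
    """Per-sample factors in [0, 1] attenuate light transport during
    volume rendering, so a normally-lit scene representation can explain
    low-light training images (illustrative placement of the factor)."""
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    w = alpha * trans * conceal            # concealing damps the transport
    return (w[:, None] * rgb).sum(axis=0)

t = np.linspace(0.1, 4.0, 16)
rgb = np.tile([0.8, 0.7, 0.6], (16, 1))    # well-lit scene color
low = render_with_concealing(rgb, np.full(16, 1.0), np.full(16, 0.3), t)
# Removing the concealing field (conceal = 1) yields a brighter render.
```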
arXiv Detail & Related papers (2023-03-10T09:28:09Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
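A small NumPy sketch of ray entropy regularization in this spirit: the per-ray opacity distribution is normalized and its Shannon entropy computed, so minimizing it concentrates density near a surface; the discretization details are assumptions.

```python
import numpy as np

def ray_entropy(sigma, t):
    """Shannon entropy of the normalized ray-termination distribution.
    Minimizing it pushes each ray's density mass onto few samples."""
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))
    alpha = 1.0 - np.exp(-sigma * delta)
    p = alpha / (alpha.sum() + 1e-10)
    return -(p * np.log(p + 1e-10)).sum()

t = np.linspace(0.1, 4.0, 64)
diffuse = np.full(64, 0.5)                           # smeared density: high entropy
peaked = np.where(np.abs(t - 2.0) < 0.1, 20.0, 0.0)  # surface-like: low entropy
print(ray_entropy(diffuse, t), ray_entropy(peaked, t))
```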
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs [79.00855490550367]
NeRF can produce photorealistic renderings of unseen viewpoints when many input views are available, but its quality degrades significantly with sparse inputs.
We address this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints.
Our model outperforms not only other methods that optimize over a single scene, but also conditional models that are extensively pre-trained on large multi-view datasets.
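As one concrete instance of geometry regularization, a depth-smoothness penalty over a patch rendered from an unobserved viewpoint might look like the sketch below; the exact RegNeRF losses differ in detail.

```python
import numpy as np

def depth_smoothness(depth_patch):
    """Penalize depth differences between neighboring pixels of a patch
    rendered from an unobserved viewpoint (illustrative regularizer)."""
    dx = depth_patch[:, 1:] - depth_patch[:, :-1]
    dy = depth_patch[1:, :] - depth_patch[:-1, :]
    return (dx ** 2).sum() + (dy ** 2).sum()

patch = np.random.default_rng(0).normal(2.0, 0.05, size=(8, 8))
loss_geom = depth_smoothness(patch)    # added to the photometric loss
```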
arXiv Detail & Related papers (2021-12-01T18:59:46Z)
- NeRFReN: Neural Radiance Fields with Reflections [16.28256369376256]
We introduce NeRFReN, which is built upon NeRF to model scenes with reflections.
We propose to split a scene into transmitted and reflected components, and model the two components with separate neural radiance fields.
Experiments on various self-captured scenes show that our method achieves high-quality novel view synthesis and physically sound depth estimation results.
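A minimal sketch of the two-component idea, assuming each component is rendered by its own radiance field and blended with a per-ray reflection fraction; the blending rule here is an assumption, not NeRFReN's exact formulation.

```python
import numpy as np

def compose(rgb_trans, rgb_refl, beta):
    """Blend separately rendered transmitted and reflected components
    with a per-ray reflection fraction beta in [0, 1]."""
    return rgb_trans + beta * rgb_refl

rgb_t = np.array([0.2, 0.3, 0.8])   # e.g. scene behind a window
rgb_r = np.array([0.9, 0.9, 0.9])   # e.g. mirrored highlight
print(compose(rgb_t, rgb_r, beta=0.15))
```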
arXiv Detail & Related papers (2021-11-30T09:36:00Z)
- Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis [86.38901313994734]
We present DietNeRF, a 3D neural scene representation estimated from a few images.
NeRF learns a continuous volumetric representation of a scene through multi-view consistency.
We introduce an auxiliary semantic consistency loss that encourages realistic renderings at novel poses.
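A sketch of a semantic consistency loss in this spirit, assuming embeddings come from a pretrained image encoder (e.g. CLIP); DietNeRF's exact formulation may differ.

```python
import numpy as np

def semantic_consistency_loss(emb_render, emb_real):
    """Cosine distance between semantic embeddings of a render at a
    novel pose and of a training image: 'same object from any view'."""
    a = emb_render / np.linalg.norm(emb_render)
    b = emb_real / np.linalg.norm(emb_real)
    return 1.0 - np.dot(a, b)

rng = np.random.default_rng(0)
e_render, e_real = rng.normal(size=512), rng.normal(size=512)
loss_sc = semantic_consistency_loss(e_render, e_real)
```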
arXiv Detail & Related papers (2021-04-01T17:59:31Z)
- Baking Neural Radiance Fields for Real-Time View Synthesis [41.07052395570522]
We present a method to train a NeRF, then precompute and store (i.e. "bake") it as a novel representation called a Sparse Neural Radiance Grid (SNeRG).
The resulting scene representation retains NeRF's ability to render fine geometric details and view-dependent appearance, is compact, and can be rendered in real-time.
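A toy sketch of why baking helps: once density and diffuse color live on a precomputed grid, rendering reduces to cheap lookups. SNeRG additionally stores features for a small view-dependent MLP, omitted here, and the grid layout below is an assumption.

```python
import numpy as np

# Baked grid: density and diffuse color precomputed on a voxel grid,
# so rendering needs no large per-sample MLP queries.
N = 64
grid_sigma = np.zeros((N, N, N))
grid_rgb = np.zeros((N, N, N, 3))
grid_sigma[28:36, 28:36, 28:36] = 25.0      # a small opaque cube
grid_rgb[28:36, 28:36, 28:36] = [0.9, 0.4, 0.2]

def lookup(pts, lo=-1.0, hi=1.0):
    """Nearest-neighbor fetch from the baked grid (trilinear in practice)."""
    idx = np.clip(((pts - lo) / (hi - lo) * (N - 1)).astype(int), 0, N - 1)
    i, j, k = idx.T
    return grid_sigma[i, j, k], grid_rgb[i, j, k]

sigma, rgb = lookup(np.array([[0.0, 0.0, 0.0], [0.9, 0.9, 0.9]]))
print(sigma, rgb)   # inside the cube vs. empty space
```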
arXiv Detail & Related papers (2021-03-26T17:59:52Z)
- pixelNeRF: Neural Radiance Fields from One or Few Images [20.607712035278315]
pixelNeRF is a learning framework that predicts a continuous neural scene representation conditioned on one or few input images.
We conduct experiments on ShapeNet benchmarks for single image novel view synthesis tasks with held-out objects.
In all cases, pixelNeRF outperforms current state-of-the-art baselines for novel view synthesis and single image 3D reconstruction.
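A minimal sketch of image-conditioned querying in the pixelNeRF spirit: a 3D point is projected into the input view and the aligned CNN feature is fetched for the NeRF MLP. The intrinsics, feature map, and shapes here are illustrative assumptions (points are taken to be in camera coordinates).

```python
import numpy as np

def pixel_feature(x_cam, K, feat_map):
    """Project a 3D query point into the input view with intrinsics K
    and fetch the aligned image feature (nearest pixel for brevity)."""
    u, v, z = K @ x_cam
    px, py = int(u / z), int(v / z)
    H, W, _ = feat_map.shape
    px, py = np.clip(px, 0, W - 1), np.clip(py, 0, H - 1)
    return feat_map[py, px]                # concatenated with (x, view dir)

K = np.array([[100.0, 0, 64], [0, 100.0, 64], [0, 0, 1.0]])
feats = np.random.default_rng(0).normal(size=(128, 128, 32))  # CNN features
f = pixel_feature(np.array([0.1, -0.05, 2.0]), K, feats)
```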
arXiv Detail & Related papers (2020-12-03T18:59:54Z)
- NeRF++: Analyzing and Improving Neural Radiance Fields [117.73411181186088]
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings.
NeRF fits multi-layer perceptrons representing view-invariant opacity and view-dependent color volumes to a set of training images.
We address a parametrization issue involved in applying NeRF to 360 captures of objects within large-scale, 3D scenes.
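The parametrization fix can be sketched directly: NeRF++ represents a point outside the unit sphere by a bounded inverted-sphere coordinate, roughly as below (a minimal sketch of the outer parametrization only).

```python
import numpy as np

def inverted_sphere(x):
    """A point outside the unit sphere at radius r is represented as
    (x/r, 1/r): a bounded 4D coordinate even as r -> infinity, which
    lets the background be modeled without an arbitrary far plane."""
    r = np.linalg.norm(x)
    return np.concatenate([x / r, [1.0 / r]])

print(inverted_sphere(np.array([0.0, 0.0, 10.0])))   # [0, 0, 1, 0.1]
print(inverted_sphere(np.array([0.0, 0.0, 1e6])))    # 1/r -> 0 stays bounded
```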
arXiv Detail & Related papers (2020-10-15T03:24:14Z)