Few-shot Neural Radiance Fields Under Unconstrained Illumination
- URL: http://arxiv.org/abs/2303.11728v3
- Date: Mon, 18 Dec 2023 10:40:12 GMT
- Title: Few-shot Neural Radiance Fields Under Unconstrained Illumination
- Authors: SeokYeong Lee, JunYong Choi, Seungryong Kim, Ig-Jae Kim, Junghyun Cho
- Abstract summary: We introduce a new challenge for synthesizing novel view images in practical environments with limited input multi-view images and varying lighting conditions.
NeRF, one of the pioneering works for this task, demands an extensive set of multi-view images taken under constrained illumination.
We suggest ExtremeNeRF, which utilizes multi-view albedo consistency, supported by geometric alignment.
- Score: 40.384916810850385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce a new challenge for synthesizing novel view
images in practical environments with limited input multi-view images and
varying lighting conditions. Neural radiance fields (NeRF), one of the
pioneering works for this task, demand an extensive set of multi-view images
taken under constrained illumination, which is often unattainable in real-world
settings. While some previous works have managed to synthesize novel views
given images with different illumination, their performance still relies on a
substantial number of input multi-view images. To address this problem, we
suggest ExtremeNeRF, which utilizes multi-view albedo consistency, supported by
geometric alignment. Specifically, we extract intrinsic image components that
should be illumination-invariant across different views, enabling direct
appearance comparison between the input and novel view under unconstrained
illumination. We offer thorough experimental results for task evaluation,
employing the newly created NeRF Extreme benchmark, the first in-the-wild
benchmark for novel view synthesis under multiple viewing directions and
varying illuminations.
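To make the core idea of multi-view albedo consistency concrete, here is a minimal sketch of what such a consistency term could look like: given per-view albedo maps predicted by the model and pixel correspondences obtained from geometric alignment, matched pixels should have the same albedo regardless of illumination. This is not the authors' implementation; the function name, the correspondence format, and the L1 penalty are assumptions for illustration.

```python
import numpy as np

def albedo_consistency_loss(albedo_a, albedo_b, matches, valid):
    """Hedged sketch of a multi-view albedo-consistency term.

    albedo_a, albedo_b : (H, W, 3) predicted albedo maps for two views
    matches            : (N, 4) integer array of matched pixels
                         [row_a, col_a, row_b, col_b], e.g. obtained by warping
                         view A into view B with the estimated geometry
    valid              : (N,) boolean mask for correspondences that survive
                         occlusion / depth checks
    """
    ra, ca, rb, cb = matches[valid].T
    diff = albedo_a[ra, ca] - albedo_b[rb, cb]   # (M, 3) albedo residuals
    return np.mean(np.abs(diff))                 # L1 penalty on the residuals

# Toy usage: identical albedo maps give zero loss.
H, W = 8, 8
alb = np.random.rand(H, W, 3)
m = np.array([[2, 3, 2, 3], [5, 6, 5, 6]])
print(albedo_consistency_loss(alb, alb, m, np.array([True, True])))  # -> 0.0
```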
Related papers
- Sampling for View Synthesis: From Local Light Field Fusion to Neural Radiance Fields and Beyond [27.339452004523082]
Local light field fusion proposes an algorithm for practical view synthesis from an irregular grid of sampled views.
We achieve the perceptual quality of Nyquist rate view sampling while using up to 4000x fewer views.
We reprise some of the recent results on sparse and even single image view synthesis.
arXiv Detail & Related papers (2024-08-08T16:56:03Z)
- MultiDiff: Consistent Novel View Synthesis from a Single Image [60.04215655745264]
MultiDiff is a novel approach for consistent novel view synthesis of scenes from a single RGB image.
Our results demonstrate that MultiDiff outperforms state-of-the-art methods on the challenging, real-world datasets RealEstate10K and ScanNet.
arXiv Detail & Related papers (2024-06-26T17:53:51Z)
- Learning Novel View Synthesis from Heterogeneous Low-light Captures [7.888623669945243]
We propose to decompose illumination, reflectance, and noise from the input views, based on the assumption that reflectance remains invariant across heterogeneous views.
To cope with heterogeneous brightness and noise levels across views, we learn an illumination embedding and optimize a noise map individually for each view.
arXiv Detail & Related papers (2024-03-20T06:44:26Z)
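The decomposition described in the entry above is in the spirit of Retinex-style intrinsic decomposition. Below is a hedged sketch of such an objective, with the per-view illumination treated as a dense map rather than the paper's learned embedding; all names, shapes, and weights are illustrative assumptions, not the paper's code.

```python
import numpy as np

def decomposition_loss(views, reflectance, illuminations, noise_maps, w_noise=0.1):
    """Retinex-style sketch: view_i ~= reflectance * illumination_i + noise_i.

    views         : list of (H, W, 3) observed images
    reflectance   : (H, W, 3) shared reflectance estimate
    illuminations : list of (H, W, 1) per-view illumination estimates
    noise_maps    : list of (H, W, 3) per-view noise estimates
    """
    recon = 0.0
    for img, illum, noise in zip(views, illuminations, noise_maps):
        recon += np.mean((reflectance * illum + noise - img) ** 2)
    # Penalize large noise maps so the shared reflectance, not the per-view
    # noise, explains most of the image content.
    noise_penalty = w_noise * sum(np.mean(np.abs(n)) for n in noise_maps)
    return recon / len(views) + noise_penalty

# Toy usage with two 8x8 views.
H, W = 8, 8
views = [np.random.rand(H, W, 3) for _ in range(2)]
refl = np.random.rand(H, W, 3)
illums = [np.random.rand(H, W, 1) for _ in range(2)]
noises = [np.zeros((H, W, 3)) for _ in range(2)]
print(decomposition_loss(views, refl, illums, noises))
```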
- SAMPLING: Scene-adaptive Hierarchical Multiplane Images Representation for Novel View Synthesis from a Single Image [60.52991173059486]
We introduce SAMPLING, a Scene-adaptive Hierarchical Multiplane Images Representation for Novel View Synthesis from a Single Image.
Our method demonstrates considerable performance gains in large-scale unbounded outdoor scenes using a single image on the KITTI dataset.
arXiv Detail & Related papers (2023-09-12T15:33:09Z)
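The SAMPLING entry above builds on multiplane images (MPIs), which render a view by alpha-compositing a stack of fronto-parallel RGBA planes from back to front. The following is a minimal sketch of that standard compositing step only, not SAMPLING's scene-adaptive hierarchical variant.

```python
import numpy as np

def composite_mpi(planes_rgb, planes_alpha):
    """Back-to-front over-compositing of a multiplane image.

    planes_rgb   : (D, H, W, 3) colors, plane 0 = farthest
    planes_alpha : (D, H, W, 1) opacities in [0, 1]
    Returns the composited (H, W, 3) image.
    """
    out = np.zeros(planes_rgb.shape[1:])
    for rgb, alpha in zip(planes_rgb, planes_alpha):   # far -> near
        out = rgb * alpha + out * (1.0 - alpha)        # standard "over" operator
    return out

# Toy usage with 4 planes of an 8x8 image.
D, H, W = 4, 8, 8
img = composite_mpi(np.random.rand(D, H, W, 3), np.random.rand(D, H, W, 1))
print(img.shape)  # (8, 8, 3)
```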
- TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
It extends TensoRF, a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z)
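The tensor factorization behind TensoRF-style radiance fields approximates a dense 3D feature grid by sums of outer products between a plane (matrix) factor and a line (vector) factor, which is what keeps the representation compact. The single-component sketch below is a simplification for illustration, not the papers' code.

```python
import numpy as np

def vm_component(plane_xy, line_z):
    """One vector-matrix (VM) component of a factorized 3D grid.

    plane_xy : (X, Y) matrix factor over the x-y plane
    line_z   : (Z,)   vector factor along the z axis
    Returns an (X, Y, Z) tensor equal to the outer product of the two factors.
    """
    return plane_xy[:, :, None] * line_z[None, None, :]

# A full VM decomposition sums many such components over the three axis
# pairings; here we simply sum three x-y / z components for illustration.
X, Y, Z = 16, 16, 16
grid = sum(vm_component(np.random.rand(X, Y), np.random.rand(Z)) for _ in range(3))
print(grid.shape)  # (16, 16, 16)
```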
- SPARF: Neural Radiance Fields from Sparse and Noisy Poses [58.528358231885846]
We introduce Sparse Pose Adjusting Radiance Field (SPARF) to address the challenge of novel-view synthesis.
Our approach exploits multi-view geometry constraints in order to jointly learn the NeRF and refine the camera poses.
arXiv Detail & Related papers (2022-11-21T18:57:47Z)
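The SPARF summary above mentions jointly learning the NeRF and refining the camera poses. A minimal, hedged sketch of that joint-optimization pattern follows: pose corrections are learnable parameters updated by the same photometric loss as the scene model. The tiny MLP, the translation-only correction, and the synthetic supervision are illustrative assumptions, not SPARF's actual method.

```python
import torch
import torch.nn as nn

class TinyField(nn.Module):
    """Toy stand-in for a radiance field: maps a 3D point to an RGB color."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, x):
        return torch.sigmoid(self.net(x))

n_views = 5
field = TinyField()
pose_deltas = nn.Parameter(torch.zeros(n_views, 3))   # learnable per-view translation corrections
optim = torch.optim.Adam(list(field.parameters()) + [pose_deltas], lr=1e-3)

# Fake supervision: per-view sample points (in noisy camera frames) and colors.
points = torch.rand(n_views, 128, 3)
colors = torch.rand(n_views, 128, 3)

for step in range(100):
    optim.zero_grad()
    corrected = points + pose_deltas[:, None, :]          # apply pose correction per view
    loss = torch.mean((field(corrected) - colors) ** 2)   # photometric loss
    loss.backward()                                       # gradients reach field and poses
    optim.step()
```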
- Crowdsampling the Plenoptic Function [56.10020793913216]
We present a new approach to novel view synthesis under time-varying illumination from crowdsampled internet photo collections.
We introduce a new DeepMPI representation, motivated by observations on the sparsity structure of the plenoptic function.
Our method can synthesize the same compelling parallax and view-dependent effects as previous MPI methods, while simultaneously interpolating along changes in reflectance and illumination with time.
arXiv Detail & Related papers (2020-07-30T02:52:10Z)
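The entry above mentions interpolating along changes in reflectance and illumination over time. A very reduced sketch of that idea is below, with the learned per-photo appearance code replaced by a simple RGB tint; names and shapes are illustrative assumptions, not the DeepMPI representation. The tinted planes could then be composited with the back-to-front step sketched for the SAMPLING entry above.

```python
import numpy as np

def relight_planes(planes_rgb, tint_a, tint_b, t):
    """Blend two per-photo illumination codes and modulate an MPI's plane colors.

    planes_rgb : (D, H, W, 3) base plane colors
    tint_a/b   : (3,) RGB tints standing in for learned appearance codes
    t          : interpolation weight in [0, 1]
    """
    tint = (1.0 - t) * np.asarray(tint_a) + t * np.asarray(tint_b)
    return planes_rgb * tint   # broadcasts the tint over every plane and pixel
```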