Crowdsampling the Plenoptic Function
- URL: http://arxiv.org/abs/2007.15194v1
- Date: Thu, 30 Jul 2020 02:52:10 GMT
- Title: Crowdsampling the Plenoptic Function
- Authors: Zhengqi Li, Wenqi Xian, Abe Davis, Noah Snavely
- Abstract summary: We present a new approach to novel view synthesis under time-varying illumination from unstructured collections of online public photos.
We introduce a new DeepMPI representation, motivated by observations on the sparsity structure of the plenoptic function.
Our method can synthesize the same compelling parallax and view-dependent effects as previous MPI methods, while simultaneously interpolating along changes in reflectance and illumination with time.
- Score: 56.10020793913216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many popular tourist landmarks are captured in a multitude of online, public
photos. These photos represent a sparse and unstructured sampling of the
plenoptic function for a particular scene. In this paper, we present a new
approach to novel view synthesis under time-varying illumination from such
data. Our approach builds on the recent multi-plane image (MPI) format for
representing local light fields under fixed viewing conditions. We introduce a
new DeepMPI representation, motivated by observations on the sparsity structure
of the plenoptic function, that allows for real-time synthesis of
photorealistic views that are continuous in both space and across changes in
lighting. Our method can synthesize the same compelling parallax and
view-dependent effects as previous MPI methods, while simultaneously
interpolating along changes in reflectance and illumination with time. We show
how to learn a model of these effects in an unsupervised way from an
unstructured collection of photos without temporal registration, demonstrating
significant improvements over recent work in neural rendering. More information
can be found at crowdsampling.io.
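The DeepMPI representation builds on the standard multi-plane image: a stack of fronto-parallel RGBA planes that is rendered by compositing the planes back to front with the "over" operator. As a rough illustration of that compositing step only (this is not the paper's code; the per-plane homography warp into the target view and the DeepMPI network are omitted, and the function and array names `composite_mpi`, `rgb`, and `alpha` are assumptions), a minimal NumPy sketch:

```python
import numpy as np

def composite_mpi(rgb, alpha):
    """Composite an MPI's RGBA planes back to front with the 'over' operator.

    rgb:   (D, H, W, 3) per-plane colors, ordered from back (d=0) to front.
    alpha: (D, H, W, 1) per-plane opacities in [0, 1].
    Returns an (H, W, 3) rendered image.
    """
    out = np.zeros(rgb.shape[1:], dtype=rgb.dtype)  # start from an empty canvas
    for d in range(rgb.shape[0]):                    # back-to-front over compositing
        out = rgb[d] * alpha[d] + out * (1.0 - alpha[d])
    return out

# Toy usage: 32 planes of a 4x4 image with random content.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rgb = rng.random((32, 4, 4, 3)).astype(np.float32)
    alpha = rng.random((32, 4, 4, 1)).astype(np.float32)
    print(composite_mpi(rgb, alpha).shape)  # (4, 4, 3)
```

In the full pipeline, each plane would first be warped into the novel viewpoint before this compositing, which is what produces the parallax and view-dependent effects the abstract describes.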
Related papers
- Sampling for View Synthesis: From Local Light Field Fusion to Neural Radiance Fields and Beyond [27.339452004523082]
Local light field fusion proposes an algorithm for practical view synthesis from an irregular grid of sampled views.
We achieve the perceptual quality of Nyquist rate view sampling while using up to 4000x fewer views.
We reprise some of the recent results on sparse and even single image view synthesis.
arXiv Detail & Related papers (2024-08-08T16:56:03Z)
- MultiDiff: Consistent Novel View Synthesis from a Single Image [60.04215655745264]
MultiDiff is a novel approach for consistent novel view synthesis of scenes from a single RGB image.
Our results demonstrate that MultiDiff outperforms state-of-the-art methods on the challenging, real-world datasets RealEstate10K and ScanNet.
arXiv Detail & Related papers (2024-06-26T17:53:51Z)
- Flying with Photons: Rendering Novel Views of Propagating Light [37.06220870989172]
We present an imaging and neural rendering technique that seeks to synthesize videos of light propagating through a scene from novel, moving camera viewpoints.
Our approach relies on a new ultrafast imaging setup to capture a first-of-its-kind, multi-viewpoint video dataset with picosecond-level temporal resolution.
arXiv Detail & Related papers (2024-04-09T17:48:52Z)
- SAMPLING: Scene-adaptive Hierarchical Multiplane Images Representation for Novel View Synthesis from a Single Image [60.52991173059486]
We introduce SAMPLING, a Scene-adaptive Hierarchical Multiplane Images Representation for Novel View Synthesis from a Single Image.
Our method demonstrates considerable performance gains on large-scale unbounded outdoor scenes from a single image on the KITTI dataset.
arXiv Detail & Related papers (2023-09-12T15:33:09Z)
- Neural Scene Chronology [79.51094408119148]
We aim to reconstruct a time-varying 3D model capable of producing photo-realistic renderings with independent control of viewpoint, illumination, and time.
In this work, we represent the scene as a space-time radiance field with a per-image illumination embedding, where temporally varying scene changes are encoded using a set of learned step functions (a rough sketch of such a step-function encoding appears after this list).
arXiv Detail & Related papers (2023-06-13T17:59:58Z)
- Few-shot Neural Radiance Fields Under Unconstrained Illumination [40.384916810850385]
We introduce a new challenge: synthesizing novel view images in practical environments with limited input multi-view images and varying lighting conditions.
NeRF, one of the pioneering works for this task, demands an extensive set of multi-view images taken under constrained illumination.
We propose ExtremeNeRF, which utilizes multi-view albedo consistency supported by geometric alignment.
arXiv Detail & Related papers (2023-03-21T10:32:27Z)
- SPARF: Neural Radiance Fields from Sparse and Noisy Poses [58.528358231885846]
We introduce Sparse Pose Adjusting Radiance Field (SPARF) to address the challenge of novel-view synthesis from only a few input images with noisy camera poses.
Our approach exploits multi-view geometry constraints to jointly learn the NeRF and refine the camera poses.
arXiv Detail & Related papers (2022-11-21T18:57:47Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be supervised solely on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
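The Neural Scene Chronology entry above encodes temporally varying scene changes with a set of learned step functions. As one plausible, hypothetical reading of that idea (not the paper's actual interface; the module and parameter names `SmoothStepEncoding`, `num_steps`, `feat_dim`, and `temperature` are assumptions), a smooth, differentiable step encoding could be trained jointly with a radiance field like this:

```python
import torch
import torch.nn as nn

class SmoothStepEncoding(nn.Module):
    """Encode a scalar time t as a sum of smooth, learnable step functions.

    Each step has a learnable transition time and a learnable feature vector;
    a sigmoid with a small temperature makes the step differentiable.
    """

    def __init__(self, num_steps: int = 16, feat_dim: int = 32, temperature: float = 0.01):
        super().__init__()
        # Transition times initialised uniformly in [0, 1] (t is assumed normalised).
        self.transitions = nn.Parameter(torch.linspace(0.0, 1.0, num_steps))
        # Feature vector contributed by each step once t passes its transition.
        self.features = nn.Parameter(torch.randn(num_steps, feat_dim) * 0.01)
        self.temperature = temperature

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (B,) normalised timestamps -> (B, feat_dim) time codes.
        gates = torch.sigmoid((t[:, None] - self.transitions[None, :]) / self.temperature)
        return gates @ self.features

# Toy usage: encode three timestamps.
enc = SmoothStepEncoding()
codes = enc(torch.tensor([0.1, 0.5, 0.9]))
print(codes.shape)  # torch.Size([3, 32])
```

The sharp, piecewise-constant behaviour that a small temperature induces is what lets such an encoding model abrupt scene changes (e.g., a billboard being replaced) rather than blending them smoothly over time.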