Lighthouse: Predicting Lighting Volumes for Spatially-Coherent
Illumination
- URL: http://arxiv.org/abs/2003.08367v2
- Date: Wed, 13 May 2020 17:04:29 GMT
- Title: Lighthouse: Predicting Lighting Volumes for Spatially-Coherent
Illumination
- Authors: Pratul P. Srinivasan, Ben Mildenhall, Matthew Tancik, Jonathan T.
Barron, Richard Tucker, Noah Snavely
- Abstract summary: We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair.
Our model is trained without any ground truth 3D data and only requires a held-out perspective view near the input stereo pair and a spherical panorama taken within each scene as supervision.
We demonstrate that our method can predict consistent spatially-varying lighting that is convincing enough to plausibly relight and insert highly specular virtual objects into real images.
- Score: 84.00096195633793
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a deep learning solution for estimating the incident illumination
at any 3D location within a scene from an input narrow-baseline stereo image
pair. Previous approaches for predicting global illumination from images either
predict just a single illumination for the entire scene, or separately estimate
the illumination at each 3D location without enforcing that the predictions are
consistent with the same 3D scene. Instead, we propose a deep learning model
that estimates a 3D volumetric RGBA model of a scene, including content outside
the observed field of view, and then uses standard volume rendering to estimate
the incident illumination at any 3D location within that volume. Our model is
trained without any ground truth 3D data and only requires a held-out
perspective view near the input stereo pair and a spherical panorama taken
within each scene as supervision, as opposed to prior methods for
spatially-varying lighting estimation, which require ground truth scene
geometry for training. We demonstrate that our method can predict consistent
spatially-varying lighting that is convincing enough to plausibly relight and
insert highly specular virtual objects into real images.
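The key rendering operation the abstract describes, compositing a predicted RGBA volume along rays to obtain the incident illumination at a query point, is standard emission-absorption volume rendering. The NumPy sketch below is a minimal illustration of that operation, not the paper's implementation: the grid resolution, nearest-neighbor voxel lookups, and the `render_environment_map` helper are all assumptions made for the example.

```python
import numpy as np

def composite_ray(rgbs, alphas):
    """Front-to-back alpha compositing (emission-absorption volume rendering).

    rgbs:   (S, 3) colors sampled along the ray, ordered near to far.
    alphas: (S,)   opacities in [0, 1] at the same samples.
    Returns the RGB radiance arriving from this ray's direction.
    """
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = transmittance * alphas              # per-sample contribution
    return (weights[:, None] * rgbs).sum(axis=0)

def render_environment_map(volume_rgba, grid_min, grid_max, point,
                           n_dirs=(16, 32), n_samples=64, t_far=4.0):
    """Render a lat-long environment map of the incident illumination at
    `point` by marching rays through a dense RGBA grid (nearest-neighbor
    lookups; a hypothetical stand-in for the paper's rendering step)."""
    D, H, W, _ = volume_rgba.shape
    env = np.zeros(n_dirs + (3,))
    thetas = (np.arange(n_dirs[0]) + 0.5) / n_dirs[0] * np.pi      # polar
    phis = (np.arange(n_dirs[1]) + 0.5) / n_dirs[1] * 2.0 * np.pi  # azimuth
    ts = np.linspace(0.0, t_far, n_samples)
    for i, th in enumerate(thetas):
        for j, ph in enumerate(phis):
            d = np.array([np.sin(th) * np.cos(ph),
                          np.sin(th) * np.sin(ph),
                          np.cos(th)])
            pts = point[None, :] + ts[:, None] * d[None, :]        # (S, 3)
            # Map world coordinates to voxel indices, clamped to the grid.
            u = (pts - grid_min) / (grid_max - grid_min)
            idx = np.clip((u * [D, H, W]).astype(int), 0, [D - 1, H - 1, W - 1])
            samples = volume_rgba[idx[:, 0], idx[:, 1], idx[:, 2]]  # (S, 4)
            env[i, j] = composite_ray(samples[:, :3], samples[:, 3])
    return env

# Example usage on hypothetical data: a random 32^3 volume on a [-1, 1]^3 grid.
vol = np.random.rand(32, 32, 32, 4)
env = render_environment_map(vol, np.array([-1.0, -1.0, -1.0]),
                             np.array([1.0, 1.0, 1.0]),
                             np.array([0.0, 0.0, 0.0]))
```

A practical implementation would use trilinear interpolation and batched GPU ray marching rather than per-ray Python loops, but the compositing arithmetic is unchanged; each rendered environment map can then be used to relight a virtual object placed at that 3D location.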
Related papers
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- Towards High-Fidelity Single-view Holistic Reconstruction of Indoor Scenes [50.317223783035075]
We present a new framework to reconstruct holistic 3D indoor scenes from single-view images.
We propose an instance-aligned implicit function (InstPIFu) for detailed object reconstruction.
Our code and model will be made publicly available.
arXiv Detail & Related papers (2022-07-18T14:54:57Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting [149.1673041605155]
We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse rendering framework that explicitly models 3D spatially-varying lighting.
arXiv Detail & Related papers (2021-09-13T15:29:03Z)
- Lighting, Reflectance and Geometry Estimation from 360$^{\circ}$ Panoramic Stereo [88.14090671267907]
We propose a method for estimating high-definition spatially-varying lighting, reflectance, and geometry of a scene from 360$^{\circ}$ stereo images.
Our model takes advantage of the 360$^{\circ}$ input to observe the entire scene with geometric detail, then jointly estimates the scene's properties with physical constraints.
arXiv Detail & Related papers (2021-04-20T10:41:50Z)
- EMLight: Lighting Estimation via Spherical Distribution Approximation [33.26530733479459]
We propose a framework that leverages a regression network and a neural projector for accurate illumination estimation.
We decompose the illumination map into a spherical light distribution, a light intensity, and an ambient term; a minimal sketch of this decomposition appears after this list.
Guided by the predicted spherical distribution, light intensity, and ambient term, the neural projector synthesizes panoramic illumination maps with realistic light frequency.
arXiv Detail & Related papers (2020-12-21T04:54:08Z)
- NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis [45.71507069571216]
We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting.
The method produces a 3D representation that can be rendered from novel viewpoints under arbitrary lighting conditions.
arXiv Detail & Related papers (2020-12-07T18:56:08Z)
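As referenced in the EMLight entry above, an illumination map can be decomposed into a spherical light distribution, a light intensity, and an ambient term. The sketch below reconstructs a lat-long panorama from such a decomposition using spherical-Gaussian-style lobes; the lobe parameterization, sharpness, and map resolution are assumptions for illustration, and EMLight itself uses a learned neural projector rather than this closed form.

```python
import numpy as np

def spherical_dirs(height=64, width=128):
    """Unit direction for every pixel of a lat-long panorama."""
    theta = (np.arange(height) + 0.5) / height * np.pi        # polar angle
    phi = (np.arange(width) + 0.5) / width * 2.0 * np.pi      # azimuth
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    return np.stack([np.sin(th) * np.cos(ph),
                     np.sin(th) * np.sin(ph),
                     np.cos(th)], axis=-1)                    # (H, W, 3)

def reconstruct_illumination(lobe_dirs, lobe_weights, intensity, ambient,
                             sharpness=32.0, height=64, width=128):
    """Panorama ~ intensity * sum_k w_k * G(d; lobe_k) + ambient.

    lobe_dirs:    (K, 3) unit directions of the light lobes.
    lobe_weights: (K,)   distribution weights (summing to 1).
    intensity:    (3,)   global RGB light intensity.
    ambient:      (3,)   constant RGB ambient term.
    """
    dirs = spherical_dirs(height, width)                      # (H, W, 3)
    pano = np.tile(ambient, (height, width, 1)).astype(float)
    for d, w in zip(lobe_dirs, lobe_weights):
        # Spherical-Gaussian-style lobe centered on direction d.
        lobe = np.exp(sharpness * (dirs @ d - 1.0))           # (H, W)
        pano += w * lobe[..., None] * intensity
    return pano

# Example usage with hypothetical values: two lobes, warm light, dim ambient.
pano = reconstruct_illumination(
    lobe_dirs=np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]]),
    lobe_weights=np.array([0.7, 0.3]),
    intensity=np.array([5.0, 4.5, 4.0]),
    ambient=np.array([0.2, 0.2, 0.25]))
```

In EMLight the lobe directions and weights come from the regression network and the final map from the neural projector; this closed-form version only conveys the structure of the decomposition.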
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.