Lighting, Reflectance and Geometry Estimation from 360$^{\circ}$
Panoramic Stereo
- URL: http://arxiv.org/abs/2104.09886v1
- Date: Tue, 20 Apr 2021 10:41:50 GMT
- Title: Lighting, Reflectance and Geometry Estimation from 360$^{\circ}$
Panoramic Stereo
- Authors: Junxuan Li, Hongdong Li and Yasuyuki Matsushita
- Abstract summary: We propose a method for estimating high-definition spatially-varying lighting, reflectance, and geometry of a scene from 360$^{\circ}$ stereo images.
Our model takes advantage of the 360$^{\circ}$ input to observe the entire scene with geometric detail, then jointly estimates the scene's properties with physical constraints.
- Score: 88.14090671267907
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a method for estimating high-definition spatially-varying
lighting, reflectance, and geometry of a scene from 360$^{\circ}$ stereo
images. Our model takes advantage of the 360$^{\circ}$ input to observe the
entire scene with geometric detail, then jointly estimates the scene's
properties with physical constraints. We first reconstruct a near-field
environment light for predicting the lighting at any 3D location within the
scene. Then we present a deep learning model that leverages the stereo
information to infer the reflectance and surface normal. Lastly, we incorporate
the physical constraints between lighting and geometry to refine the
reflectance of the scene. Both quantitative and qualitative experiments show
that our method, benefiting from the 360$^{\circ}$ observation of the scene,
outperforms prior state-of-the-art methods and enables more augmented reality
applications such as mirror-objects insertion.
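The abstract describes refining reflectance via the physical constraint linking lighting, geometry, and appearance. For a Lambertian surface this constraint is simply image = albedo × shading, so a known lighting and normal estimate lets the albedo be recovered by division. The sketch below is an illustrative toy example under that Lambertian assumption, not the paper's actual model; all array names and the single distant light are hypothetical stand-ins for the near-field environment light the paper predicts.

```python
import numpy as np

# Hypothetical per-pixel quantities (illustration only, not the paper's code):
# surface normals and a single assumed light direction/intensity stand in for
# the spatially-varying near-field environment light estimated by the method.
rng = np.random.default_rng(0)
H, W = 4, 4
normals = rng.normal(size=(H, W, 3))
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
light_dir = np.array([0.0, 0.0, 1.0])   # assumed distant light direction
light_intensity = 2.0

# Synthesize an observed image from a ground-truth albedo.
albedo_true = rng.uniform(0.2, 0.9, size=(H, W))
shading = np.clip(normals @ light_dir, 0.0, None) * light_intensity
image = albedo_true * shading

# Physical constraint: image = albedo * shading, so albedo = image / shading
# (only reliable where the surface actually receives light).
eps = 1e-3
albedo_refined = image / np.maximum(shading, eps)
lit = shading > eps
```

In lit regions the division exactly inverts the shading; in shadowed regions (shading near zero) the constraint carries no information, which is one reason joint estimation with learned priors is needed in practice.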
Related papers
- 3D Scene Geometry Estimation from 360$^\circ$ Imagery: A Survey [1.3654846342364308]
This paper provides a comprehensive survey of pioneering and state-of-the-art 3D scene geometry estimation methodologies.
We first revisit the basic concepts of the spherical camera model, and review the most common acquisition technologies and representation formats.
We then survey monocular layout and depth inference approaches, highlighting the recent advances in learning-based solutions suited for spherical data.
arXiv Detail & Related papers (2024-01-17T14:57:27Z)
- Self-calibrating Photometric Stereo by Neural Inverse Rendering [88.67603644930466]
This paper tackles the task of uncalibrated photometric stereo for 3D object reconstruction.
We propose a new method that jointly optimizes object shape, light directions, and light intensities.
Our method demonstrates state-of-the-art accuracy in light estimation and shape recovery on real-world datasets.
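Classic calibrated photometric stereo, which methods like the one above generalize by estimating the lighting as well, recovers a Lambertian pixel's normal and albedo from observations under known lights by linear least squares: stacking light directions into a matrix L, the observations satisfy I = L g with g = albedo · normal. A minimal sketch with synthetic data (the light setup below is assumed for illustration; the cited paper estimates it jointly rather than taking it as known):

```python
import numpy as np

# Known (calibrated) light directions, one per observed image.
lights = np.array([[ 0.0, 0.0, 1.0  ],
                   [ 0.5, 0.0, 0.866],
                   [ 0.0, 0.5, 0.866],
                   [-0.5, 0.0, 0.866]])
lights /= np.linalg.norm(lights, axis=1, keepdims=True)

# Ground-truth surface properties for one pixel (synthetic).
albedo = 0.7
normal = np.array([0.1, 0.2, 0.97])
normal /= np.linalg.norm(normal)

# Lambertian observations: I_k = albedo * max(l_k . n, 0)
intensities = albedo * np.clip(lights @ normal, 0.0, None)

# Solve L g = I in the least-squares sense; then g = albedo * normal,
# so the norm of g gives the albedo and its direction gives the normal.
g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)
albedo_est = np.linalg.norm(g)
normal_est = g / albedo_est
```

With all four lights on the visible hemisphere the system is over-determined and the recovery is exact; the uncalibrated setting tackled by the paper must break the ambiguity between light intensity and albedo that this calibrated solve sidesteps.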
arXiv Detail & Related papers (2022-07-16T02:46:15Z) - Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z) - Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo problem (MVPS).
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z)
- LUCES: A Dataset for Near-Field Point Light Source Photometric Stereo [30.31403197697561]
We introduce LUCES, the first real-world 'dataset for near-fieLd point light soUrCe photomEtric Stereo', comprising 14 objects made of a variety of materials.
A device with 52 LEDs was designed to illuminate each object, positioned 10 to 30 centimeters from the camera.
We evaluate the performance of the latest near-field Photometric Stereo algorithms on the proposed dataset.
arXiv Detail & Related papers (2021-04-27T12:30:42Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
- Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination [84.00096195633793]
We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair.
Our model is trained without any ground truth 3D data and only requires a held-out perspective view near the input stereo pair and a spherical panorama taken within each scene as supervision.
We demonstrate that our method can predict consistent spatially-varying lighting that is convincing enough to plausibly relight and insert highly specular virtual objects into real images.
arXiv Detail & Related papers (2020-03-18T17:46:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.