A CNN Based Approach for the Point-Light Photometric Stereo Problem
- URL: http://arxiv.org/abs/2210.04655v1
- Date: Mon, 10 Oct 2022 12:57:12 GMT
- Title: A CNN Based Approach for the Point-Light Photometric Stereo Problem
- Authors: Fotios Logothetis, Roberto Mecca, Ignas Budvytis, Roberto Cipolla
- Abstract summary: We propose a CNN-based approach capable of handling realistic assumptions by leveraging recent improvements of deep neural networks for far-field Photometric Stereo.
Our approach outperforms the state-of-the-art on the DiLiGenT real world dataset.
In order to measure the performance of our approach on near-field point-light source PS data, we introduce LUCES, the first real-world 'dataset for near-fieLd point light soUrCe photomEtric Stereo'.
- Score: 26.958763133729846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstructing the 3D shape of an object using several images under different
light sources is a very challenging task, especially when realistic assumptions
such as light propagation and attenuation, perspective viewing geometry and
specular light reflection are considered. Many works tackling the Photometric
Stereo (PS) problem relax most of the aforementioned assumptions; in
particular, they ignore specular reflection and global illumination effects. In
this work, we propose a CNN-based approach capable of handling these realistic
assumptions by leveraging recent improvements of deep neural networks for
far-field Photometric Stereo and adapting them to the point-light setup. We
achieve this with an iterative point-light PS procedure for shape estimation
that has two main steps. First, we train a per-pixel CNN to predict surface
normals from reflectance samples. Second, we compute the depth by integrating
the normal field, and use it to iteratively estimate light directions and
attenuation, which are then used to compensate the input images and compute
reflectance samples for the next iteration.
Our approach significantly outperforms the state-of-the-art on the DiLiGenT
real-world dataset. Furthermore, in order to measure the performance of our
approach on near-field point-light source PS data, we introduce LUCES, the
first real-world 'dataset for near-fieLd point light soUrCe photomEtric Stereo'
of 14 objects of different materials, where the effects of point light sources
and perspective viewing are far more significant. Our approach outperforms the
competition on this dataset as well. Data and test code are available at the
project page.
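The abstract's iterative procedure alternates between a per-pixel CNN that maps compensated reflectance samples to surface normals, and an update of the depth, per-pixel light directions, and attenuation obtained by integrating the normal field. Below is a minimal numpy sketch of that outer loop; `predict_normals`, `integrate_normals`, and `backproject` are hypothetical callables standing in for the trained per-pixel network, the normal-integration step, and the perspective camera model, and none of the names are taken from the authors' released code.

```python
# Illustrative sketch of the iterative point-light PS loop described in the abstract.
import numpy as np

def light_dirs_and_attenuation(points, light_pos, phi=1.0):
    # points: (H, W, 3) 3D surface points from the current depth estimate
    # light_pos: (3,) position of one point light; phi: its intensity
    to_light = light_pos - points                       # (H, W, 3)
    dist = np.linalg.norm(to_light, axis=-1, keepdims=True)
    L = to_light / np.clip(dist, 1e-6, None)            # unit per-pixel light directions
    a = phi / np.clip(dist[..., 0] ** 2, 1e-12, None)   # inverse-square attenuation
    return L, a

def iterative_point_light_ps(images, light_positions, predict_normals,
                             integrate_normals, backproject, n_iters=5):
    # images: (K, H, W) observations; light_positions: (K, 3)
    # predict_normals(samples, dirs) -> (H, W, 3): the trained per-pixel CNN
    # integrate_normals(normals)     -> (H, W):    normal-field integration to depth
    # backproject(depth)             -> (H, W, 3): perspective back-projection to 3D
    K, H, W = images.shape
    depth = np.ones((H, W))                             # flat initial depth estimate
    normals = None
    for _ in range(n_iters):
        points = backproject(depth)
        dirs, att = [], []
        for k in range(K):
            L, a = light_dirs_and_attenuation(points, light_positions[k])
            dirs.append(L)
            att.append(a)
        dirs = np.stack(dirs)                           # (K, H, W, 3)
        att = np.stack(att)                             # (K, H, W)
        samples = images / np.clip(att, 1e-12, None)    # compensate attenuation
        normals = predict_normals(samples, dirs)        # per-pixel normal prediction
        depth = integrate_normals(normals)              # depth for the next iteration
    return depth, normals
```

The plain inverse-square term in `light_dirs_and_attenuation` is only the basic point-light falloff; the paper's full attenuation model may additionally account for anisotropic LED dissipation.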
Related papers
- Transientangelo: Few-Viewpoint Surface Reconstruction Using Single-Photon Lidar [8.464054039931245]
Lidar captures 3D scene geometry by emitting pulses of light to a target and recording the speed-of-light time delay of the reflected light.
Conventional lidar systems do not output the raw, captured waveforms of backscattered light.
We develop new regularization strategies that improve robustness to photon noise, enabling accurate surface reconstruction with as few as 10 photons per pixel.
arXiv Detail & Related papers (2024-08-22T08:12:09Z) - Self-calibrating Photometric Stereo by Neural Inverse Rendering [88.67603644930466]
This paper tackles the task of uncalibrated photometric stereo for 3D object reconstruction.
We propose a new method that jointly optimizes object shape, light directions, and light intensities.
Our method demonstrates state-of-the-art accuracy in light estimation and shape recovery on real-world datasets.
arXiv Detail & Related papers (2022-07-16T02:46:15Z) - Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo (MVPS) problem.
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z) - LUCES: A Dataset for Near-Field Point Light Source Photometric Stereo [30.31403197697561]
We introduce LUCES, the first real-world 'dataset for near-fieLd point light soUrCe photomEtric Stereo' of 14 objects of varying materials.
A device with 52 LEDs has been designed to illuminate each object, positioned 10 to 30 centimeters away from the camera.
We evaluate the performance of the latest near-field Photometric Stereo algorithms on the proposed dataset.
arXiv Detail & Related papers (2021-04-27T12:30:42Z) - Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian
Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and improves efficiency.
arXiv Detail & Related papers (2021-03-22T18:06:58Z) - GMLight: Lighting Estimation via Geometric Distribution Approximation [86.95367898017358]
This paper presents a lighting estimation framework that employs a regression network and a generative projector for effective illumination estimation.
We parameterize illumination scenes in terms of the geometric light distribution, light intensity, ambient term, and auxiliary depth, and estimate them as a pure regression task.
With the estimated lighting parameters, the generative projector synthesizes panoramic illumination maps with realistic appearance and frequency.
arXiv Detail & Related papers (2021-02-20T03:31:52Z) - A CNN Based Approach for the Near-Field Photometric Stereo Problem [26.958763133729846]
We propose the first CNN based approach capable of handling realistic assumptions in Photometric Stereo.
We leverage recent improvements of deep neural networks for far-field Photometric Stereo and adapt them to near field setup.
Our method outperforms competing state-of-the-art near-field Photometric Stereo approaches on both synthetic and real experiments.
arXiv Detail & Related papers (2020-09-12T13:28:28Z) - Deep Lighting Environment Map Estimation from Spherical Panoramas [0.0]
We present a data-driven model that estimates an HDR lighting environment map from a single LDR monocular spherical panorama.
We exploit the availability of surface geometry to employ image-based relighting as a data generator and supervision mechanism.
arXiv Detail & Related papers (2020-05-16T14:23:05Z) - Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z) - Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset
for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
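Several of the entries above (the main paper, the LUCES dataset, and the multi-view PS benchmark) revolve around near-field point-light illumination, where lighting varies per surface point rather than being a single global direction. For reference, a minimal sketch of the Lambertian image-formation model commonly assumed in this near-field PS literature follows; it is a summary of the standard model, not a formula quoted from any of the abstracts:

\[
i(\mathbf{x}) = \frac{\phi}{\|\mathbf{s}-\mathbf{x}\|^{2}}\,\rho(\mathbf{x})\,\big(\hat{\mathbf{l}}(\mathbf{x})\cdot\hat{\mathbf{n}}(\mathbf{x})\big),
\qquad
\hat{\mathbf{l}}(\mathbf{x}) = \frac{\mathbf{s}-\mathbf{x}}{\|\mathbf{s}-\mathbf{x}\|},
\]

where \(\mathbf{s}\) is the LED position, \(\phi\) its intensity, \(\rho\) the albedo, and \(\hat{\mathbf{n}}\) the surface normal at point \(\mathbf{x}\). The inverse-square term is the attenuation that the main paper's iterative procedure re-estimates from the current depth at each step; real LEDs also exhibit angular dissipation, which near-field methods often model with an additional multiplicative factor.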