A CNN Based Approach for the Near-Field Photometric Stereo Problem
- URL: http://arxiv.org/abs/2009.05792v1
- Date: Sat, 12 Sep 2020 13:28:28 GMT
- Title: A CNN Based Approach for the Near-Field Photometric Stereo Problem
- Authors: Fotios Logothetis, Ignas Budvytis, Roberto Mecca, Roberto Cipolla
- Abstract summary: We propose the first CNN-based approach capable of handling realistic assumptions in Photometric Stereo.
We leverage recent improvements of deep neural networks for far-field Photometric Stereo and adapt them to the near-field setup.
Our method outperforms competing state-of-the-art near-field Photometric Stereo approaches on both synthetic and real experiments.
- Score: 26.958763133729846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstructing the 3D shape of an object using several images under different
light sources is a very challenging task, especially when realistic assumptions
such as light propagation and attenuation, perspective viewing geometry and
specular light reflection are considered. Many works tackling the Photometric
Stereo (PS) problem relax most of the aforementioned assumptions; in
particular, they ignore specular reflection and global illumination effects. In
this work, we propose the first CNN-based approach capable of handling these
realistic assumptions in Photometric Stereo. We leverage recent improvements of
deep neural networks for far-field Photometric Stereo and adapt them to the
near-field setup. We achieve this with an iterative shape-estimation procedure
that has two main steps. First, we train a per-pixel CNN to predict surface
normals from reflectance samples. Second, we compute the depth by integrating
the normal field and use it to re-estimate the per-pixel light directions and
attenuation, which are in turn used to compensate the input images and produce
the reflectance samples for the next iteration. To the best of our knowledge,
this is the first near-field framework able to accurately predict the 3D shape
of highly specular objects. Our method outperforms competing
state-of-the-art near-field Photometric Stereo approaches on both synthetic and
real experiments.
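To make the iterative procedure above concrete, the following is a minimal, illustrative Python/NumPy sketch of the loop, not the authors' code. The names `point_light_field`, `backproject`, `predict_normals`, and `integrate_normals` are hypothetical stand-ins (the learned per-pixel CNN and the normal-integration routine are not reproduced), and the attenuation model shown (inverse-square falloff with an optional angular dissipation exponent `mu`) is one common assumption for a near-field point source.

```python
import numpy as np


def point_light_field(points, light_pos, mu=0.0, principal_dir=np.array([0.0, 0.0, 1.0])):
    """Per-pixel light direction and attenuation for a hypothetical point source:
    inverse-square falloff with an optional angular dissipation exponent mu."""
    to_light = light_pos[None, :] - points                  # (P, 3): surface point -> light
    dist = np.linalg.norm(to_light, axis=1, keepdims=True)  # (P, 1)
    l_dir = to_light / dist                                  # unit light directions
    # Angular term: alignment of the light-to-surface ray with the LED's principal
    # direction, raised to mu (mu = 0 recovers an isotropic point light).
    cos_term = np.clip((-to_light / dist) @ principal_dir, 1e-6, None)
    attenuation = (cos_term ** mu) / (dist[:, 0] ** 2)       # (P,)
    return l_dir, attenuation


def iterative_near_field_ps(images, light_positions, backproject,
                            predict_normals, integrate_normals,
                            init_depth, n_iters=5):
    """Two-step loop from the abstract (illustrative stand-ins, not the authors' code):
    1) a per-pixel CNN predicts normals from reflectance samples,
    2) depth is integrated from the normal field, lighting is re-estimated, and the
       images are compensated to build the samples for the next iteration."""
    depth = init_depth
    normals = None
    for _ in range(n_iters):
        points = backproject(depth)                          # (P, 3) points under an assumed camera model
        per_light = []
        for img, light_pos in zip(images, light_positions):  # img: (P,) per-pixel intensities
            l_dir, att = point_light_field(points, np.asarray(light_pos))
            # Divide out the estimated attenuation so each sample looks far-field,
            # then keep the (spatially varying) light direction alongside it.
            per_light.append(np.concatenate([img[:, None] / att[:, None], l_dir], axis=1))
        reflectance_samples = np.stack(per_light, axis=1)     # (P, num_lights, 4)
        normals = predict_normals(reflectance_samples)        # per-pixel CNN inference
        depth = integrate_normals(normals, depth)             # update depth from the normal field
    return depth, normals
```

Concatenating the estimated light direction with the attenuation-corrected intensity is one plausible way to reuse a far-field per-pixel network in the near-field setting; the paper's exact sample encoding may differ.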
Related papers
- Deep Learning Methods for Calibrated Photometric Stereo and Beyond [86.57469194387264]
Photometric stereo recovers the surface normals of an object from multiple images with varying shading cues.
Deep learning methods have shown a powerful ability in the context of photometric stereo against non-Lambertian surfaces.
arXiv Detail & Related papers (2022-12-16T11:27:44Z)
- A CNN Based Approach for the Point-Light Photometric Stereo Problem [26.958763133729846]
We propose a CNN-based approach capable of handling realistic assumptions by leveraging recent improvements of deep neural networks for far-field Photometric Stereo.
Our approach outperforms the state-of-the-art on the DiLiGenT real world dataset.
In order to measure the performance of our approach on near-field point-light source PS data, we introduce LUCES, the first real-world 'dataset for near-fieLd point light soUrCe photomEtric Stereo'.
arXiv Detail & Related papers (2022-10-10T12:57:12Z)
- Self-calibrating Photometric Stereo by Neural Inverse Rendering [88.67603644930466]
This paper tackles the task of uncalibrated photometric stereo for 3D object reconstruction.
We propose a new method that jointly optimizes object shape, light directions, and light intensities.
Our method demonstrates state-of-the-art accuracy in light estimation and shape recovery on real-world datasets.
arXiv Detail & Related papers (2022-07-16T02:46:15Z)
- DeepPS2: Revisiting Photometric Stereo Using Two Differently Illuminated Images [27.58399208954106]
Photometric stereo is the problem of recovering 3D surface normals using images of an object captured under different lightings.
We propose an inverse rendering-based deep learning framework, called DeepPS2, that jointly performs surface normal, albedo, lighting estimation, and image relighting.
arXiv Detail & Related papers (2022-07-05T13:14:10Z)
- Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo (MVPS) problem.
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z)
- LUCES: A Dataset for Near-Field Point Light Source Photometric Stereo [30.31403197697561]
We introduce LUCES, the first real-world 'dataset for near-fieLd point light soUrCe photomEtric Stereo', consisting of 14 objects of varying materials.
A device with 52 LEDs has been designed to light each object, positioned 10 to 30 centimeters away from the camera.
We evaluate the performance of the latest near-field Photometric Stereo algorithms on the proposed dataset.
arXiv Detail & Related papers (2021-04-27T12:30:42Z)
- Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z)
- Uncalibrated Neural Inverse Rendering for Photometric Stereo of General Surfaces [103.08512487830669]
This paper presents an uncalibrated deep neural network framework for the photometric stereo problem.
Existing neural network-based methods either require exact light directions or ground-truth surface normals of the object or both.
We propose an uncalibrated neural inverse rendering approach to this problem.
arXiv Detail & Related papers (2020-12-12T10:33:08Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)