Deep Photometric Stereo for Non-Lambertian Surfaces
- URL: http://arxiv.org/abs/2007.13145v1
- Date: Sun, 26 Jul 2020 15:20:53 GMT
- Title: Deep Photometric Stereo for Non-Lambertian Surfaces
- Authors: Guanying Chen, Kai Han, Boxin Shi, Yasuyuki Matsushita, Kwan-Yee K. Wong
- Abstract summary: We introduce a fully convolutional deep network for calibrated photometric stereo, which we call PS-FCN.
PS-FCN learns the mapping from reflectance observations to surface normal, and is able to handle surfaces with general and unknown isotropic reflectance.
To deal with the uncalibrated scenario where light directions are unknown, we introduce a new convolutional network, named LCNet, to estimate light directions from input images.
- Score: 89.05501463107673
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the problem of photometric stereo, in both calibrated
and uncalibrated scenarios, for non-Lambertian surfaces based on deep learning.
We first introduce a fully convolutional deep network for calibrated
photometric stereo, which we call PS-FCN. Unlike traditional approaches that
adopt simplified reflectance models to make the problem tractable, our method
directly learns the mapping from reflectance observations to surface normal,
and is able to handle surfaces with general and unknown isotropic reflectance.
At test time, PS-FCN takes an arbitrary number of images and their associated
light directions as input and predicts a surface normal map of the scene in a
fast feed-forward pass. To deal with the uncalibrated scenario where light
directions are unknown, we introduce a new convolutional network, named LCNet,
to estimate light directions from input images. The estimated light directions
and the input images are then fed to PS-FCN to determine the surface normals.
Our method does not require a pre-defined set of light directions and can
handle multiple images in an order-agnostic manner. Thorough evaluation of our
approach on both synthetic and real datasets shows that it outperforms
state-of-the-art methods in both calibrated and uncalibrated scenarios.
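The order-agnostic, variable-input behavior described above can be sketched in a few lines. This is a toy illustration, not PS-FCN itself: the hypothetical `extract_features` stands in for the paper's shared-weight convolutional extractor, and the key point is that max pooling over the image axis makes the fusion invariant to both the number and the order of inputs.

```python
import numpy as np

def extract_features(image, light):
    # Toy stand-in for a shared-weight feature extractor (hypothetical):
    # pair each pixel's observation with its light direction and apply
    # a fixed nonlinearity.
    h, w = image.shape
    lit = np.concatenate([image[..., None],
                          np.broadcast_to(light, (h, w, 3))], axis=-1)
    return np.tanh(lit)  # (h, w, 4) per-image feature map

def fuse(images, lights):
    # Max pooling across the image axis is permutation-invariant, so
    # any number of observations can be fused in any order.
    feats = [extract_features(im, l) for im, l in zip(images, lights)]
    return np.max(np.stack(feats, axis=0), axis=0)

rng = np.random.default_rng(0)
images = [rng.random((4, 4)) for _ in range(3)]
lights = [np.array([0.0, 0.0, 1.0]),
          np.array([0.5, 0.0, 0.866]),
          np.array([0.0, 0.5, 0.866])]

a = fuse(images, lights)
b = fuse(images[::-1], lights[::-1])  # same inputs, reversed order
assert np.allclose(a, b)              # fusion is order-agnostic
```

In the real network a regression head would map the pooled feature map to a per-pixel surface normal; here the pooled `(4, 4, 4)` feature map is the end of the sketch.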
Related papers
- A Neural Height-Map Approach for the Binocular Photometric Stereo
Problem [36.404880059833324]
The binocular photometric stereo (PS) framework has the same acquisition speed as single-view PS, but significantly improves the quality of the estimated geometry.
Our method achieves the state-of-the-art performance on the DiLiGenT-MV dataset adapted to binocular stereo setup as well as a new binocular photometric stereo dataset - LUCES-ST.
arXiv Detail & Related papers (2023-11-10T09:45:53Z) - Deep Learning Methods for Calibrated Photometric Stereo and Beyond [86.57469194387264]
Photometric stereo recovers the surface normals of an object from multiple images with varying shading cues.
Deep learning methods have shown a powerful ability in the context of photometric stereo against non-Lambertian surfaces.
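To make the problem statement concrete, here is the classic calibrated Lambertian baseline that these deep methods generalize: with at least three known light directions, per-pixel intensities form a linear system whose least-squares solution yields both albedo and normal. Non-Lambertian effects (specularities, shadows) break this linear model, which is what motivates the learned approaches above.

```python
import numpy as np

# Classic calibrated Lambertian photometric stereo: intensities obey
# i_j = rho * max(0, l_j . n). With >= 3 known, non-coplanar lights,
# solve L b = i in the least-squares sense for b = rho * n.
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])          # known light directions (3 x 3)
n_true = np.array([0.0, 0.0, 1.0])       # ground-truth surface normal
rho = 0.7                                # albedo
i = rho * np.clip(L @ n_true, 0, None)   # rendered intensities

b, *_ = np.linalg.lstsq(L, i, rcond=None)
albedo = np.linalg.norm(b)
normal = b / albedo

assert np.allclose(normal, n_true)       # normal recovered exactly
assert np.isclose(albedo, rho)           # albedo recovered exactly
```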
arXiv Detail & Related papers (2022-12-16T11:27:44Z) - PS-Transformer: Learning Sparse Photometric Stereo Network using
Self-Attention Mechanism [4.822598110892846]
Existing deep calibrated photometric stereo networks aggregate observations under different lights based on pre-defined operations such as linear projection and max pooling.
To move beyond such fixed aggregation schemes, this paper presents a deep sparse calibrated photometric stereo network named PS-Transformer, which leverages a learnable self-attention mechanism to properly capture complex inter-image interactions.
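A minimal sketch of the aggregation idea, under assumed toy dimensions and random projection matrices (none of this reflects PS-Transformer's actual architecture): self-attention lets each observation weigh every other observation, and a final mean pooling keeps the result permutation-invariant, unlike a fixed linear projection of a stacked input.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(feats, Wq, Wk, Wv):
    # feats: (m, d) per-observation features for one pixel.
    # Scaled dot-product self-attention over the m observations,
    # followed by mean pooling for permutation invariance.
    q, k, v = feats @ Wq, feats @ Wk, feats @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]), axis=-1)
    return (attn @ v).mean(axis=0)

rng = np.random.default_rng(1)
d = 8
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
feats = rng.standard_normal((5, d))   # 5 observations under different lights

out = attention_pool(feats, Wq, Wk, Wv)
perm = rng.permutation(5)
out_p = attention_pool(feats[perm], Wq, Wk, Wv)
assert np.allclose(out, out_p)        # observation order does not matter
```

Unlike plain max pooling, the attention weights depend on all observations jointly, so the pooled feature can reflect inter-image cues such as which lights produced specular highlights.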
arXiv Detail & Related papers (2022-11-21T11:58:25Z) - A CNN Based Approach for the Point-Light Photometric Stereo Problem [26.958763133729846]
We propose a CNN-based approach capable of handling realistic assumptions by leveraging recent improvements of deep neural networks for far-field Photometric Stereo.
Our approach outperforms the state-of-the-art on the DiLiGenT real world dataset.
In order to measure the performance of our approach on near-field point-light source PS data, we introduce LUCES, the first real-world dataset for near-fieLd point light soUrCe photomEtric Stereo.
arXiv Detail & Related papers (2022-10-10T12:57:12Z) - Self-calibrating Photometric Stereo by Neural Inverse Rendering [88.67603644930466]
This paper tackles the task of uncalibrated photometric stereo for 3D object reconstruction.
We propose a new method that jointly optimizes object shape, light directions, and light intensities.
Our method demonstrates state-of-the-art accuracy in light estimation and shape recovery on real-world datasets.
arXiv Detail & Related papers (2022-07-16T02:46:15Z) - Uncertainty-Aware Deep Multi-View Photometric Stereo [100.97116470055273]
Photometric stereo (PS) is excellent at recovering high-frequency surface details, whereas multi-view stereo (MVS) can help remove the low-frequency distortion due to PS and retain the global shape.
This paper proposes an approach that can effectively utilize such complementary strengths of PS and MVS.
We estimate per-pixel surface normals and depth using an uncertainty-aware deep-PS network and deep-MVS network, respectively.
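One generic way to exploit per-pixel uncertainty when merging two sources, shown here as inverse-variance weighting (a hypothetical illustration of the idea, not the paper's actual fusion network):

```python
import numpy as np

def fuse_depth(d_ps, var_ps, d_mvs, var_mvs):
    # Inverse-variance weighting: the more confident source
    # (smaller variance) dominates at each pixel.
    w_ps, w_mvs = 1.0 / var_ps, 1.0 / var_mvs
    return (w_ps * d_ps + w_mvs * d_mvs) / (w_ps + w_mvs)

d_ps  = np.array([1.00, 2.00])    # PS-derived depth: fine detail
d_mvs = np.array([1.10, 2.10])    # MVS depth: coarse but globally consistent
var_ps  = np.array([0.01, 1.00])  # PS confident at pixel 0, not pixel 1
var_mvs = np.array([1.00, 0.01])  # MVS confident at pixel 1, not pixel 0

fused = fuse_depth(d_ps, var_ps, d_mvs, var_mvs)
assert abs(fused[0] - 1.00) < 0.01  # follows PS where PS is confident
assert abs(fused[1] - 2.10) < 0.01  # follows MVS where MVS is confident
```

This captures the complementarity noted above: PS contributes high-frequency detail where it is reliable, while MVS anchors the low-frequency global shape.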
arXiv Detail & Related papers (2022-02-26T05:45:52Z) - Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian
Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z) - Uncalibrated Neural Inverse Rendering for Photometric Stereo of General
Surfaces [103.08512487830669]
This paper presents an uncalibrated deep neural network framework for the photometric stereo problem.
Existing neural network-based methods either require exact light directions or ground-truth surface normals of the object or both.
We propose an uncalibrated neural inverse rendering approach to this problem.
arXiv Detail & Related papers (2020-12-12T10:33:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.