Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian
Photometric Stereo
- URL: http://arxiv.org/abs/2103.12106v1
- Date: Mon, 22 Mar 2021 18:06:58 GMT
- Title: Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian
Photometric Stereo
- Authors: David Honzátko, Engin Türetken, Pascal Fua, L. Andrea Dunbar
- Abstract summary: We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the network size and makes inference more efficient.
- Score: 61.6260594326246
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of estimating surface shape from observed reflectance
properties remains a challenging task in computer vision. The presence of
global illumination effects such as inter-reflections or cast shadows makes the
task particularly difficult for non-convex real-world surfaces.
State-of-the-art methods for calibrated photometric stereo address these issues
using convolutional neural networks (CNNs) that primarily aim to capture either
the spatial context among adjacent pixels or the photometric one formed by
illuminating a sample from adjacent directions.
In this paper, we bridge these two objectives and introduce an efficient
fully-convolutional architecture that can leverage both spatial and photometric
context simultaneously. In contrast to existing approaches that rely on
standard 2D CNNs and regress directly to surface normals, we argue that using
separable 4D convolutions and regressing to 2D Gaussian heat-maps drastically
reduces the size of the network and makes inference more efficient. Our
experimental results on a real-world photometric stereo benchmark show that the
proposed approach outperforms the existing methods both in efficiency and
accuracy.
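The two ingredients named in the abstract, separable 4D convolutions over the joint spatial and light-direction domain and regression to 2D Gaussian heat-maps, can be sketched roughly as below. This is a minimal PyTorch illustration under assumed conventions (per-pixel observations arranged on a U x V light-direction grid, and a fixed-size heat-map over the projected normal); the names SeparableConv4d and normal_to_heatmap, the grid sizes, and sigma are hypothetical choices, not the authors' implementation.

import torch
import torch.nn as nn

class SeparableConv4d(nn.Module):
    """Factorize a 4D convolution over (x, y, u, v) into two 2D convolutions:
    one over the spatial dims (x, y) and one over the light-direction dims (u, v)."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.spatial = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.photometric = nn.Conv2d(out_ch, out_ch, k, padding=k // 2)

    def forward(self, x):
        # x: (B, C, H, W, U, V) -- H, W are image dims, U, V index the light grid
        B, C, H, W, U, V = x.shape
        # 2D spatial convolution, run independently for every light-grid cell
        xs = x.permute(0, 4, 5, 1, 2, 3).reshape(B * U * V, C, H, W)
        xs = self.spatial(xs)
        Co = xs.shape[1]
        # 2D photometric convolution, run independently for every pixel
        xp = xs.reshape(B, U, V, Co, H, W).permute(0, 4, 5, 3, 1, 2).reshape(B * H * W, Co, U, V)
        xp = self.photometric(xp)
        return xp.reshape(B, H, W, Co, U, V).permute(0, 3, 1, 2, 4, 5)


def normal_to_heatmap(normal, size=32, sigma=1.5):
    """Encode a unit normal as a 2D Gaussian heat-map over its (nx, ny) projection."""
    cx = (normal[0].item() + 1.0) * 0.5 * (size - 1)
    cy = (normal[1].item() + 1.0) * 0.5 * (size - 1)
    ys, xs = torch.meshgrid(torch.arange(size, dtype=torch.float32),
                            torch.arange(size, dtype=torch.float32), indexing="ij")
    return torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))


# usage: a 16x16 image patch observed on an 8x8 light-direction grid
feat = SeparableConv4d(1, 8)(torch.randn(2, 1, 16, 16, 8, 8))
target = normal_to_heatmap(torch.tensor([0.3, -0.2, 0.93]))
print(feat.shape, target.shape)  # (2, 8, 16, 16, 8, 8), (32, 32)

Factorizing the 4D kernel this way keeps the parameter count close to that of two 2D convolutions, which is where the claimed reduction in network size would come from.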
Related papers
- RMAFF-PSN: A Residual Multi-Scale Attention Feature Fusion Photometric Stereo Network [37.759675702107586]
Predicting accurate normal maps of objects from two-dimensional images in regions with complex structure and spatial material variations is challenging.
We propose a method that fuses feature information from different resolution stages and scales of the image.
This approach preserves more physical information, such as texture and geometry of the object in complex regions.
arXiv Detail & Related papers (2024-04-11T14:05:37Z) - A Neural Height-Map Approach for the Binocular Photometric Stereo
Problem [36.404880059833324]
The binocular photometric stereo (PS) framework has the same acquisition speed as single-view PS, yet significantly improves the quality of the estimated geometry.
Our method achieves state-of-the-art performance on the DiLiGenT-MV dataset adapted to the binocular stereo setup, as well as on a new binocular photometric stereo dataset, LUCES-ST.
arXiv Detail & Related papers (2023-11-10T09:45:53Z) - Deep Learning Methods for Calibrated Photometric Stereo and Beyond [86.57469194387264]
Photometric stereo recovers the surface normals of an object from multiple images with varying shading cues.
Deep learning methods have shown strong performance on photometric stereo for non-Lambertian surfaces.
arXiv Detail & Related papers (2022-12-16T11:27:44Z) - PS-Transformer: Learning Sparse Photometric Stereo Network using
Self-Attention Mechanism [4.822598110892846]
Existing deep calibrated photometric stereo networks aggregate observations under different lights based on pre-defined operations such as linear projection and max pooling.
To tackle this issue, this paper presents a deep sparse calibrated photometric stereo network named PS-Transformer, which leverages a learnable self-attention mechanism to properly capture the complex inter-image interactions (a minimal sketch of attention-based aggregation appears after this list).
arXiv Detail & Related papers (2022-11-21T11:58:25Z) - Self-calibrating Photometric Stereo by Neural Inverse Rendering [88.67603644930466]
This paper tackles the task of uncalibrated photometric stereo for 3D object reconstruction.
We propose a new method that jointly optimizes object shape, light directions, and light intensities.
Our method demonstrates state-of-the-art accuracy in light estimation and shape recovery on real-world datasets.
arXiv Detail & Related papers (2022-07-16T02:46:15Z) - Degradation-agnostic Correspondence from Resolution-asymmetric Stereo [96.03964515969652]
We study the problem of stereo matching from a pair of images with different resolutions, e.g., those acquired with a tele-wide camera system.
We propose to impose consistency between the two views in a feature space rather than the image space, which we name feature-metric consistency.
We find that, although a stereo matching network trained with the photometric loss is not optimal, its feature extractor can produce degradation-agnostic and matching-specific features.
arXiv Detail & Related papers (2022-04-04T12:24:34Z) - Learning Inter- and Intra-frame Representations for Non-Lambertian
Photometric Stereo [14.5172791293107]
We build a two-stage Convolutional Neural Network (CNN) architecture to construct inter- and intra-frame representations.
We experimentally investigate numerous network design alternatives for identifying the optimal scheme to deploy inter-frame and intra-frame feature extraction modules.
arXiv Detail & Related papers (2020-12-26T11:22:56Z) - Uncalibrated Neural Inverse Rendering for Photometric Stereo of General
Surfaces [103.08512487830669]
This paper presents an uncalibrated deep neural network framework for the photometric stereo problem.
Existing neural network-based methods require exact light directions, ground-truth surface normals of the object, or both.
We propose an uncalibrated neural inverse rendering approach to this problem.
arXiv Detail & Related papers (2020-12-12T10:33:08Z) - Deep Photometric Stereo for Non-Lambertian Surfaces [89.05501463107673]
We introduce a fully convolutional deep network for calibrated photometric stereo, which we call PS-FCN.
PS-FCN learns the mapping from reflectance observations to surface normals, and is able to handle surfaces with general and unknown isotropic reflectance.
To deal with the uncalibrated scenario where light directions are unknown, we introduce a new convolutional network, named LCNet, to estimate light directions from input images.
arXiv Detail & Related papers (2020-07-26T15:20:53Z)
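For the aggregation issue noted in the PS-Transformer entry above, the following is a minimal, hypothetical sketch of swapping a fixed pooling step for self-attention over per-light features, written with standard PyTorch modules. It illustrates the idea only and is not the PS-Transformer architecture; the feature dimension, number of heads, and the AttentionAggregator name are assumptions.

import torch
import torch.nn as nn

class AttentionAggregator(nn.Module):
    """Aggregate per-light features with self-attention instead of max pooling."""

    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats):
        # feats: (B, L, D) -- one D-dimensional feature per light direction
        mixed, _ = self.attn(feats, feats, feats)    # each observation attends to all others
        return self.norm(mixed + feats).mean(dim=1)  # pool over the L lights -> (B, D)


# usage: 10 sparse light directions, 64-dim per-light features
out = AttentionAggregator(dim=64)(torch.randn(2, 10, 64))
print(out.shape)  # torch.Size([2, 64])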
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.