A Neural Height-Map Approach for the Binocular Photometric Stereo Problem
- URL: http://arxiv.org/abs/2311.05958v1
- Date: Fri, 10 Nov 2023 09:45:53 GMT
- Title: A Neural Height-Map Approach for the Binocular Photometric Stereo Problem
- Authors: Fotios Logothetis, Ignas Budvytis, Roberto Cipolla
- Abstract summary: a binocular photometric stereo (PS) framework with the same acquisition speed as single-view PS that significantly improves the quality of the estimated geometry.
Our method achieves state-of-the-art performance on the DiLiGenT-MV dataset adapted to a binocular stereo setup, as well as on a new binocular photometric stereo dataset, LUCES-ST.
- Score: 36.404880059833324
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we propose a novel, highly practical binocular photometric
stereo (PS) framework, which has the same acquisition speed as single-view PS
but significantly improves the quality of the estimated geometry.
As in recent neural multi-view shape estimation frameworks such as NeRF and
SIREN, and in inverse-graphics approaches to multi-view photometric stereo
(e.g. PS-NeRF), we formulate the shape estimation task as learning a
differentiable surface and texture representation, by minimising the
discrepancy between surface normals estimated from multiple varying-light
images for the two views, as well as the discrepancy between rendered surface
intensity and the observed images. Our method differs from typical multi-view
shape estimation approaches in two key ways.
First, our surface is represented not as a volume but as a neural height map,
where the heights of points on the surface are computed by a deep neural
network. Second, instead of predicting an average intensity as PS-NeRF does,
or introducing Lambertian material assumptions as in Guo et al., we use a
learnt BRDF and perform near-field, per-point intensity rendering.
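The neural height-map idea can be illustrated with a minimal sketch, not the authors' code: a tiny SIREN-style MLP (random, untrained weights here, with hypothetical layer sizes) maps pixel coordinates (x, y) to a height z, and surface normals follow from the height-map gradient, approximated below by central finite differences.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny MLP: 2 -> 64 -> 64 -> 1 with sine activations (SIREN-style).
W1, b1 = rng.normal(size=(64, 2)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 64)) / 8.0, np.zeros(64)
W3, b3 = rng.normal(size=(1, 64)) / 8.0, np.zeros(1)

def height(xy):
    """Predict height z at pixel coordinates xy (array of shape (2,))."""
    h = np.sin(30.0 * (W1 @ xy + b1))   # omega_0 = 30 as in SIREN
    h = np.sin(W2 @ h + b2)
    return (W3 @ h + b3)[0]

def normal(xy, eps=1e-4):
    """Unit surface normal from height-map gradients (central differences)."""
    dzdx = (height(xy + np.array([eps, 0.0])) - height(xy - np.array([eps, 0.0]))) / (2 * eps)
    dzdy = (height(xy + np.array([0.0, eps])) - height(xy - np.array([0.0, eps]))) / (2 * eps)
    n = np.array([-dzdx, -dzdy, 1.0])   # normal of the graph z = f(x, y)
    return n / np.linalg.norm(n)

n = normal(np.array([0.25, 0.5]))
```

In practice the gradient would be obtained by automatic differentiation of the network rather than finite differences, so the normal is exact and differentiable with respect to the network weights.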
Our method achieves state-of-the-art performance on the DiLiGenT-MV dataset
adapted to a binocular stereo setup, as well as on a new binocular photometric
stereo dataset, LUCES-ST.
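Near-field, per-point rendering can be sketched as follows; this example substitutes a simple Lambertian BRDF for the paper's learnt BRDF, purely to show what "near-field" means: the light direction and inverse-square attenuation are computed per surface point from the point-light position, rather than assuming a single distant directional light.

```python
import numpy as np

def render_point(p, n, albedo, light_pos, light_intensity):
    """Near-field Lambertian shading at surface point p with unit normal n.

    Both the direction to the light and the inverse-square falloff depend
    on p, which is what distinguishes near-field from distant-light models.
    """
    to_light = light_pos - p
    d2 = np.dot(to_light, to_light)        # squared distance to the light
    l = to_light / np.sqrt(d2)             # unit direction towards the light
    shading = max(np.dot(n, l), 0.0)       # clamped cosine (Lambertian) term
    return albedo * light_intensity * shading / d2

# A point facing a light 2 units above it: cosine term is 1, falloff is 1/4.
I = render_point(np.array([0.0, 0.0, 0.0]),
                 np.array([0.0, 0.0, 1.0]),
                 albedo=0.8,
                 light_pos=np.array([0.0, 0.0, 2.0]),
                 light_intensity=10.0)
# I = 0.8 * 10.0 * 1.0 / 4.0 = 2.0
```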
Related papers
- NPLMV-PS: Neural Point-Light Multi-View Photometric Stereo [32.39157133181186]
We present a novel multi-view photometric stereo (MVPS) method.
Our work differs from the state-of-the-art multi-view PS-NeRF or Supernormal in that we explicitly leverage per-pixel intensity renderings.
Our method is among the first (along with Supernormal) to outperform the classical MVPS approach proposed by the DiLiGenT-MV benchmark.
arXiv Detail & Related papers (2024-05-20T14:26:07Z) - Deep Learning Methods for Calibrated Photometric Stereo and Beyond [86.57469194387264]
Photometric stereo recovers the surface normals of an object from multiple images with varying shading cues.
Deep learning methods have shown a powerful ability in the context of photometric stereo against non-Lambertian surfaces.
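For context, the classical calibrated Lambertian baseline that deep methods improve upon is a per-pixel least-squares solve: with k >= 3 known distant light directions stacked in a matrix L, the observed intensities satisfy i = L (rho * n), so the scaled normal is recovered by least squares and the albedo rho is its norm. A minimal sketch with simulated observations:

```python
import numpy as np

# Three known, unit-norm distant light directions (rows of L), chosen for illustration.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])
L /= np.linalg.norm(L, axis=1, keepdims=True)

# Simulate a pixel with albedo 0.5 and normal pointing at the camera.
n_true = np.array([0.0, 0.0, 1.0])
rho_true = 0.5
i = L @ (rho_true * n_true)                  # noiseless intensity observations

# Solve i = L g for g = rho * n in the least-squares sense.
g, *_ = np.linalg.lstsq(L, i, rcond=None)
rho = np.linalg.norm(g)                      # recovered albedo
n = g / rho                                  # recovered unit normal
```

Non-Lambertian surfaces violate the linear model above, which is why learned reflectance models are needed in the general case.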
arXiv Detail & Related papers (2022-12-16T11:27:44Z) - Uncertainty-Aware Deep Multi-View Photometric Stereo [100.97116470055273]
Photometric stereo (PS) is excellent at recovering high-frequency surface details, whereas multi-view stereo (MVS) can help remove the low-frequency distortion due to PS and retain the global shape.
This paper proposes an approach that can effectively utilize such complementary strengths of PS and MVS.
We estimate per-pixel surface normals and depth using an uncertainty-aware deep-PS network and deep-MVS network, respectively.
arXiv Detail & Related papers (2022-02-26T05:45:52Z) - Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo (MVPS) problem.
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z) - Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z) - Learning Inter- and Intra-frame Representations for Non-Lambertian Photometric Stereo [14.5172791293107]
We build a two-stage Convolutional Neural Network (CNN) architecture to construct inter- and intra-frame representations.
We experimentally investigate numerous network design alternatives for identifying the optimal scheme to deploy inter-frame and intra-frame feature extraction modules.
arXiv Detail & Related papers (2020-12-26T11:22:56Z) - Uncalibrated Neural Inverse Rendering for Photometric Stereo of General Surfaces [103.08512487830669]
This paper presents an uncalibrated deep neural network framework for the photometric stereo problem.
Existing neural network-based methods either require exact light directions or ground-truth surface normals of the object or both.
We propose an uncalibrated neural inverse rendering approach to this problem.
arXiv Detail & Related papers (2020-12-12T10:33:08Z) - Deep Photometric Stereo for Non-Lambertian Surfaces [89.05501463107673]
We introduce a fully convolutional deep network for calibrated photometric stereo, which we call PS-FCN.
PS-FCN learns the mapping from reflectance observations to surface normal, and is able to handle surfaces with general and unknown isotropic reflectance.
To deal with the uncalibrated scenario where light directions are unknown, we introduce a new convolutional network, named LCNet, to estimate light directions from input images.
arXiv Detail & Related papers (2020-07-26T15:20:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.