Uncertainty-Aware Deep Multi-View Photometric Stereo
- URL: http://arxiv.org/abs/2202.13071v1
- Date: Sat, 26 Feb 2022 05:45:52 GMT
- Title: Uncertainty-Aware Deep Multi-View Photometric Stereo
- Authors: Berk Kaya, Suryansh Kumar, Carlos Oliveira, Vittorio Ferrari, Luc Van
Gool
- Abstract summary: Photometric stereo (PS) is excellent at recovering high-frequency surface details, whereas multi-view stereo (MVS) can help remove the low-frequency distortion due to PS and retain the global shape.
This paper proposes an approach that can effectively utilize such complementary strengths of PS and MVS.
We estimate per-pixel surface normals and depth using an uncertainty-aware deep-PS network and deep-MVS network, respectively.
- Score: 100.97116470055273
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a simple and effective solution to the problem of
multi-view photometric stereo (MVPS). It is well-known that photometric stereo
(PS) is excellent at recovering high-frequency surface details, whereas
multi-view stereo (MVS) can help remove the low-frequency distortion due to PS
and retain the global geometry of the shape. This paper proposes an approach
that can effectively utilize such complementary strengths of PS and MVS. Our
key idea is to suitably combine them while taking into account the per-pixel
uncertainty of their estimates. To this end, we estimate per-pixel surface
normals and depth using an uncertainty-aware deep-PS network and deep-MVS
network, respectively. Uncertainty modeling helps select reliable surface
normal and depth estimates at each pixel, which then act as true
representatives of the dense surface geometry. At each pixel, our approach
either selects or discards the deep-PS and deep-MVS network predictions depending on
the prediction uncertainty measure. For dense, detailed, and precise inference
of the object's surface profile, we propose to learn the implicit neural shape
representation via a multilayer perceptron (MLP). Our approach encourages the
MLP to converge to a natural zero-level set surface using the confident
prediction from deep-PS and deep-MVS networks, providing superior dense surface
reconstruction. Extensive experiments on the DiLiGenT-MV benchmark dataset show
that our method outperforms most of the existing approaches.
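The abstract stays at a high level, so the following is a minimal sketch of the two ideas it outlines: gating per-pixel deep-PS normals and deep-MVS depths by their predicted uncertainty, and fitting an MLP whose zero-level set represents the surface. The uncertainty thresholds, network sizes, and loss weights below are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn


class ImplicitSurface(nn.Module):
    """MLP f(x): R^3 -> R whose zero-level set {x : f(x) = 0} models the surface."""

    def __init__(self, hidden=256, num_layers=4):
        super().__init__()
        dims = [3] + [hidden] * num_layers + [1]
        layers = []
        for i in range(len(dims) - 1):
            layers.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                layers.append(nn.Softplus(beta=100))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)


def confident(sigma, tau):
    """Per-pixel gate: keep a network prediction only if its uncertainty is below tau."""
    return sigma < tau


def surface_loss(f, points, normals, eikonal_weight=0.1):
    """Fit the MLP to confident estimates: `points` are 3D points back-projected
    from low-uncertainty MVS depths, `normals` are the low-uncertainty PS normals
    at the same pixels (both shaped (M, 3))."""
    points = points.clone().requires_grad_(True)
    sdf = f(points)
    grad = torch.autograd.grad(sdf.sum(), points, create_graph=True)[0]
    on_surface = sdf.abs().mean()                                   # f(x) = 0 at surface samples
    align = (1 - torch.cosine_similarity(grad, normals, dim=-1)).mean()
    eikonal = ((grad.norm(dim=-1) - 1) ** 2).mean()                 # keep |grad f| close to 1
    return on_surface + align + eikonal_weight * eikonal
```

In this reading, the uncertainty gate decides which pixels are allowed to supervise the implicit surface at all, the zero-level-set and gradient-alignment terms pull the MLP toward the confident depth and normal estimates, and the Eikonal term keeps the field close to a signed distance function; the paper's actual uncertainty modeling and loss design may differ.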
Related papers
- A Neural Height-Map Approach for the Binocular Photometric Stereo
Problem [36.404880059833324]
The binocular photometric stereo (PS) framework has the same acquisition speed as single-view PS, but significantly improves the quality of the estimated geometry.
Our method achieves state-of-the-art performance on the DiLiGenT-MV dataset adapted to a binocular stereo setup, as well as on a new binocular photometric stereo dataset, LUCES-ST.
arXiv Detail & Related papers (2023-11-10T09:45:53Z) - Single Image Depth Prediction Made Better: A Multivariate Gaussian Take [163.14849753700682]
We introduce an approach that performs continuous modeling of per-pixel depth.
Our method (named MG) ranks among the top entries on the KITTI depth-prediction benchmark leaderboard.
arXiv Detail & Related papers (2023-03-31T16:01:03Z) - Deep Learning Methods for Calibrated Photometric Stereo and Beyond [86.57469194387264]
Photometric stereo recovers the surface normals of an object from multiple images with varying shading cues.
Deep learning methods have shown strong performance on photometric stereo for non-Lambertian surfaces.
arXiv Detail & Related papers (2022-12-16T11:27:44Z) - Multi-View Photometric Stereo Revisited [100.97116470055273]
Multi-view photometric stereo (MVPS) is a preferred method for detailed and precise 3D acquisition of an object from images.
We present a simple, practical approach to MVPS, which works well for isotropic as well as other object material types such as anisotropic and glossy.
The proposed approach shows state-of-the-art results when tested extensively on several benchmark datasets.
arXiv Detail & Related papers (2022-10-14T09:46:15Z) - nLMVS-Net: Deep Non-Lambertian Multi-View Stereo [24.707415091168556]
We introduce a novel multi-view stereo (MVS) method that can simultaneously recover per-pixel depth but also surface normals.
Our key idea is to formulate MVS as an end-to-end learnable network, which seamlessly integrates radiometric cues to leverage surface normals as view-independent surface features.
arXiv Detail & Related papers (2022-07-25T02:20:21Z) - Multi-View Depth Estimation by Fusing Single-View Depth Probability with
Multi-View Geometry [25.003116148843525]
We propose MaGNet, a framework for fusing single-view depth probability with multi-view geometry.
MaGNet achieves state-of-the-art performance on ScanNet, 7-Scenes and KITTI.
arXiv Detail & Related papers (2021-12-15T14:56:53Z) - Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo problem (MVPS)
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z) - Deep Photometric Stereo for Non-Lambertian Surfaces [89.05501463107673]
We introduce a fully convolutional deep network for calibrated photometric stereo, which we call PS-FCN.
PS-FCN learns the mapping from reflectance observations to surface normals, and is able to handle surfaces with general and unknown isotropic reflectance (see the sketch after this list).
To deal with the uncalibrated scenario where light directions are unknown, we introduce a new convolutional network, named LCNet, to estimate light directions from input images.
arXiv Detail & Related papers (2020-07-26T15:20:53Z)