Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo
- URL: http://arxiv.org/abs/2110.05594v1
- Date: Mon, 11 Oct 2021 20:20:03 GMT
- Title: Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo
- Authors: Berk Kaya, Suryansh Kumar, Francesco Sarno, Vittorio Ferrari, Luc Van Gool
- Abstract summary: We present a modern solution to the multi-view photometric stereo problem (MVPS).
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
- Score: 103.08512487830669
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a modern solution to the multi-view photometric stereo problem
(MVPS). Our work suitably exploits the image formation model in an MVPS
experimental setup to recover the dense 3D reconstruction of an object from
images. We procure the surface orientation using a photometric stereo (PS)
image formation model and blend it with a multi-view neural radiance field
representation to recover the object's surface geometry. Contrary to previous
multi-stage frameworks for MVPS, where position, iso-depth contours, or
orientation measurements are estimated independently and fused later, our
method is simple to implement and realize. Our method performs
neural rendering of multi-view images while utilizing surface normals estimated
by a deep photometric stereo network. We render the MVPS images by considering
the object's surface normals for each 3D sample point along the viewing
direction rather than explicitly using the density gradient in the volume space
via 3D occupancy information. We optimize the proposed neural radiance field
representation for the MVPS setup efficiently using a fully connected deep
network to recover the 3D geometry of an object. Extensive evaluation on the
DiLiGenT-MV benchmark dataset shows that our method performs better than the
approaches that perform only PS or only multi-view stereo (MVS) and provides
comparable results against the state-of-the-art multi-stage fusion methods.
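The rendering idea the abstract describes (shade each 3D sample along the viewing ray using an externally estimated surface normal, then alpha-composite the samples, rather than deriving normals from the density gradient) can be sketched as a toy example. This is an illustrative sketch only, not the paper's implementation; the function names, the Lambertian shading choice, and all numeric values are assumptions:

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Standard NeRF-style alpha compositing along one ray.

    sigmas: (N,) volume densities at the sample points
    colors: (N, 3) radiance at the sample points
    deltas: (N,) distances between consecutive samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)  # per-sample opacity
    # transmittance: probability the ray reaches each sample unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)

def shade_with_normals(normals, albedo, light_dir):
    """Lambertian shading from externally supplied normals (e.g. the output
    of a deep photometric stereo network), instead of normals derived from
    the density gradient."""
    ndotl = np.clip(normals @ light_dir, 0.0, None)  # (N,) foreshortening
    return albedo * ndotl[:, None]                   # (N, 3) radiance

# Toy ray: 4 samples whose normals face the light; density rises sharply
# where the ray crosses the (hypothetical) surface.
normals = np.array([[0.0, 0.0, 1.0]] * 4)
sigmas = np.array([0.0, 0.1, 5.0, 5.0])
deltas = np.full(4, 0.25)
colors = shade_with_normals(normals,
                            albedo=np.array([0.8, 0.6, 0.4]),
                            light_dir=np.array([0.0, 0.0, 1.0]))
pixel = composite_ray(sigmas, colors, deltas)
```

In this sketch the per-sample color is a function of the supplied normal and light, so gradients from a photometric loss would flow back into the normal estimate; the real method optimizes a fully connected network over such a representation.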
Related papers
- NPLMV-PS: Neural Point-Light Multi-View Photometric Stereo [32.39157133181186]
We present a novel multi-view photometric stereo (MVPS) method.
Our work differs from the state-of-the-art multi-view PS-NeRF or Supernormal in that we explicitly leverage per-pixel intensity renderings.
Our method is among the first (along with Supernormal) to outperform the classical MVPS approach proposed by the DiLiGenT-MV benchmark.
arXiv Detail & Related papers (2024-05-20T14:26:07Z)
- RNb-NeuS: Reflectance and Normal-based Multi-View 3D Reconstruction [3.1820300989695833]
This paper introduces a versatile paradigm for integrating multi-view reflectance and normal maps acquired through photometric stereo.
Our approach employs a pixel-wise joint re-parameterization of reflectance and normal, considering them as a vector of radiances rendered under simulated, varying illumination.
It significantly improves the detailed 3D reconstruction of areas with high curvature or low visibility.
arXiv Detail & Related papers (2023-12-02T19:49:27Z)
- A Neural Height-Map Approach for the Binocular Photometric Stereo Problem [36.404880059833324]
The binocular photometric stereo (PS) framework has the same acquisition speed as single-view PS, yet significantly improves the quality of the estimated geometry.
Our method achieves state-of-the-art performance on the DiLiGenT-MV dataset adapted to the binocular stereo setup, as well as on a new binocular photometric stereo dataset, LUCES-ST.
arXiv Detail & Related papers (2023-11-10T09:45:53Z)
- Deep Learning Methods for Calibrated Photometric Stereo and Beyond [86.57469194387264]
Photometric stereo recovers the surface normals of an object from multiple images with varying shading cues.
Deep learning methods have shown strong performance on photometric stereo for non-Lambertian surfaces.
arXiv Detail & Related papers (2022-12-16T11:27:44Z)
- Multi-View Photometric Stereo Revisited [100.97116470055273]
Multi-view photometric stereo (MVPS) is a preferred method for detailed and precise 3D acquisition of an object from images.
We present a simple, practical approach to MVPS that works well for isotropic as well as other material types, such as anisotropic and glossy surfaces.
The proposed approach shows state-of-the-art results when tested extensively on several benchmark datasets.
arXiv Detail & Related papers (2022-10-14T09:46:15Z)
- Uncertainty-Aware Deep Multi-View Photometric Stereo [100.97116470055273]
Photometric stereo (PS) is excellent at recovering high-frequency surface details, whereas multi-view stereo (MVS) can help remove the low-frequency distortion common in PS and retain the global shape.
This paper proposes an approach that can effectively utilize such complementary strengths of PS and MVS.
We estimate per-pixel surface normals and depth using an uncertainty-aware deep-PS network and deep-MVS network, respectively.
arXiv Detail & Related papers (2022-02-26T05:45:52Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
- Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
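The last entry above assumes nearby point light sources rather than distant directional lights, so the light direction and intensity falloff vary with each surface point. A minimal sketch of that near point-light Lambertian image formation model follows; it is illustrative only, and the function names and numeric values are assumptions, not taken from the paper:

```python
import numpy as np

def near_light_intensity(point, normal, albedo, light_pos, light_power=1.0):
    """Near point-light Lambertian image formation: unlike the distant-light
    model, the light direction and the inverse-square falloff are computed
    per 3D surface point."""
    to_light = light_pos - point
    d2 = float(np.dot(to_light, to_light))   # squared distance to the source
    l = to_light / np.sqrt(d2)               # unit direction toward the light
    ndotl = max(float(np.dot(normal, l)), 0.0)  # Lambertian foreshortening
    return light_power * albedo * ndotl / d2    # inverse-square attenuation

# The same surface point observed under two light positions: doubling the
# distance to the source quarters the observed intensity.
p = np.array([0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
i_near = near_light_intensity(p, n, albedo=0.9, light_pos=np.array([0.0, 0.0, 1.0]))
i_far = near_light_intensity(p, n, albedo=0.9, light_pos=np.array([0.0, 0.0, 2.0]))
```

Per-point light directions are what makes such methods suitable for perspective cameras with nearby sources, at the cost of needing calibrated light positions.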
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.