Multi-View Photometric Stereo Revisited
- URL: http://arxiv.org/abs/2210.07670v1
- Date: Fri, 14 Oct 2022 09:46:15 GMT
- Title: Multi-View Photometric Stereo Revisited
- Authors: Berk Kaya, Suryansh Kumar, Carlos Oliveira, Vittorio Ferrari, Luc Van
Gool
- Abstract summary: Multi-view photometric stereo (MVPS) is a preferred method for detailed and precise 3D acquisition of an object from images.
We present a simple, practical approach to MVPS, which works well for isotropic as well as other object material types such as anisotropic and glossy.
The proposed approach shows state-of-the-art results when tested extensively on several benchmark datasets.
- Score: 100.97116470055273
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-view photometric stereo (MVPS) is a preferred method for detailed and
precise 3D acquisition of an object from images. Although popular methods for
MVPS can provide outstanding results, they are often complex to execute and
limited to isotropic material objects. To address such limitations, we present
a simple, practical approach to MVPS, which works well for isotropic as well as
other object material types such as anisotropic and glossy. The proposed
approach in this paper exploits the benefit of uncertainty modeling in a deep
neural network for a reliable fusion of photometric stereo (PS) and multi-view
stereo (MVS) network predictions. However, in contrast to the recently proposed
state-of-the-art, we introduce a neural volume rendering methodology for a
trustworthy fusion of MVS and PS measurements. The advantage of introducing
neural volume rendering is that it helps in the reliable modeling of objects
with diverse material types, where existing MVS methods, PS methods, or both
may fail. Furthermore, it allows us to work on neural 3D shape representation,
which has recently shown outstanding results for many geometric processing
tasks. Our new loss function fits the zero level set of the implicit neural
function using the most certain MVS and PS network predictions, coupled with a
weighted neural volume rendering cost. The proposed approach shows
state-of-the-art results when tested extensively on several benchmark datasets.
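The fusion idea in the abstract can be illustrated with a minimal sketch: a loss that pulls the implicit function's value toward zero at the most certain MVS/PS surface predictions, added to a volume rendering cost. This is an illustrative assumption, not the paper's actual implementation; the function name, confidence threshold, and weighting scheme are hypothetical.

```python
def fused_surface_loss(sdf_values, confidences, render_residuals,
                       conf_threshold=0.5, lam=1.0):
    """Hypothetical sketch of a fusion loss in the spirit of the paper.

    sdf_values:       implicit function (SDF) evaluated at 3D points that the
                      MVS/PS networks predict to lie on the surface, i.e. the
                      zero-level-set targets.
    confidences:      per-point certainty in [0, 1] from the uncertainty-aware
                      networks; only points above `conf_threshold` contribute.
    render_residuals: per-ray photometric residuals from neural volume
                      rendering, averaged uniformly here for simplicity.
    """
    # Zero-level-set term: pull the implicit function toward zero at the
    # most certain MVS/PS predictions, weighted by confidence.
    sdf_term, weight_sum = 0.0, 0.0
    for s, c in zip(sdf_values, confidences):
        if c >= conf_threshold:
            sdf_term += c * abs(s)
            weight_sum += c
    sdf_term = sdf_term / max(weight_sum, 1e-8)

    # Volume rendering term: mean squared photometric error over sampled rays.
    render_term = sum(r * r for r in render_residuals) / max(len(render_residuals), 1)

    return render_term + lam * sdf_term
```

In this toy form, unreliable predictions (low confidence) are simply excluded, which mirrors the abstract's emphasis on fusing only the most certain MVS and PS outputs.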
Related papers
- Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object
Structure via HyperNetworks [53.67497327319569]
We introduce a novel neural rendering technique to solve image-to-3D from a single view.
Our approach employs the signed distance function as the surface representation and incorporates generalizable priors through geometry-encoding volumes and HyperNetworks.
Our experiments show the advantages of our proposed approach with consistent results and rapid generation.
arXiv Detail & Related papers (2023-12-24T08:42:37Z) - S-VolSDF: Sparse Multi-View Stereo Regularization of Neural Implicit
Surfaces [75.30792581941789]
Neural rendering of implicit surfaces performs well in 3D vision applications.
When only sparse input images are available, output quality drops significantly due to the shape-radiance ambiguity problem.
We propose to regularize neural rendering optimization with an MVS solution.
arXiv Detail & Related papers (2023-03-30T21:10:58Z) - MS-PS: A Multi-Scale Network for Photometric Stereo With a New
Comprehensive Training Dataset [0.0]
The photometric stereo (PS) problem consists of reconstructing the 3D surface of an object.
We propose a multi-scale architecture for PS which, combined with a new dataset, yields state-of-the-art results.
arXiv Detail & Related papers (2022-11-25T14:01:54Z) - Uncertainty-Aware Deep Multi-View Photometric Stereo [100.97116470055273]
Photometric stereo (PS) is excellent at recovering high-frequency surface details, whereas multi-view stereo (MVS) can help remove the low-frequency distortion due to PS and retain the global shape.
This paper proposes an approach that can effectively utilize such complementary strengths of PS and MVS.
We estimate per-pixel surface normals and depth using an uncertainty-aware deep-PS network and deep-MVS network, respectively.
arXiv Detail & Related papers (2022-02-26T05:45:52Z) - Implicit Neural Deformation for Multi-View Face Reconstruction [43.88676778013593]
We present a new method for 3D face reconstruction from multi-view RGB images.
Unlike previous methods which are built upon 3D morphable models, our method leverages an implicit representation to encode rich geometric features.
Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and achieves superior face reconstruction results compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-05T07:02:53Z) - Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo (MVPS) problem.
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z) - Point-Based Neural Rendering with Per-View Optimization [5.306819482496464]
We introduce a general approach that is initialized with MVS, but allows further optimization of scene properties in the space of input views.
A key element of our approach is our new differentiable point-based pipeline.
We use these elements together in our neural splatting pipeline, which outperforms all previous methods in both quality and speed in almost all scenes we tested.
arXiv Detail & Related papers (2021-09-06T11:19:31Z) - PaMIR: Parametric Model-Conditioned Implicit Representation for
Image-based Human Reconstruction [67.08350202974434]
We propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function.
We show that our method achieves state-of-the-art performance for image-based 3D human reconstruction in the cases of challenging poses and clothing types.
arXiv Detail & Related papers (2020-07-08T02:26:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.