nLMVS-Net: Deep Non-Lambertian Multi-View Stereo
- URL: http://arxiv.org/abs/2207.11876v1
- Date: Mon, 25 Jul 2022 02:20:21 GMT
- Title: nLMVS-Net: Deep Non-Lambertian Multi-View Stereo
- Authors: Kohei Yamashita, Yuto Enyo, Shohei Nobuhara, Ko Nishino
- Abstract summary: We introduce a novel multi-view stereo (MVS) method that can recover not only per-pixel depth but also surface normals.
Our key idea is to formulate MVS as an end-to-end learnable network, which seamlessly integrates radiometric cues to leverage surface normals as view-independent surface features.
- Score: 24.707415091168556
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a novel multi-view stereo (MVS) method that can simultaneously
recover not just per-pixel depth but also surface normals, together with the
reflectance of textureless, complex non-Lambertian surfaces captured under
known but natural illumination. Our key idea is to formulate MVS as an
end-to-end learnable network, which we refer to as nLMVS-Net, that seamlessly
integrates radiometric cues to leverage surface normals as view-independent
surface features for learned cost volume construction and filtering. It first
estimates surface normals as pixel-wise probability densities for each view
with a novel shape-from-shading network. These per-pixel surface normal
densities and the input multi-view images are then input to a novel cost volume
filtering network that learns to recover per-pixel depth and surface normal.
The reflectance is also explicitly estimated by alternating with geometry
reconstruction. Extensive quantitative evaluations on newly established
synthetic and real-world datasets show that nLMVS-Net can robustly and
accurately recover the shape and reflectance of complex objects in natural
settings.
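The abstract describes an alternating scheme: geometry (depth and normals) and reflectance are estimated in turn, each conditioned on the other. The sketch below illustrates that fixed-point structure only; the two update functions are hypothetical stand-ins for nLMVS-Net's cost-volume filtering and radiometric reflectance networks, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((4, 8, 8))  # toy data: 4 views of an 8x8 patch

def estimate_geometry(images, reflectance):
    # Stand-in for the cost-volume filtering step: here, just a
    # reflectance-scaled average over views.
    return images.mean(axis=0) * reflectance

def estimate_reflectance(images, geometry):
    # Stand-in for the radiometric reflectance update: a single
    # scalar albedo fit to the current geometry estimate.
    ratio = images.mean(axis=0) / (geometry + 1e-6)
    return float(np.clip(ratio.mean(), 0.0, 1.0))

reflectance = 0.5  # initial albedo guess
for _ in range(3):  # alternate geometry <-> reflectance updates
    geometry = estimate_geometry(images, reflectance)
    reflectance = estimate_reflectance(images, geometry)
```

The point of the loop is only that each quantity is re-estimated while the other is held fixed, which is the alternation the abstract refers to.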
Related papers
- NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve the state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail & Related papers (2024-06-11T09:53:18Z)
- Uncertainty-Aware Deep Multi-View Photometric Stereo [100.97116470055273]
Photometric stereo (PS) is excellent at recovering high-frequency surface details, whereas multi-view stereo (MVS) can help remove the low-frequency distortion due to PS and retain the global shape.
This paper proposes an approach that can effectively utilize such complementary strengths of PS and MVS.
We estimate per-pixel surface normals and depth using an uncertainty-aware deep-PS network and deep-MVS network, respectively.
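Combining a PS depth estimate with an MVS one under per-estimate uncertainties can be illustrated with inverse-variance weighting. This is a generic fusion sketch under assumed Gaussian uncertainties, not the paper's actual network-based scheme.

```python
import numpy as np

def fuse_depths(d_mvs, var_mvs, d_ps, var_ps):
    # Inverse-variance (precision) weighting: the more uncertain
    # estimate contributes less to the fused depth.
    w_mvs = 1.0 / var_mvs
    w_ps = 1.0 / var_ps
    return (w_mvs * d_mvs + w_ps * d_ps) / (w_mvs + w_ps)

d_mvs = np.full((4, 4), 2.0)  # toy MVS depth map (low-frequency shape)
d_ps = np.full((4, 4), 2.2)   # toy PS-derived depth map (fine detail)
fused = fuse_depths(d_mvs, var_mvs=0.01, d_ps=d_ps, var_ps=0.04)
```

With these toy variances the MVS estimate dominates (weight 100 vs. 25), giving a fused depth of 2.04 everywhere.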
arXiv Detail & Related papers (2022-02-26T05:45:52Z)
- Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo problem (MVPS)
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z)
- Learning Signed Distance Field for Multi-view Surface Reconstruction [24.090786783370195]
We introduce a novel neural surface reconstruction framework that leverages the knowledge of stereo matching and feature consistency.
We apply a signed distance field (SDF) and a surface light field to represent the scene geometry and appearance respectively.
Our method is able to improve the robustness of geometry estimation and support reconstruction of complex scene topologies.
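A signed distance field, as used in this entry, assigns each point its distance to the nearest surface, with sign indicating inside versus outside; the surface is the zero level set. A toy analytic example (a unit sphere, not the paper's learned field):

```python
import numpy as np

def sphere_sdf(points, center, radius):
    # Signed distance to a sphere: ||p - c|| - r.
    # Negative inside, zero on the surface, positive outside.
    return np.linalg.norm(points - center, axis=-1) - radius

center = np.zeros(3)
pts = np.array([[0.0, 0.0, 0.0],   # sphere center -> inside
                [1.0, 0.0, 0.0],   # on the unit sphere -> zero
                [2.0, 0.0, 0.0]])  # outside
d = sphere_sdf(pts, center, 1.0)   # -> [-1.0, 0.0, 1.0]
```

In the learned setting, a network replaces the analytic formula and the geometry is extracted from its zero level set.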
arXiv Detail & Related papers (2021-08-23T06:23:50Z)
- UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction [61.17219252031391]
We present a novel method for reconstructing surfaces from multi-view images using Neural implicit 3D representations.
Our key insight is that implicit surface models and radiance fields can be formulated in a unified way, enabling both surface and volume rendering.
Our experiments demonstrate that we outperform NeRF in terms of reconstruction quality while performing on par with IDR without requiring masks.
arXiv Detail & Related papers (2021-04-20T15:59:38Z)
- One Ring to Rule Them All: a simple solution to multi-view 3D-Reconstruction of shapes with unknown BRDF via a small Recurrent ResNet [96.11203962525443]
This paper proposes a simple method which solves an open problem of multi-view 3D-reconstruction for objects with unknown surface materials.
The object can have arbitrary (e.g. non-Lambertian), spatially-varying (or everywhere different) surface reflectances (svBRDF)
Our solution enables novel-view synthesis, relighting, material relighting, and shape exchange without additional coding effort.
arXiv Detail & Related papers (2021-04-11T13:39:31Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.