DeepShaRM: Multi-View Shape and Reflectance Map Recovery Under Unknown Lighting
- URL: http://arxiv.org/abs/2310.17632v1
- Date: Thu, 26 Oct 2023 17:50:10 GMT
- Title: DeepShaRM: Multi-View Shape and Reflectance Map Recovery Under Unknown Lighting
- Authors: Kohei Yamashita, Shohei Nobuhara, Ko Nishino
- Abstract summary: We derive a novel multi-view method, DeepShaRM, that achieves
state-of-the-art accuracy on geometry reconstruction of textureless, non-Lambertian
objects under unknown natural illumination.
We introduce a novel deep reflectance map estimation network that recovers camera-view
reflectance maps from the surface normals of the current geometry estimate and the
input multi-view images.
A deep shape-from-shading network then updates the geometry estimate, expressed as a
signed distance function, using the recovered reflectance maps.
- Score: 35.18426818323455
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Geometry reconstruction of textureless, non-Lambertian objects under unknown
natural illumination (i.e., in the wild) remains challenging as correspondences
cannot be established and the reflectance cannot be expressed in simple
analytical forms. We derive a novel multi-view method, DeepShaRM, that achieves
state-of-the-art accuracy on this challenging task. Unlike past methods that
formulate this as inverse-rendering, i.e., estimation of reflectance,
illumination, and geometry from images, our key idea is to realize that
reflectance and illumination need not be disentangled and instead estimated as
a compound reflectance map. We introduce a novel deep reflectance map
estimation network that recovers the camera-view reflectance maps from the
surface normals of the current geometry estimate and the input multi-view
images. The network also explicitly estimates per-pixel confidence scores to
handle global light transport effects. A deep shape-from-shading network then
updates the geometry estimate expressed with a signed distance function using
the recovered reflectance maps. By alternating between these two and, most
importantly, by bypassing the ill-posed problem of reflectance and illumination
decomposition, the method accurately recovers object geometry in these
challenging settings. Extensive experiments on both synthetic and real-world
data clearly demonstrate its state-of-the-art accuracy.
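
The compound reflectance-map idea admits a compact illustration: for distant illumination and in the absence of global light transport effects, the shading of a surface point depends only on its camera-space normal, so appearance can be looked up from a single map R(n) without ever separating reflectance from illumination. The sketch below is a minimal, hypothetical Python example, not the authors' code; the MatCap-style (nx, ny) parameterization and all names are assumptions for illustration only.

```python
# Minimal sketch (assumption, not the authors' implementation) of the compound
# reflectance-map idea: shading is a lookup R(n) indexed by the camera-space normal.
import numpy as np

def shade_from_reflectance_map(normals, reflectance_map):
    """Render per-pixel shading by indexing a reflectance map with camera-space normals.

    normals:         (H, W, 3) unit normals in camera coordinates (z toward camera).
    reflectance_map: (R, R, 3) image over the visible normal hemisphere,
                     parameterized by (nx, ny) in [-1, 1]^2 (MatCap-style, an assumption).
    """
    res = reflectance_map.shape[0]
    # Map (nx, ny) in [-1, 1] to nearest pixel indices in the reflectance map.
    u = np.clip(((normals[..., 0] + 1.0) * 0.5 * (res - 1)).round().astype(int), 0, res - 1)
    v = np.clip(((normals[..., 1] + 1.0) * 0.5 * (res - 1)).round().astype(int), 0, res - 1)
    return reflectance_map[v, u]

# Toy usage: shade a sphere's normals with a random stand-in for the estimated map.
H = W = 64
ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
mask = xs**2 + ys**2 < 1.0
nz = np.sqrt(np.clip(1.0 - xs**2 - ys**2, 0.0, None))
normals = np.stack([xs, -ys, nz], axis=-1) * mask[..., None]
rmap = np.random.rand(256, 256, 3)          # stands in for the network's estimate
shading = shade_from_reflectance_map(normals, rmap) * mask[..., None]
print(shading.shape)  # (64, 64, 3)
```

In the method described above, such camera-view maps are estimated by the reflectance map network and the resulting shading drives the shape-from-shading update of the signed distance function; the random array here is only a placeholder.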
Related papers
- Planar Reflection-Aware Neural Radiance Fields [32.709468082010126]
We introduce a reflection-aware NeRF that jointly models planar reflectors, such as windows, and explicitly casts reflected rays to capture the source of the high-frequency reflections.
Rendering along the primary ray results in a clean, reflection-free view, while explicitly rendering along the reflected ray allows us to reconstruct highly detailed reflections.
arXiv Detail & Related papers (2024-11-07T18:55:08Z) - NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve the state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail & Related papers (2024-06-11T09:53:18Z) - Monocular Identity-Conditioned Facial Reflectance Reconstruction [71.90507628715388]
Existing methods rely on a large amount of light-stage captured data to learn facial reflectance models.
We learn the reflectance prior in image space rather than UV space and present a framework named ID2Reflectance.
Our framework can directly estimate the reflectance maps of a single image while using limited reflectance data for training.
arXiv Detail & Related papers (2024-03-30T09:43:40Z) - Diffusion Reflectance Map: Single-Image Stochastic Inverse Rendering of Illumination and Reflectance [19.20790327389337]
Reflectance bounds the frequency spectrum of illumination in the object appearance.
We introduce the first inverse rendering method that recovers the attenuated frequency spectrum of the illumination jointly with the reflectance of an object of known geometry.
arXiv Detail & Related papers (2023-12-07T18:50:00Z) - NeRO: Neural Geometry and BRDF Reconstruction of Reflective Objects from
Multiview Images [44.1333444097976]
We present a neural rendering-based method called NeRO for reconstructing the geometry and the BRDF of reflective objects from multiview images captured in an unknown environment.
arXiv Detail & Related papers (2023-05-27T07:40:07Z) - Self-calibrating Photometric Stereo by Neural Inverse Rendering [88.67603644930466]
This paper tackles the task of uncalibrated photometric stereo for 3D object reconstruction.
We propose a new method that jointly optimizes object shape, light directions, and light intensities.
Our method demonstrates state-of-the-art accuracy in light estimation and shape recovery on real-world datasets.
arXiv Detail & Related papers (2022-07-16T02:46:15Z) - Neural Reflectance for Shape Recovery with Shadow Handling [88.67603644930466]
This paper aims at recovering the shape of a scene with unknown, non-Lambertian, and possibly spatially-varying surface materials.
We propose a coordinate-based deep network (a multilayer perceptron) to parameterize both the unknown 3D shape and the unknown reflectance at every surface point.
This network is able to leverage the observed photometric variance and shadows on the surface, and recover both surface shape and general non-Lambertian reflectance.
arXiv Detail & Related papers (2022-03-24T07:57:20Z) - Multi-view 3D Reconstruction of a Texture-less Smooth Surface of Unknown
Generic Reflectance [86.05191217004415]
Multi-view reconstruction of texture-less objects with unknown surface reflectance is a challenging task.
This paper proposes a simple and robust solution to this problem based on a co-light scanner.
arXiv Detail & Related papers (2021-05-25T01:28:54Z) - Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views (see the reprojection sketch after this list).
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)