Scalable, Detailed and Mask-Free Universal Photometric Stereo
- URL: http://arxiv.org/abs/2303.15724v1
- Date: Tue, 28 Mar 2023 04:18:01 GMT
- Title: Scalable, Detailed and Mask-Free Universal Photometric Stereo
- Authors: Satoshi Ikehata
- Abstract summary: We introduce SDM-UniPS, a groundbreaking Scalable, Detailed, Mask-free, and Universal Photometric Stereo network.
Our approach can recover astonishingly intricate surface normal maps, rivaling the quality of 3D scanners.
We present a new synthetic training dataset that encompasses a diverse range of shapes, materials, and illumination scenarios.
- Score: 4.822598110892846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce SDM-UniPS, a groundbreaking Scalable, Detailed,
Mask-free, and Universal Photometric Stereo network. Our approach can recover
astonishingly intricate surface normal maps, rivaling the quality of 3D
scanners, even when images are captured under unknown, spatially-varying
lighting conditions in uncontrolled environments. We have extended previous
universal photometric stereo networks to extract spatial-light features,
utilizing all available information in high-resolution input images and
accounting for non-local interactions among surface points. Moreover, we
present a new synthetic training dataset that encompasses a diverse range of
shapes, materials, and illumination scenarios found in real-world scenes.
Through extensive evaluation, we demonstrate that our method not only surpasses
calibrated, lighting-specific techniques on public benchmarks, but also excels
with a significantly smaller number of input images even without object masks.
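For context, classical calibrated photometric stereo (the lighting-specific baseline the paper compares against) recovers per-pixel normals by least squares under a Lambertian model with known light directions. A minimal sketch of that baseline, illustrative only and not the paper's network:

```python
import numpy as np

def lambertian_photometric_stereo(images, light_dirs):
    """Recover per-pixel unit normals and albedo from images under known lights.

    images:     (m, h, w) array of grayscale intensities, one per light
    light_dirs: (m, 3) array of unit lighting directions
    """
    m, h, w = images.shape
    I = images.reshape(m, -1)  # (m, h*w) stacked intensity matrix
    # Lambertian model: I = L @ (rho * n); solve least squares per pixel
    G, _, _, _ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)          # rho = |G|
    normals = G / np.maximum(albedo, 1e-8)      # normalize to unit normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# Synthetic check: a flat surface facing the camera (n = [0, 0, 1], rho = 1)
L = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.866],
              [0.0, 0.5, 0.866]])
n_true = np.array([0.0, 0.0, 1.0])
imgs = (L @ n_true).reshape(3, 1, 1) * np.ones((3, 4, 4))
n_est, rho = lambertian_photometric_stereo(imgs, L)
```

Unlike this baseline, SDM-UniPS requires neither the light directions nor an object mask; the sketch only shows the problem the network solves end-to-end.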
Related papers
- RMAFF-PSN: A Residual Multi-Scale Attention Feature Fusion Photometric Stereo Network [37.759675702107586]
Predicting accurate normal maps of objects from two-dimensional images in regions of complex structure and spatially varying materials is challenging.
We propose a method that fuses feature information from different resolution stages and scales of the image.
This approach preserves more physical information, such as texture and geometry of the object in complex regions.
arXiv Detail & Related papers (2024-04-11T14:05:37Z)
- Holistic Inverse Rendering of Complex Facade via Aerial 3D Scanning [38.72679977945778]
We use multi-view aerial images to reconstruct the geometry, lighting, and material of facades using neural signed distance fields (SDFs).
The experiment demonstrates the superior quality of our method on facade holistic inverse rendering, novel view synthesis, and scene editing compared to state-of-the-art baselines.
arXiv Detail & Related papers (2023-11-20T15:03:56Z)
- Deep Learning Methods for Calibrated Photometric Stereo and Beyond [86.57469194387264]
Photometric stereo recovers the surface normals of an object from multiple images with varying shading cues.
Deep learning methods have shown a powerful ability in the context of photometric stereo against non-Lambertian surfaces.
arXiv Detail & Related papers (2022-12-16T11:27:44Z)
- DiFT: Differentiable Differential Feature Transform for Multi-View Stereo [16.47413993267985]
We learn to transform the differential cues from a stack of images densely captured with a rotational motion into spatially discriminative and view-invariant per-pixel features at each view.
These low-level features can be directly fed to any existing multi-view stereo technique for enhanced 3D reconstruction.
arXiv Detail & Related papers (2022-03-16T07:12:46Z)
- Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo (MVPS) problem.
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z)
- Learning Efficient Photometric Feature Transform for Multi-view Stereo [37.26574529243778]
We learn to convert the per-pixel photometric information at each view into spatially distinctive and view-invariant low-level features.
Our framework automatically adapts to and makes efficient use of the geometric information available in different forms of input data.
arXiv Detail & Related papers (2021-03-27T02:53:15Z)
- Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
arXiv Detail & Related papers (2020-10-17T23:40:43Z)
- Deep Photometric Stereo for Non-Lambertian Surfaces [89.05501463107673]
We introduce a fully convolutional deep network for calibrated photometric stereo, which we call PS-FCN.
PS-FCN learns the mapping from reflectance observations to surface normal, and is able to handle surfaces with general and unknown isotropic reflectance.
To deal with the uncalibrated scenario where light directions are unknown, we introduce a new convolutional network, named LCNet, to estimate light directions from input images.
arXiv Detail & Related papers (2020-07-26T15:20:53Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
- Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.