Learning Efficient Photometric Feature Transform for Multi-view Stereo
- URL: http://arxiv.org/abs/2103.14794v1
- Date: Sat, 27 Mar 2021 02:53:15 GMT
- Title: Learning Efficient Photometric Feature Transform for Multi-view Stereo
- Authors: Kaizhang Kang, Cihui Xie, Ruisheng Zhu, Xiaohe Ma, Ping Tan, Hongzhi
Wu and Kun Zhou
- Abstract summary: We learn to convert the per-pixel photometric information at each view into spatially distinctive and view-invariant low-level features.
Our framework automatically adapts to and makes efficient use of the geometric information available in different forms of input data.
- Score: 37.26574529243778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel framework to learn to convert the per-pixel
photometric information at each view into spatially distinctive and
view-invariant low-level features, which can be plugged into existing
multi-view stereo pipelines for enhanced 3D reconstruction. Both the illumination conditions
during acquisition and the subsequent per-pixel feature transform can be
jointly optimized in a differentiable fashion. Our framework automatically
adapts to and makes efficient use of the geometric information available in
different forms of input data. High-quality 3D reconstructions of a variety of
challenging objects are demonstrated on data captured with an illumination
multiplexing device, as well as with a point light. Our results compare favorably
with state-of-the-art techniques.
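To make the abstract's core idea concrete, here is a minimal sketch, assuming a PyTorch-style setup: learnable illumination patterns and a per-pixel MLP are optimized jointly so that features of corresponding pixels agree across views while remaining spatially distinctive. All names, sizes, and the loss are illustrative assumptions, not the authors' implementation.
```python
import torch
import torch.nn as nn

class PhotometricFeatureTransform(nn.Module):
    """Toy sketch: learnable illumination multiplexing + per-pixel MLP.

    Assumptions (not from the paper): n_leds light sources are multiplexed
    into n_patterns lighting patterns; each pixel's response under every
    individual source is given, and the MLP maps the multiplexed
    measurements to a low-level feature vector.
    """

    def __init__(self, n_leds=64, n_patterns=8, feat_dim=16):
        super().__init__()
        # Learnable lighting patterns: each row mixes the LED intensities.
        self.patterns = nn.Parameter(torch.rand(n_patterns, n_leds))
        self.mlp = nn.Sequential(
            nn.Linear(n_patterns, 64), nn.ReLU(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, led_responses):
        # led_responses: (n_pixels, n_leds) response of each pixel to each LED.
        # Simulate captured images under the learned patterns
        # (light transport is linear in the source intensities).
        measurements = led_responses @ self.patterns.clamp(0, 1).t()
        feats = self.mlp(measurements)
        return nn.functional.normalize(feats, dim=-1)

def view_invariance_loss(f_view_a, f_view_b, f_negatives):
    """Pull features of corresponding pixels together across views,
    push features of non-corresponding pixels apart (distinctiveness)."""
    pos = (f_view_a - f_view_b).pow(2).sum(-1).mean()
    neg = (f_view_a.unsqueeze(1) - f_negatives.unsqueeze(0)).pow(2).sum(-1)
    return pos + torch.relu(1.0 - neg).mean()  # hinge on negative distances

model = PhotometricFeatureTransform()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
```
Because the pattern matrix and the MLP are both ordinary parameters, gradients from the feature loss flow into the acquisition side as well, which is the sense in which the illumination and the per-pixel transform are "jointly optimized in a differentiable fashion".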
Related papers
- Multi-Modal Dataset Acquisition for Photometrically Challenging Object [56.30027922063559]
This paper addresses the limitations of current datasets for 3D vision tasks in terms of accuracy, size, realism, and suitable imaging modalities for photometrically challenging objects.
We propose a novel annotation and acquisition pipeline that enhances existing 3D perception and 6D object pose datasets.
arXiv Detail & Related papers (2023-08-21T10:38:32Z)
- Learning Photometric Feature Transform for Free-form Object Scan [34.61673205691415]
We propose a novel framework to automatically learn to aggregate and transform photometric measurements from unstructured views.
We build a system to reconstruct the geometry and anisotropic reflectance of a variety of challenging objects from hand-held scans.
Results are validated against reconstructions from a professional 3D scanner and photographs, and compare favorably with state-of-the-art techniques.
arXiv Detail & Related papers (2023-08-07T11:34:27Z)
- Towards Scalable Multi-View Reconstruction of Geometry and Materials [27.660389147094715]
We propose a novel method for joint recovery of camera pose, object geometry and spatially-varying Bidirectional Reflectance Distribution Function (svBRDF) of 3D scenes.
The inputs are high-resolution RGBD images captured by a mobile, hand-held capture system with point lights for active illumination.
arXiv Detail & Related papers (2023-06-06T15:07:39Z)
- DiFT: Differentiable Differential Feature Transform for Multi-View Stereo [16.47413993267985]
We learn to transform the differential cues from a stack of images densely captured with a rotational motion into spatially discriminative and view-invariant per-pixel features at each view.
These low-level features can be directly fed to any existing multi-view stereo technique for enhanced 3D reconstruction.
arXiv Detail & Related papers (2022-03-16T07:12:46Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo (MVPS) problem.
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network; a sketch of the classical PS normal solve appears below.
arXiv Detail & Related papers (2021-10-11T20:20:03Z)
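The "PS image formation model" mentioned above is, in its classical Lambertian form, I = rho * max(n . l, 0) under distant lights. Below is a standard textbook least-squares normal solve (Woodham-style), included only as background; it is not this paper's deep PS network.
```python
import numpy as np

def lambertian_ps_normals(intensities, light_dirs):
    """Classical photometric stereo for one pixel under distant lights.

    intensities: (m,) observed pixel brightness under m known lights.
    light_dirs:  (m, 3) light directions.
    Solves I = L @ (albedo * n) in the least-squares sense.
    """
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(g)
    normal = g / max(albedo, 1e-8)
    return normal, albedo

# Tiny example: 3 lights, a known normal, recover it back.
L = np.array([[0.0, 0.0, 1.0], [0.7, 0.0, 0.7], [0.0, 0.7, 0.7]])
n_true = np.array([0.0, 0.0, 1.0])
I = 0.5 * L @ n_true               # albedo 0.5, no shadows
n_est, rho = lambertian_ps_normals(I, L)
print(n_est, rho)                  # ~[0, 0, 1], ~0.5
```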
- Multi-Dimension Fusion Network for Light Field Spatial Super-Resolution using Dynamic Filters [23.82780431526054]
We introduce a novel learning-based framework to improve the spatial resolution of light fields.
Our reconstructed images also show sharp details and distinct lines in both sub-aperture images and epipolar plane images; a toy sketch of per-pixel dynamic filtering appears below.
arXiv Detail & Related papers (2020-08-26T09:05:07Z)
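"Dynamic filters" are kernels predicted per pixel rather than shared across the image. The sketch below shows only the generic mechanism of applying such predicted kernels; the shapes and the softmax normalization are illustrative assumptions, not the paper's network.
```python
import torch
import torch.nn.functional as F

def apply_dynamic_filters(image, kernels):
    """Apply a different k x k kernel at every pixel.

    image:   (B, C, H, W)
    kernels: (B, k*k, H, W) per-pixel filters (e.g. predicted by a CNN).
    """
    b, c, h, w = image.shape
    k = int(kernels.shape[1] ** 0.5)
    # Extract the k x k neighborhood of every pixel: (B, C*k*k, H*W).
    patches = F.unfold(image, kernel_size=k, padding=k // 2)
    patches = patches.view(b, c, k * k, h, w)
    # Weighted sum of each neighborhood with its own kernel.
    return (patches * kernels.unsqueeze(1)).sum(dim=2)

x = torch.rand(1, 3, 8, 8)
k = torch.softmax(torch.rand(1, 9, 8, 8), dim=1)  # 3x3 kernels, sum to 1
y = apply_dynamic_filters(x, k)
print(y.shape)  # torch.Size([1, 3, 8, 8])
```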
- Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2D detections; a minimal pinhole-projection example appears below.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
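For background, a "camera projection operator" here is the pinhole map from 3D joints to 2D detections. The sketch below is generic textbook geometry, not this paper's architecture; the intrinsics and pose are made-up numbers.
```python
import numpy as np

def project_points(P, X):
    """Project 3D points into an image with a 3x4 camera matrix.

    P: (3, 4) projection matrix, P = K @ [R | t].
    X: (N, 3) 3D joint positions in world coordinates.
    Returns (N, 2) pixel coordinates.
    """
    X_h = np.hstack([X, np.ones((X.shape[0], 1))])  # homogeneous coords
    x = X_h @ P.T                                   # (N, 3)
    return x[:, :2] / x[:, 2:3]                     # perspective divide

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
Rt = np.hstack([np.eye(3), np.array([[0.0], [0.0], [2.0]])])  # camera 2 m back
joints = np.array([[0.0, 0.0, 0.0], [0.1, 0.2, 0.0]])
print(project_points(K @ Rt, joints))  # ~(320, 240) and ~(345, 290)
```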
- Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources; a toy near-light shading model is sketched after this entry.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
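A nearby point light breaks the distant-light assumption of classical photometric stereo: the incident direction and the inverse-square falloff vary per surface point. The following is a generic sketch of that shading model, not this paper's calibrated setup.
```python
import numpy as np

def near_light_shading(points, normals, light_pos, light_power=1.0):
    """Lambertian shading under a nearby point light.

    points:  (N, 3) surface positions; normals: (N, 3) unit normals.
    Unlike the distant-light model, both the light direction and the
    1/r^2 attenuation depend on each surface point.
    """
    to_light = light_pos - points                  # (N, 3)
    r = np.linalg.norm(to_light, axis=1, keepdims=True)
    l = to_light / r                               # per-point light direction
    n_dot_l = np.clip((normals * l).sum(axis=1), 0.0, None)
    return light_power * n_dot_l / r[:, 0] ** 2    # inverse-square falloff

pts = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
nrm = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
print(near_light_shading(pts, nrm, np.array([0.0, 0.0, 1.0])))
```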
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.