DiFT: Differentiable Differential Feature Transform for Multi-View Stereo
- URL: http://arxiv.org/abs/2203.08435v1
- Date: Wed, 16 Mar 2022 07:12:46 GMT
- Title: DiFT: Differentiable Differential Feature Transform for Multi-View Stereo
- Authors: Kaizhang Kang, Chong Zeng, Hongzhi Wu, and Kun Zhou
- Abstract summary: We learn to transform the differential cues from a stack of images densely captured with a rotational motion into spatially discriminative and view-invariant per-pixel features at each view.
These low-level features can be directly fed to any existing multi-view stereo technique for enhanced 3D reconstruction.
- Score: 16.47413993267985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel framework to automatically learn to transform the
differential cues from a stack of images densely captured with a rotational
motion into spatially discriminative and view-invariant per-pixel features at
each view. These low-level features can be directly fed to any existing
multi-view stereo technique for enhanced 3D reconstruction. The lighting
condition during acquisition can also be jointly optimized in a differentiable
fashion. We sample from a dozen pre-scanned objects with a wide variety of
geometry and reflectance to synthesize a large amount of high-quality training
data. The effectiveness of our features is demonstrated on a number of
challenging objects acquired with a lightstage, comparing favorably with
state-of-the-art techniques. Finally, we explore additional applications of
geometric detail visualization and computational stylization of complex
appearance.
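To make the pipeline concrete, here is a minimal PyTorch sketch of the idea as described in the abstract: a per-pixel network maps differential cues (finite differences between adjacent frames of the rotational capture) to a compact, view-invariant feature vector, with the acquisition lighting exposed as learnable parameters so it can be optimized jointly. All names, layer widths, and the lighting parameterization (DiffFeatureTransform, num_views, feat_dim, light_weights) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class DiffFeatureTransform(nn.Module):
    """Sketch of a per-pixel transform from differential cues to
    view-invariant features. Layer widths, the feature dimension and the
    lighting parameterization are assumptions, not from the paper."""

    def __init__(self, num_views: int = 32, feat_dim: int = 8):
        super().__init__()
        # One learnable intensity per captured frame, standing in for the
        # paper's jointly optimized, differentiable lighting condition.
        self.light_weights = nn.Parameter(torch.ones(num_views))
        self.mlp = nn.Sequential(
            nn.Linear(num_views - 1, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, stack: torch.Tensor) -> torch.Tensor:
        # stack: (B, V, H, W) intensities captured along the rotational motion.
        lit = stack * self.light_weights.view(1, -1, 1, 1)
        # Differential cues: finite differences between adjacent frames.
        diffs = lit[:, 1:] - lit[:, :-1]        # (B, V-1, H, W)
        pix = diffs.permute(0, 2, 3, 1)         # (B, H, W, V-1)
        feats = self.mlp(pix)                   # (B, H, W, feat_dim)
        return feats.permute(0, 3, 1, 2)        # (B, feat_dim, H, W)
```

At inference, the per-view feature maps produced this way would simply replace raw pixel colors as the matching signal in an off-the-shelf multi-view stereo pipeline.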
Related papers
- Learning Photometric Feature Transform for Free-form Object Scan [34.61673205691415]
We propose a novel framework to automatically learn to aggregate and transform photometric measurements from unstructured views.
We build a system to reconstruct the geometry and anisotropic reflectance of a variety of challenging objects from hand-held scans.
Results are validated against reconstructions from a professional 3D scanner and photographs, and compare favorably with state-of-the-art techniques.
arXiv Detail & Related papers (2023-08-07T11:34:27Z)
- Multi-Spectral Image Stitching via Spatial Graph Reasoning [52.27796682972484]
We propose a spatial graph reasoning based multi-spectral image stitching method.
We embed multi-scale complementary features from the same view position into a set of nodes.
By introducing long-range coherence along the spatial and channel dimensions, complementary pixel relations and channel interdependencies aid the reconstruction of aligned multi-view features.
arXiv Detail & Related papers (2023-07-31T15:04:52Z)
- Towards Scalable Multi-View Reconstruction of Geometry and Materials [27.660389147094715]
We propose a novel method for joint recovery of camera pose, object geometry and spatially-varying Bidirectional Reflectance Distribution Function (svBRDF) of 3D scenes.
The inputs are high-resolution RGBD images captured by a mobile, hand-held capture system with point lights for active illumination.
arXiv Detail & Related papers (2023-06-06T15:07:39Z)
- Unifying Flow, Stereo and Depth Estimation [121.54066319299261]
We present a unified formulation and model for three motion and 3D perception tasks.
We formulate all three tasks as a unified dense correspondence matching problem.
Our model naturally enables cross-task transfer since the model architecture and parameters are shared across tasks.
arXiv Detail & Related papers (2022-11-10T18:59:54Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Shape and Reflectance Reconstruction in Uncontrolled Environments by Differentiable Rendering [27.41344744849205]
We propose an efficient method to reconstruct the scene's 3D geometry and reflectance from multi-view photography using conventional hand-held cameras.
Our method also shows superior performance compared to state-of-the-art alternatives in novel view synthesis, both visually and quantitatively.
arXiv Detail & Related papers (2021-10-25T14:09:10Z)
- Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo problem (MVPS).
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z)
- Differentiable Stereopsis: Meshes from multiple views using differentiable rendering [72.25348629612782]
We propose Differentiable Stereopsis, a multi-view stereo approach that reconstructs shape and texture from few input views and noisy cameras.
We pair traditional stereopsis and modern differentiable rendering to build an end-to-end model which predicts textured 3D meshes of objects with varying topologies and shapes.
arXiv Detail & Related papers (2021-10-11T17:59:40Z)
- DeepMultiCap: Performance Capture of Multiple Characters Using Sparse Multiview Cameras [63.186486240525554]
DeepMultiCap is a novel method for multi-person performance capture using sparse multi-view cameras.
Our method can capture time-varying surface details without the need for pre-scanned template models.
arXiv Detail & Related papers (2021-05-01T14:32:13Z)
- Learning Efficient Photometric Feature Transform for Multi-view Stereo [37.26574529243778]
We learn to convert the per-pixel photometric information at each view into spatially distinctive and view-invariant low-level features.
Our framework automatically adapts to and makes efficient use of the geometric information available in different forms of input data.
arXiv Detail & Related papers (2021-03-27T02:53:15Z)