DeepSim-Nets: Deep Similarity Networks for Stereo Image Matching
- URL: http://arxiv.org/abs/2304.08056v1
- Date: Mon, 17 Apr 2023 08:15:47 GMT
- Title: DeepSim-Nets: Deep Similarity Networks for Stereo Image Matching
- Authors: Mohamed Ali Chebbi, Ewelina Rupnik, Marc Pierrot-Deseilligny, Paul
Lopes
- Abstract summary: We present three multi-scale similarity learning architectures, or DeepSim networks.
These models learn pixel-level matching with a contrastive loss and are agnostic to the geometry of the considered scene.
We establish a middle ground between hybrid and end-to-end approaches by learning to densely allocate all corresponding pixels of an epipolar pair at once.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present three multi-scale similarity learning architectures, or DeepSim
networks. These models learn pixel-level matching with a contrastive loss and
are agnostic to the geometry of the considered scene. We establish a middle
ground between hybrid and end-to-end approaches by learning to densely allocate
all corresponding pixels of an epipolar pair at once. Our features are learnt
on large image tiles to be expressive and capture the scene's wider context. We
also demonstrate that curated sample mining can enhance the overall robustness
of the predicted similarities and improve the performance on radiometrically
homogeneous areas. We run experiments on aerial and satellite datasets. Our
DeepSim-Nets outperform the baseline hybrid approaches and generalize better to
unseen scene geometries than end-to-end methods. Our flexible architecture can
be readily adopted in standard multi-resolution image matching pipelines.
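To illustrate the pixel-level contrastive objective described in the abstract, here is a minimal NumPy sketch. The hinge-over-cosine-similarity form, the margin value, and the anchor/positive/negative pairing are all illustrative assumptions, not the paper's actual loss or sample-mining strategy:

```python
import numpy as np

def contrastive_matching_loss(anchor, positive, negatives, margin=0.3):
    """Hinge-style contrastive loss over cosine similarities.

    anchor, positive: (d,) feature vectors of a matching pixel pair.
    negatives: (n, d) feature vectors of non-matching pixels.
    Pushes the positive similarity above every negative similarity
    by at least `margin`.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    s_pos = cos(anchor, positive)
    s_neg = np.array([cos(anchor, n) for n in negatives])
    return float(np.maximum(0.0, margin + s_neg - s_pos).sum())

# Toy example: the positive is a near-duplicate of the anchor feature,
# the negatives are unrelated random features.
rng = np.random.default_rng(0)
a = rng.normal(size=64)
p = a + 0.05 * rng.normal(size=64)   # high similarity to the anchor
neg = rng.normal(size=(8, 64))       # low similarity to the anchor
loss = contrastive_matching_loss(a, p, neg)
```

Curated sample mining, as mentioned above, would amount to choosing the `negatives` deliberately (e.g. hard negatives near the true match) rather than at random.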
Related papers
- RGM: A Robust Generalizable Matching Model [49.60975442871967]
We propose a deep model for sparse and dense matching, termed RGM (Robust Generalist Matching).
To narrow the gap between synthetic training samples and real-world scenarios, we build a new, large-scale dataset with sparse correspondence ground truth.
We are able to mix up various dense and sparse matching datasets, significantly improving the training diversity.
arXiv Detail & Related papers (2023-10-18T07:30:08Z)
- GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves the state-of-the-art performance, especially when compared in the case of using only a few propagation steps.
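Spatial propagation of the kind summarized above can be sketched, in highly simplified form, as affinity-weighted blending of each depth estimate with its neighbours. This generic 4-neighbourhood version is an illustrative assumption, not GraphCSPN's dynamic graph construction:

```python
import numpy as np

def propagation_step(depth, affinity):
    """One affinity-weighted spatial propagation step (4-neighbourhood).

    depth: (H, W) current depth estimate.
    affinity: (H, W, 4) non-negative weights for the up/down/left/right
    neighbours of each pixel; the pixel keeps a residual self-weight so
    the blend is a convex combination.
    """
    neighbours = np.stack([
        np.roll(depth, 1, axis=0),   # up
        np.roll(depth, -1, axis=0),  # down
        np.roll(depth, 1, axis=1),   # left
        np.roll(depth, -1, axis=1),  # right
    ], axis=-1)
    w_sum = affinity.sum(axis=-1)
    self_w = np.maximum(1.0 - w_sum, 0.0)  # residual weight for the pixel itself
    blended = self_w * depth + (affinity * neighbours).sum(axis=-1)
    return blended / (self_w + w_sum)
```

Repeating such a step diffuses reliable depths into uncertain regions, which is why the summary stresses performance with only a few propagation steps.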
arXiv Detail & Related papers (2022-10-19T17:56:03Z)
- Multi-Frame Self-Supervised Depth with Transformers [33.00363651105475]
We propose a novel transformer architecture for cost volume generation.
We use depth-discretized epipolar sampling to select matching candidates.
We refine predictions through a series of self- and cross-attention layers.
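For a rectified stereo pair, selecting matching candidates along epipolar lines reduces to building a disparity cost volume. The NumPy sketch below uses a hypothetical negative-SAD matching cost over dense features; it is a plain plane-sweep illustration, not the transformer-based cost volume of the paper:

```python
import numpy as np

def disparity_cost_volume(left_feat, right_feat, max_disp):
    """Plane-sweep style cost volume for a rectified stereo pair.

    left_feat, right_feat: (H, W, C) dense feature maps.
    Returns a (max_disp, H, W) volume of negative-SAD matching costs;
    disparity candidate d compares left pixel (y, x) with right pixel
    (y, x - d). Out-of-range candidates are set to -inf.
    """
    H, W, C = left_feat.shape
    volume = np.full((max_disp, H, W), -np.inf)
    for d in range(max_disp):
        sad = np.abs(left_feat[:, d:] - right_feat[:, : W - d]).sum(axis=-1)
        volume[d, :, d:] = -sad  # higher = better match
    return volume

# Toy check: the right features are the left features shifted by 3 columns,
# so the winning disparity should be 3 away from the image border.
rng = np.random.default_rng(1)
left = rng.normal(size=(4, 16, 8))
right = np.roll(left, shift=-3, axis=1)
volume = disparity_cost_volume(left, right, max_disp=6)
disp = volume.argmax(axis=0)  # best disparity per pixel
```

Depth-discretized sampling generalizes this idea from horizontal shifts to warps induced by a set of depth hypotheses.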
arXiv Detail & Related papers (2022-04-15T19:04:57Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Multi-scale Matching Networks for Semantic Correspondence [38.904735120815346]
The proposed method achieves state-of-the-art performance on three popular benchmarks with high computational efficiency.
Our multi-scale matching network can be trained end-to-end easily with few additional learnable parameters.
arXiv Detail & Related papers (2021-07-31T10:57:24Z)
- ACORN: Adaptive Coordinate Networks for Neural Scene Representation [40.04760307540698]
Current neural representations fail to accurately represent images at resolutions greater than a megapixel or 3D scenes with more than a few hundred thousand polygons.
We introduce a new hybrid implicit-explicit network architecture and training strategy that adaptively allocates resources during training and inference.
We demonstrate the first experiments that fit gigapixel images to nearly 40 dB peak signal-to-noise ratio.
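For reference, PSNR is the fidelity metric behind the "40 dB" figure above. A quick sketch of the standard computation on toy data (not the paper's gigapixel experiments):

```python
import numpy as np

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference - reconstruction) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 0.01 on a [0, 1] image gives MSE = 1e-4, i.e. 40 dB
# (clipping to the valid range is ignored for simplicity).
ref = np.linspace(0.0, 1.0, 256).reshape(16, 16)
approx = ref + 0.01
```

Each extra 20 dB corresponds to a 10x reduction in RMS error, so 40 dB is a demanding target at gigapixel scale.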
arXiv Detail & Related papers (2021-05-06T16:21:38Z)
- Monocular Depth Parameterizing Networks [15.791732557395552]
We propose a network structure that provides a parameterization of a set of depth maps with feasible shapes.
This allows us to search the shapes for a photo consistent solution with respect to other images.
Our experimental evaluation shows that our method generates more accurate depth maps and generalizes better than competing state-of-the-art approaches.
arXiv Detail & Related papers (2020-12-21T13:02:41Z)
- Multi-view Depth Estimation using Epipolar Spatio-Temporal Networks [87.50632573601283]
We present a novel method for multi-view depth estimation from a single video.
Our method achieves temporally coherent depth estimation results by using a novel Epipolar Spatio-Temporal (EST) transformer.
To reduce the computational cost, inspired by recent Mixture-of-Experts models, we design a compact hybrid network.
arXiv Detail & Related papers (2020-11-26T04:04:21Z)
- Recursive Multi-model Complementary Deep Fusion for Robust Salient Object Detection via Parallel Sub Networks [62.26677215668959]
Fully convolutional networks have shown outstanding performance in the salient object detection (SOD) field.
This paper proposes a "wider" network architecture which consists of parallel sub-networks with totally different architectures.
Experiments on several famous benchmarks clearly demonstrate the superior performance, good generalization, and powerful learning ability of the proposed wider framework.
arXiv Detail & Related papers (2020-08-07T10:39:11Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.