S2DNet: Learning Accurate Correspondences for Sparse-to-Dense Feature
Matching
- URL: http://arxiv.org/abs/2004.01673v1
- Date: Fri, 3 Apr 2020 17:04:34 GMT
- Title: S2DNet: Learning Accurate Correspondences for Sparse-to-Dense Feature
Matching
- Authors: Hugo Germain, Guillaume Bourmaud, Vincent Lepetit
- Abstract summary: S2DNet is a novel feature matching pipeline designed and trained to efficiently establish robust and accurate correspondences.
We show that S2DNet achieves state-of-the-art results on the HPatches benchmark, as well as on several long-term visual localization datasets.
- Score: 36.48376198922595
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Establishing robust and accurate correspondences is a fundamental backbone to
many computer vision algorithms. While recent learning-based feature matching
methods have shown promising results in providing robust correspondences under
challenging conditions, they are often limited in terms of precision. In this
paper, we introduce S2DNet, a novel feature matching pipeline, designed and
trained to efficiently establish both robust and accurate correspondences. By
leveraging a sparse-to-dense matching paradigm, we cast the correspondence
learning problem as a supervised classification task to learn to output highly
peaked correspondence maps. We show that S2DNet achieves state-of-the-art
results on the HPatches benchmark, as well as on several long-term visual
localization datasets.
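To make the sparse-to-dense idea in the abstract concrete, below is a minimal sketch: sparse keypoint descriptors from one image are correlated against a dense feature map of the other image, and the resulting correspondence map is supervised as a classification over pixel locations so that it becomes highly peaked. Tensor shapes, function names, and the loss wiring are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of sparse-to-dense matching cast as classification.
# Shapes and names are illustrative; this is not the authors' implementation.
import torch
import torch.nn.functional as F

def correspondence_maps(sparse_desc, dense_feat):
    """sparse_desc: (N, C) descriptors at N keypoints detected in image A.
    dense_feat: (C, H, W) dense feature map extracted from image B.
    Returns (N, H, W) correspondence maps (one per keypoint)."""
    C, H, W = dense_feat.shape
    maps = sparse_desc @ dense_feat.view(C, H * W)   # (N, H*W) dot products
    return maps.view(-1, H, W)

def classification_loss(maps, gt_xy):
    """Cross-entropy over all H*W locations, with the ground-truth pixel of
    each keypoint as the target class (the 'peaked map' supervision)."""
    N, H, W = maps.shape
    target = gt_xy[:, 1] * W + gt_xy[:, 0]           # flatten (x, y) -> class index
    return F.cross_entropy(maps.view(N, H * W), target)

# Usage with random tensors standing in for network outputs:
desc = torch.randn(128, 256)            # 128 keypoints, 256-D descriptors
feat = torch.randn(256, 60, 80)         # dense feature map of the other image
gt = torch.randint(0, 60, (128, 2))     # ground-truth (x, y) locations
gt[:, 0] = torch.randint(0, 80, (128,))
loss = classification_loss(correspondence_maps(desc, feat), gt)
```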
Related papers
- GSSF: Generalized Structural Sparse Function for Deep Cross-modal Metric Learning [51.677086019209554]
We propose a Generalized Structural Sparse function to capture powerful relationships across modalities for pairwise similarity learning.
The distance metric combines two structured forms of weighting: diagonal and block-diagonal terms.
Experiments on cross-modal and two additional uni-modal retrieval tasks validate its superiority and flexibility.
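The summary only names diagonal and block-diagonal terms; the hypothetical sketch below shows one way a structured, Mahalanobis-style distance could combine the two, purely to illustrate the shapes involved. It is not GSSF's actual formulation.

```python
# Hypothetical sketch of a structured distance with diagonal and
# block-diagonal terms; parameter names and structure are assumptions,
# not GSSF's actual formulation.
import torch

def structured_distance(x, y, diag_w, block_w, block_size):
    """x, y: (D,) embeddings from two modalities.
    diag_w:  (D,) per-dimension weights (diagonal term).
    block_w: (D // block_size, block_size, block_size) per-block matrices
             (block-diagonal term)."""
    d = x - y
    diag_term = (diag_w * d * d).sum()
    blocks = d.view(-1, block_size)                      # (D/B, B)
    block_term = torch.einsum("nb,nbc,nc->", blocks, block_w, blocks)
    return diag_term + block_term

x, y = torch.randn(64), torch.randn(64)
dist = structured_distance(x, y, torch.ones(64), torch.eye(8).repeat(8, 1, 1), 8)
```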
arXiv Detail & Related papers (2024-10-20T03:45:50Z)
- Match me if you can: Semi-Supervised Semantic Correspondence Learning with Unpaired Images [76.47980643420375]
This paper builds on the hypothesis that learning semantic correspondences is inherently data-hungry.
We demonstrate that a simple machine annotator can reliably enrich the set of paired keypoints via machine supervision.
Our models surpass current state-of-the-art models on semantic correspondence learning benchmarks like SPair-71k, PF-PASCAL, and PF-WILLOW.
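As a rough illustration of the "machine annotator" idea, the sketch below shows generic pseudo-labelling: a pretrained matcher proposes keypoint pairs on candidate image pairs formed from unpaired data, and only confident proposals are added to the training set. All names, signatures, and the confidence threshold are hypothetical.

```python
# Generic pseudo-labelling sketch for the "machine annotator" idea.
# All names are hypothetical; this is not the paper's pipeline.
def machine_annotate(matcher, image_a, image_b, conf_threshold=0.9):
    """matcher(image_a, image_b) is assumed to return a list of
    (kp_a, kp_b, confidence) tuples; only confident matches survive."""
    proposals = matcher(image_a, image_b)
    return [(kp_a, kp_b) for kp_a, kp_b, conf in proposals if conf >= conf_threshold]

def enrich_training_set(matcher, labelled_pairs, candidate_image_pairs):
    """Augment human-labelled keypoint pairs with machine-annotated ones
    harvested from candidate image pairs formed out of unpaired images."""
    extra = []
    for image_a, image_b in candidate_image_pairs:
        extra.extend(machine_annotate(matcher, image_a, image_b))
    return labelled_pairs + extra
```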
arXiv Detail & Related papers (2023-11-30T13:22:15Z)
- Q-REG: End-to-End Trainable Point Cloud Registration with Surface Curvature [81.25511385257344]
We present a novel solution, Q-REG, which utilizes rich geometric information to estimate the rigid pose from a single correspondence.
Q-REG allows robust estimation to be formalized as an exhaustive search, hence enabling end-to-end training.
We demonstrate in the experiments that Q-REG is agnostic to the correspondence matching method and provides consistent improvement both when used only in inference and in end-to-end training.
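Estimating a rigid pose from a single correspondence generally requires richer local geometry than a point position alone, for example a full local reference frame at each matched point (e.g. built from normals and curvature). The sketch below illustrates that general idea under that assumption; it is not Q-REG's exact estimator.

```python
# Sketch: a rigid pose from a single correspondence, assuming each matched
# point carries an orthonormal local reference frame. Illustrative only.
import numpy as np

def pose_from_single_correspondence(p, frame_p, q, frame_q):
    """p, q: (3,) matched points; frame_p, frame_q: (3, 3) orthonormal local
    frames (columns are axes). Returns R, t with q ~= R @ p + t and
    frame_q ~= R @ frame_p."""
    R = frame_q @ frame_p.T        # aligns the source frame onto the target frame
    t = q - R @ p
    return R, t

# Round-trip check with a known rotation:
theta = np.pi / 5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
p, frame_p = np.array([1.0, 2.0, 3.0]), np.eye(3)
q = R_true @ p + np.array([0.5, -0.2, 0.1])
R, t = pose_from_single_correspondence(p, frame_p, q, R_true @ frame_p)
```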
arXiv Detail & Related papers (2023-09-27T20:58:53Z)
- D2Match: Leveraging Deep Learning and Degeneracy for Subgraph Matching [18.53692718028551]
Subgraph matching is a fundamental building block for graph-based applications.
We develop D2Match by leveraging the efficiency of Deep learning and Degeneracy for subgraph matching.
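For background on the term, "degeneracy" in graph theory is the smallest k such that every subgraph has a vertex of degree at most k; it can be computed with a degeneracy ordering, as in the small sketch below. How D2Match actually exploits degeneracy is not described in this summary.

```python
# Background sketch: graph degeneracy via a degeneracy ordering, obtained by
# repeatedly removing a minimum-degree vertex. Illustrates the term only.
def degeneracy_ordering(adj):
    """adj: dict mapping each vertex to a set of neighbours.
    Returns (degeneracy, ordering)."""
    degrees = {v: len(nbrs) for v, nbrs in adj.items()}
    remaining = {v: set(nbrs) for v, nbrs in adj.items()}
    ordering, degeneracy = [], 0
    while remaining:
        v = min(remaining, key=lambda u: degrees[u])   # vertex of minimum degree
        degeneracy = max(degeneracy, degrees[v])
        ordering.append(v)
        for u in remaining[v]:
            remaining[u].discard(v)
            degrees[u] -= 1
        del remaining[v], degrees[v]
    return degeneracy, ordering

# A 4-cycle has degeneracy 2:
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(degeneracy_ordering(adj))
```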
arXiv Detail & Related papers (2023-06-10T08:35:00Z)
- NCP: Neural Correspondence Prior for Effective Unsupervised Shape Matching [31.61255365182462]
We present Neural Correspondence Prior (NCP), a new paradigm for computing correspondences between 3D shapes.
Our approach is fully unsupervised and can lead to high-quality correspondences even in challenging cases.
We show that NCP is data-efficient, fast, and achieves state-of-the-art results on many tasks.
arXiv Detail & Related papers (2023-01-14T07:22:18Z)
- SRFeat: Learning Locally Accurate and Globally Consistent Non-Rigid Shape Correspondence [36.44119664239748]
We present a learning-based framework that combines the local accuracy of contrastive learning with the global consistency of geometric approaches.
Our framework is general and is applicable to local feature learning in both the 3D and 2D domains.
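One plausible reading of "local accuracy plus global consistency" is an InfoNCE-style contrastive term on corresponding points combined with a consistency regularizer on the induced soft correspondences. The sketch below illustrates that combination; the exact losses used by SRFeat are not reproduced here.

```python
# Sketch of a local contrastive term plus a global consistency regularizer.
# One plausible combination; not SRFeat's actual losses.
import torch
import torch.nn.functional as F

def contrastive_term(feat_a, feat_b, temperature=0.07):
    """feat_a, feat_b: (N, C) features of N corresponding points on two shapes.
    InfoNCE over the N x N similarity matrix, ground truth on the diagonal."""
    sim = F.normalize(feat_a, dim=1) @ F.normalize(feat_b, dim=1).T
    target = torch.arange(feat_a.shape[0])
    return F.cross_entropy(sim / temperature, target)

def consistency_term(soft_corr, map_a, map_b):
    """soft_corr: (N, M) soft correspondence matrix from shape A to shape B.
    map_a: (N, K), map_b: (M, K) smooth functions (e.g. a low-frequency basis)
    on each shape; transferring map_b through soft_corr should give map_a."""
    return F.mse_loss(soft_corr @ map_b, map_a)
```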
arXiv Detail & Related papers (2022-09-16T09:11:12Z)
- ABCNet v2: Adaptive Bezier-Curve Network for Real-time End-to-end Text Spotting [108.93803186429017]
End-to-end text-spotting aims to integrate detection and recognition in a unified framework.
Here, we tackle end-to-end text spotting by presenting Adaptive Bezier Curve Network v2 (ABCNet v2).
Our main contributions are four-fold: 1) For the first time, we adaptively fit arbitrarily-shaped text with a parameterized Bezier curve, which, compared with segmentation-based methods, provides not only structured output but also a controllable representation.
Comprehensive experiments conducted on various bilingual (English and Chinese) benchmark datasets demonstrate that ABCNet v2 can achieve state-of-the-art performance.
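For readers unfamiliar with the parameterization, the sketch below samples points on a cubic Bezier curve from its four control points, the kind of structured boundary ABCNet fits to curved text. The fitting and regression details of ABCNet v2 itself are not reproduced here.

```python
# Minimal sketch: sampling points on a cubic Bezier curve from its four
# control points via the Bernstein basis. Fitting details are not reproduced.
import numpy as np

def cubic_bezier(control_points, num_samples=20):
    """control_points: (4, 2) array of 2-D control points.
    Returns (num_samples, 2) points on the curve."""
    p = np.asarray(control_points, dtype=float)
    t = np.linspace(0.0, 1.0, num_samples)[:, None]
    b = np.hstack([(1 - t) ** 3,
                   3 * t * (1 - t) ** 2,
                   3 * t ** 2 * (1 - t),
                   t ** 3])                    # (num_samples, 4) Bernstein weights
    return b @ p

# One boundary (e.g. the top edge of a curved word):
top = cubic_bezier([[0, 0], [10, 5], [20, 5], [30, 0]])
```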
arXiv Detail & Related papers (2021-05-08T07:46:55Z)
- RPM-Net: Robust Point Matching using Learned Features [79.52112840465558]
RPM-Net is a deep learning-based approach for rigid point cloud registration that is less sensitive to initialization and more robust.
Unlike some existing methods, our RPM-Net handles missing correspondences and point clouds with partial visibility.
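The name points back to classical Robust Point Matching, where missing correspondences are handled by a soft assignment matrix with slack entries, normalized by Sinkhorn-style iterations so that points without a counterpart can stay unmatched. The sketch below shows that classical mechanism in simplified form; it is not the RPM-Net network itself.

```python
# Sketch of the classical soft-assignment idea behind Robust Point Matching:
# Sinkhorn-style normalization with slack bins for unmatched points.
# Simplified illustration; not the RPM-Net network.
import numpy as np

def soft_assignment(dist, beta=5.0, n_iters=20):
    """dist: (N, M) pairwise distances between two point clouds.
    Returns an (N+1, M+1) approximately doubly-normalized match matrix;
    the last row/column are slack bins for unmatched points."""
    n, m = dist.shape
    a = np.ones((n + 1, m + 1))
    a[:n, :m] = np.exp(-beta * dist)
    for _ in range(n_iters):
        a[:n, :] /= a[:n, :].sum(axis=1, keepdims=True)   # rows sum to 1
        a[:, :m] /= a[:, :m].sum(axis=0, keepdims=True)   # columns sum to 1
    return a

dist = np.random.rand(50, 40)   # partial overlap: 50 vs 40 points
match = soft_assignment(dist)
```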
arXiv Detail & Related papers (2020-03-30T13:45:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.