SRFeat: Learning Locally Accurate and Globally Consistent Non-Rigid Shape Correspondence
- URL: http://arxiv.org/abs/2209.07806v1
- Date: Fri, 16 Sep 2022 09:11:12 GMT
- Title: SRFeat: Learning Locally Accurate and Globally Consistent Non-Rigid Shape Correspondence
- Authors: Lei Li, Souhaib Attaiki, Maks Ovsjanikov
- Abstract summary: We present a learning-based framework that combines the local accuracy of contrastive learning with the global consistency of geometric approaches.
Our framework is general and is applicable to local feature learning in both the 3D and 2D domains.
- Score: 36.44119664239748
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we present a novel learning-based framework that combines the
local accuracy of contrastive learning with the global consistency of geometric
approaches, for robust non-rigid matching. We first observe that while
contrastive learning can lead to powerful point-wise features, the learned
correspondences commonly lack smoothness and consistency, owing to the purely
combinatorial nature of the standard contrastive losses. To overcome this
limitation we propose to boost contrastive feature learning with two types of
smoothness regularization that inject geometric information into correspondence
learning. With this novel combination in hand, the resulting features are both
highly discriminative across individual points, and, at the same time, lead to
robust and consistent correspondences, through simple proximity queries. Our
framework is general and is applicable to local feature learning in both the 3D
and 2D domains. We demonstrate the superiority of our approach through
extensive experiments on a wide range of challenging matching benchmarks,
including 3D non-rigid shape correspondence and 2D image keypoint matching.
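
To make the described combination concrete, below is a minimal PyTorch sketch. It assumes an InfoNCE-style pointwise contrastive loss, a graph-Laplacian (Dirichlet) smoothness term as one plausible form of the geometric regularization, and nearest-neighbor proximity queries for the final matching; the paper's exact losses, graph construction, and network are not reproduced here, so all names and weights below are assumptions.

```python
# Minimal sketch: contrastive point-feature learning plus a smoothness
# regularizer, with matching via nearest-neighbor proximity queries.
# Hypothetical instance: InfoNCE over corresponding points plus a Dirichlet
# (graph-Laplacian) energy; the paper's exact regularizers may differ.
import torch
import torch.nn.functional as F

def infonce_loss(feat_x, feat_y, tau=0.07):
    """Pointwise contrastive loss: row i of feat_x should match row i of feat_y."""
    fx = F.normalize(feat_x, dim=1)          # (N, D) features on shape X
    fy = F.normalize(feat_y, dim=1)          # (N, D) corresponding features on Y
    logits = fx @ fy.t() / tau               # (N, N) similarity matrix
    target = torch.arange(fx.shape[0], device=fx.device)
    return F.cross_entropy(logits, target)

def dirichlet_energy(feat, edges, weights):
    """Smoothness term: penalize feature variation across mesh/graph edges."""
    diff = feat[edges[:, 0]] - feat[edges[:, 1]]   # (E, D) per-edge differences
    return (weights * diff.pow(2).sum(dim=1)).mean()

def nn_match(feat_x, feat_y):
    """Correspondences by a simple proximity query in feature space."""
    d = torch.cdist(F.normalize(feat_x, dim=1), F.normalize(feat_y, dim=1))
    return d.argmin(dim=1)                   # index on Y for each point of X

# Toy usage with random stand-ins for learned features and graph edges.
N, D, E = 500, 64, 2000
feat_x = torch.randn(N, D, requires_grad=True)
feat_y = torch.randn(N, D)
edges = torch.randint(0, N, (E, 2))
weights = torch.ones(E)
loss = infonce_loss(feat_x, feat_y) + 0.1 * dirichlet_energy(feat_x, edges, weights)
loss.backward()
matches = nn_match(feat_x.detach(), feat_y)  # final proximity-query matching
```

The Dirichlet term couples the features of neighboring points, supplying exactly the smoothness that a purely combinatorial contrastive loss lacks.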
Related papers
- GSSF: Generalized Structural Sparse Function for Deep Cross-modal Metric Learning [51.677086019209554]
We propose a Generalized Structural Sparse Function to capture powerful relationships across modalities for pair-wise similarity learning.
The distance metric combines two structural forms, diagonal and block-diagonal terms.
Experiments on cross-modal and two extra uni-modal retrieval tasks have validated its superiority and flexibility.
arXiv Detail & Related papers (2024-10-20T03:45:50Z)
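
As a loose illustration of the diagonal and block-diagonal terms mentioned in the GSSF entry above, a structured Mahalanobis-style distance could look as follows; this is an assumption-laden stand-in, not GSSF's actual function.

```python
# Generic sketch of a learnable distance with diagonal and block-diagonal
# structure, illustrating the kind of structural sparsity the GSSF summary
# mentions. All names and shapes are assumptions for illustration.
import torch

def structured_distance(x, y, diag_w, block_w, block_size=8):
    """Squared distance with a diagonal term plus per-block mixing terms."""
    d = x - y                                        # (B, D) paired embeddings
    diag_term = (diag_w * d.pow(2)).sum(dim=1)       # diagonal metric: per-dim weights
    blocks = d.view(d.shape[0], -1, block_size)      # (B, D/block, block)
    mixed = torch.einsum('bkd,kde,bke->bk', blocks, block_w, blocks)
    return diag_term + mixed.sum(dim=1)              # block-diagonal metric term

B, D, S = 4, 64, 8
x, y = torch.randn(B, D), torch.randn(B, D)
diag_w = torch.rand(D)                               # learnable in practice
block_w = torch.eye(S).repeat(D // S, 1, 1)          # (D/S, S, S) PSD blocks
dist = structured_distance(x, y, diag_w, block_w)    # (B,) pairwise distances
```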
- Self-supervised Learning of LiDAR 3D Point Clouds via 2D-3D Neural Calibration [107.61458720202984]
This paper introduces a novel self-supervised learning framework for enhancing 3D perception in autonomous driving scenes.
We propose a learnable transformation alignment to bridge the domain gap between image and point cloud data.
We establish dense 2D-3D correspondences to estimate the rigid pose.
arXiv Detail & Related papers (2024-01-23T02:41:06Z)
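
For the pose step in the LiDAR calibration entry above, a standard way to turn dense 2D-3D correspondences into a rigid pose is robust PnP. The sketch below uses OpenCV on synthetic data as a generic illustration; it is not the paper's learnable alignment module, and the intrinsics and noise-free matches are fabricated for the demo.

```python
# Hedged sketch: recovering a rigid camera pose from dense 2D-3D
# correspondences with standard robust PnP (OpenCV). This illustrates the
# general step, not the paper's specific learnable alignment.
import cv2
import numpy as np

pts_3d = np.random.rand(200, 3).astype(np.float64)        # toy LiDAR points
K = np.array([[720.0, 0.0, 640.0],
              [0.0, 720.0, 360.0],
              [0.0, 0.0, 1.0]])                            # assumed intrinsics
# Project with a known pose to fabricate matching 2D keypoints for the demo.
rvec_gt = np.array([0.1, -0.05, 0.02])
tvec_gt = np.array([0.2, 0.0, 3.0])
pts_2d, _ = cv2.projectPoints(pts_3d, rvec_gt, tvec_gt, K, None)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    pts_3d, pts_2d, K, None, reprojectionError=2.0)
assert ok
R, _ = cv2.Rodrigues(rvec)     # rotation matrix of the estimated rigid pose
```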
- Q-REG: End-to-End Trainable Point Cloud Registration with Surface Curvature [81.25511385257344]
We present a novel solution, Q-REG, which utilizes rich geometric information to estimate the rigid pose from a single correspondence.
Q-REG allows the robust estimation to be formalized as an exhaustive search, enabling end-to-end training.
We demonstrate in the experiments that Q-REG is agnostic to the correspondence matching method and provides consistent improvement both when used only in inference and in end-to-end training.
arXiv Detail & Related papers (2023-09-27T20:58:53Z)
- Noisy-Correspondence Learning for Text-to-Image Person Re-identification [50.07634676709067]
We propose a novel Robust Dual Embedding method (RDE) to learn robust visual-semantic associations even with noisy correspondences.
Our method achieves state-of-the-art results both with and without synthetic noisy correspondences on three datasets.
arXiv Detail & Related papers (2023-08-19T05:34:13Z)
- Explicit Correspondence Matching for Generalizable Neural Radiance Fields [49.49773108695526]
We present a new NeRF method that generalizes to unseen scenarios and performs novel view synthesis with as few as two source views.
The explicit correspondence matching is quantified with the cosine similarity between image features sampled at the 2D projections of a 3D point on different views.
Our method achieves state-of-the-art results on different evaluation settings, with the experiments showing a strong correlation between our learned cosine feature similarity and volume density.
arXiv Detail & Related papers (2023-04-24T17:46:01Z)
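
The cross-view matching cost in the NeRF entry above reduces to a small computation: project a 3D point into each view, sample the feature maps at the projections, and take the cosine similarity. A hedged PyTorch sketch, with all poses, shapes, and helper names assumed for illustration:

```python
# Hedged sketch of cross-view feature consistency: project a 3D point into
# two views, bilinearly sample feature maps there, and score the match by
# cosine similarity, as the summary above describes.
import torch
import torch.nn.functional as F

def sample_feature(feat_map, pix, height, width):
    """Bilinearly sample a (C, H, W) feature map at pixel coordinates (x, y)."""
    gx = 2 * pix[0] / (width - 1) - 1        # normalize x to [-1, 1]
    gy = 2 * pix[1] / (height - 1) - 1       # normalize y to [-1, 1]
    grid = torch.stack([gx, gy]).view(1, 1, 1, 2)
    return F.grid_sample(feat_map[None], grid, align_corners=True).view(-1)

def project(point_3d, K, R, t):
    """Pinhole projection of a world point into a view with pose (R, t)."""
    cam = K @ (R @ point_3d + t)
    return cam[:2] / cam[2]

C, H, W = 32, 60, 80
feat_a, feat_b = torch.randn(C, H, W), torch.randn(C, H, W)
K = torch.tensor([[100.0, 0.0, 40.0], [0.0, 100.0, 30.0], [0.0, 0.0, 1.0]])
R, t = torch.eye(3), torch.tensor([0.0, 0.0, 2.0])          # assumed second pose
x = torch.tensor([0.1, -0.2, 1.0])                          # a 3D sample point

fa = sample_feature(feat_a, project(x, K, torch.eye(3), torch.zeros(3)), H, W)
fb = sample_feature(feat_b, project(x, K, R, t), H, W)
similarity = F.cosine_similarity(fa, fb, dim=0)  # high value: consistent match
```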
- LFM-3D: Learnable Feature Matching Across Wide Baselines Using 3D Signals [9.201550006194994]
Learnable matchers often underperform when only small regions of co-visibility exist between image pairs.
We propose LFM-3D, a Learnable Feature Matching framework that uses models based on graph neural networks.
We show that the resulting improved correspondences lead to much higher relative posing accuracy for in-the-wild image pairs.
arXiv Detail & Related papers (2023-03-22T17:46:27Z)
- DDM-NET: End-to-end learning of keypoint feature Detection, Description and Matching for 3D localization [34.66510265193038]
We propose an end-to-end framework that jointly learns keypoint detection, descriptor representation and cross-frame matching.
We design a self-supervised image warping correspondence loss for both feature detection and matching.
We also propose a new loss to robustly handle both definite inlier/outlier matches and less-certain matches.
arXiv Detail & Related papers (2022-12-08T21:43:56Z)
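
The self-supervised warping loss in the DDM-NET entry above can be illustrated generically: warp keypoints by a known homography so ground-truth matches come for free, then pull corresponding descriptors together. A hypothetical sketch, not DDM-NET's exact loss:

```python
# Generic sketch of a self-supervised image-warping correspondence loss.
# A hypothetical stand-in for DDM-NET's loss; names, shapes, and the cosine
# objective are assumptions for illustration.
import torch
import torch.nn.functional as F

def warp_points(pts, H_mat):
    """Apply a 3x3 homography to (N, 2) pixel coordinates."""
    ph = torch.cat([pts, torch.ones(pts.shape[0], 1)], dim=1) @ H_mat.t()
    return ph[:, :2] / ph[:, 2:3]

def warping_correspondence_loss(desc_a, desc_b, pts_a, H_mat):
    """Penalize descriptor disagreement at pixel pairs related by the warp."""
    ia = pts_a.round().long()
    ib = warp_points(pts_a, H_mat).round().long()
    fa = desc_a[:, ia[:, 1], ia[:, 0]]      # (C, N) descriptors in view a
    fb = desc_b[:, ib[:, 1], ib[:, 0]]      # (C, N) at the warped locations
    return (1 - F.cosine_similarity(fa, fb, dim=0)).mean()

C, H, W = 16, 48, 64
desc_a, desc_b = torch.randn(C, H, W), torch.randn(C, H, W)
pts = torch.rand(32, 2) * torch.tensor([W / 2.0, H / 2.0]) + 8  # keypoints
H_mat = torch.eye(3)
H_mat[0, 2] = 3.0                                               # known x-shift
loss = warping_correspondence_loss(desc_a, desc_b, pts, H_mat)
```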
- PointDSC: Robust Point Cloud Registration using Deep Spatial Consistency [38.93610732090426]
We present PointDSC, a novel deep neural network that explicitly incorporates spatial consistency for pruning outlier correspondences.
Our method outperforms the state-of-the-art hand-crafted and learning-based outlier rejection approaches on several real-world datasets.
arXiv Detail & Related papers (2021-03-09T14:56:08Z)
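
The spatial consistency that PointDSC exploits has a classic geometric core: a rigid transform preserves pairwise distances, so putative 3D-3D matches whose distances disagree with most others are likely outliers. A minimal NumPy sketch of this length-consistency check (PointDSC itself learns this with a deep network):

```python
# Hedged sketch of the spatial-consistency idea behind outlier pruning:
# the classic length-consistency check, not PointDSC's learned network.
import numpy as np

def spatial_consistency_scores(src, dst, sigma=0.1):
    """src, dst: (N, 3) matched points. Returns per-match compatibility scores."""
    d_src = np.linalg.norm(src[:, None] - src[None, :], axis=-1)  # (N, N)
    d_dst = np.linalg.norm(dst[:, None] - dst[None, :], axis=-1)
    compat = np.exp(-((d_src - d_dst) ** 2) / (2 * sigma ** 2))   # pairwise agreement
    return compat.mean(axis=1)            # matches compatible with many others

# Toy usage: inliers share one rigid motion, a few matches are corrupted.
rng = np.random.default_rng(0)
src = rng.random((100, 3))
dst = src + np.array([0.5, -0.2, 0.1])    # pure translation as the rigid motion
dst[:10] = rng.random((10, 3))            # inject 10 outlier correspondences
keep = spatial_consistency_scores(src, dst) > 0.5
```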
- S2DNet: Learning Accurate Correspondences for Sparse-to-Dense Feature Matching [36.48376198922595]
S2DNet is a novel feature matching pipeline designed and trained to efficiently establish robust and accurate correspondences.
We show that S2DNet achieves state-of-the-art results on the HPatches benchmark, as well as on several long-term visual localization datasets.
arXiv Detail & Related papers (2020-04-03T17:04:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.