Geometry-aware Feature Matching for Large-Scale Structure from Motion
- URL: http://arxiv.org/abs/2409.02310v3
- Date: Wed, 25 Sep 2024 19:15:04 GMT
- Title: Geometry-aware Feature Matching for Large-Scale Structure from Motion
- Authors: Gonglin Chen, Jinsen Wu, Haiwei Chen, Wenbin Teng, Zhiyuan Gao, Andrew Feng, Rongjun Qin, Yajie Zhao
- Abstract summary: We introduce geometry cues in addition to color cues to fill gaps when image overlap is sparse in large-scale scenarios.
Our approach ensures that the denser correspondences from detector-free methods are geometrically consistent and more accurate.
It outperforms state-of-the-art feature matching methods on benchmark datasets.
- Score: 10.645087195983201
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Establishing consistent and dense correspondences across multiple images is crucial for Structure from Motion (SfM) systems. Significant view changes, such as air-to-ground with very sparse view overlap, pose an even greater challenge to the correspondence solvers. We present a novel optimization-based approach that significantly enhances existing feature matching methods by introducing geometry cues in addition to color cues. This helps fill gaps when there is less overlap in large-scale scenarios. Our method formulates geometric verification as an optimization problem, guiding feature matching within detector-free methods and using sparse correspondences from detector-based methods as anchor points. By enforcing geometric constraints via the Sampson Distance, our approach ensures that the denser correspondences from detector-free methods are geometrically consistent and more accurate. This hybrid strategy significantly improves correspondence density and accuracy, mitigates multi-view inconsistencies, and leads to notable advancements in camera pose accuracy and point cloud density. It outperforms state-of-the-art feature matching methods on benchmark datasets and enables feature matching in challenging extreme large-scale settings.
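For illustration, below is a minimal sketch of a Sampson-distance check on putative correspondences given an estimated fundamental matrix. This is a generic NumPy example of the standard Sampson distance, not the authors' implementation; the function name, variable names, and the pixel threshold are illustrative assumptions.

```python
# Hedged sketch: generic Sampson-distance filtering of putative matches.
# Not the paper's implementation; names and the 1.0-pixel threshold are assumptions.
import numpy as np

def sampson_distance(F, pts1, pts2):
    """Sampson distance of correspondences pts1[i] <-> pts2[i] w.r.t. fundamental matrix F.

    F    : (3, 3) fundamental matrix
    pts1 : (N, 2) pixel coordinates in image 1
    pts2 : (N, 2) pixel coordinates in image 2
    """
    # Homogeneous coordinates.
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])  # (N, 3)
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])  # (N, 3)

    Fx1 = x1 @ F.T        # epipolar lines F x1 in image 2, shape (N, 3)
    Ftx2 = x2 @ F         # epipolar lines F^T x2 in image 1, shape (N, 3)
    algebraic = np.sum(x2 * Fx1, axis=1)  # x2^T F x1, shape (N,)

    denom = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return algebraic**2 / denom

# Usage sketch: keep only matches whose Sampson distance is below a pixel threshold.
# F, pts1, pts2 would come from a sparse, detector-based anchor stage (e.g., RANSAC).
# inliers = sampson_distance(F, pts1, pts2) < 1.0
```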
Related papers
- A Consistency-Aware Spot-Guided Transformer for Versatile and Hierarchical Point Cloud Registration [9.609585217048664]
We develop a consistency-aware spot-guided Transformer (CAST).
CAST incorporates a spot-guided cross-attention module to avoid interfering with irrelevant areas.
A lightweight fine matching module for both sparse keypoints and dense features can estimate the transformation accurately.
arXiv Detail & Related papers (2024-10-14T08:48:25Z)
- SPARE: Symmetrized Point-to-Plane Distance for Robust Non-Rigid Registration [76.40993825836222]
We propose SPARE, a novel formulation that utilizes a symmetrized point-to-plane distance for robust non-rigid registration.
The proposed method greatly improves the accuracy of non-rigid registration problems and maintains relatively high solution efficiency.
arXiv Detail & Related papers (2024-05-30T15:55:04Z)
- RGM: A Robust Generalizable Matching Model [49.60975442871967]
We propose a deep model for sparse and dense matching, termed RGM (Robust Generalist Matching).
To narrow the gap between synthetic training samples and real-world scenarios, we build a new, large-scale dataset with sparse correspondence ground truth.
We are able to mix up various dense and sparse matching datasets, significantly improving the training diversity.
arXiv Detail & Related papers (2023-10-18T07:30:08Z)
- SIGMA: Scale-Invariant Global Sparse Shape Matching [50.385414715675076]
We propose a novel mixed-integer programming (MIP) formulation for generating precise sparse correspondences for non-rigid shapes.
We show state-of-the-art results for sparse non-rigid matching on several challenging 3D datasets.
arXiv Detail & Related papers (2023-08-16T14:25:30Z)
- Quantity-Aware Coarse-to-Fine Correspondence for Image-to-Point Cloud Registration [4.954184310509112]
Image-to-point cloud registration aims to determine the relative camera pose between an RGB image and a reference point cloud.
Matching individual points with pixels can be inherently ambiguous due to modality gaps.
We propose a framework to capture quantity-aware correspondences between local point sets and pixel patches.
arXiv Detail & Related papers (2023-07-14T03:55:54Z)
- GraphFit: Learning Multi-scale Graph-Convolutional Representation for Point Cloud Normal Estimation [31.40738037512243]
We propose a precise and efficient normal estimation method for unstructured 3D point clouds.
We learn graph convolutional feature representation for normal estimation, which emphasizes more local neighborhood geometry.
Our method outperforms competitors with the state-of-the-art accuracy on various benchmark datasets.
arXiv Detail & Related papers (2022-07-23T10:29:26Z)
- Self-supervised Geometric Perception [96.89966337518854]
Self-supervised geometric perception (SGP) is a framework to learn a feature descriptor for correspondence matching without any ground-truth geometric model labels.
We show that SGP achieves state-of-the-art performance that is on-par or superior to the supervised oracles trained using ground-truth labels.
arXiv Detail & Related papers (2021-03-04T15:34:43Z)
- Isometric Multi-Shape Matching [50.86135294068138]
Finding correspondences between shapes is a fundamental problem in computer vision and graphics.
While isometries are often studied in shape correspondence problems, they have not been considered explicitly in the multi-matching setting.
We present a suitable optimisation algorithm for solving our formulation and provide a convergence and complexity analysis.
arXiv Detail & Related papers (2020-12-04T15:58:34Z)
- Making Affine Correspondences Work in Camera Geometry Computation [62.7633180470428]
Local features provide region-to-region rather than point-to-point correspondences.
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
arXiv Detail & Related papers (2020-07-20T12:07:48Z)
- ProAlignNet: Unsupervised Learning for Progressively Aligning Noisy Contours [12.791313859673187]
"ProAlignNet" accounts for large scale misalignments and complex transformations between the contour shapes.
It learns by training with a novel loss function derived as an upper bound of a proximity-sensitive, local shape-dependent similarity metric.
In two real-world applications, the proposed models consistently outperform state-of-the-art methods.
arXiv Detail & Related papers (2020-05-23T14:56:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.