Improving Feature-based Visual Localization by Geometry-Aided Matching
- URL: http://arxiv.org/abs/2211.08712v1
- Date: Wed, 16 Nov 2022 07:02:12 GMT
- Title: Improving Feature-based Visual Localization by Geometry-Aided Matching
- Authors: Hailin Yu, Youji Feng, Weicai Ye, Mingxuan Jiang, Hujun Bao, Guofeng
Zhang
- Abstract summary: We introduce a novel 2D-3D matching method, Geometry-Aided Matching (GAM), which uses both appearance information and geometric context to improve 2D-3D feature matching.
GAM can greatly strengthen the recall of 2D-3D matches while maintaining high precision.
Our proposed localization method achieves state-of-the-art results on multiple visual localization datasets.
- Score: 21.1967752160412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feature matching is an essential step in visual localization, where the
accuracy of the camera pose is mainly determined by the established 2D-3D
correspondences. Due to noise, solving the camera pose accurately requires a
sufficient number of well-distributed 2D-3D correspondences. Existing 2D-3D
feature matching is typically achieved by finding the nearest neighbors in the
feature space, and then removing the outliers by some hand-crafted heuristics.
However, this may lead to a large number of potentially true matches being
missed or the established correct matches being filtered out. In this work, we
introduce a novel 2D-3D matching method, Geometry-Aided Matching (GAM), which
uses both appearance information and geometric context to improve 2D-3D feature
matching. GAM can greatly strengthen the recall of 2D-3D matches while
maintaining high precision. We insert GAM into a hierarchical visual
localization pipeline and show that GAM can effectively improve the robustness
and accuracy of localization. Extensive experiments show that GAM can find more
correct matches than hand-crafted heuristics and learning baselines. Our
proposed localization method achieves state-of-the-art results on multiple
visual localization datasets. Experiments on Cambridge Landmarks dataset show
that our method outperforms the existing state-of-the-art methods and is six
times faster than the top-performing method.
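The baseline the abstract describes, matching by nearest neighbors in feature space and then filtering with a hand-crafted heuristic, can be sketched as follows. This is a minimal illustration of the conventional pipeline that GAM improves upon, not the paper's method; the function name and the use of Lowe's ratio test as the heuristic filter are assumptions for the example.

```python
import numpy as np

def match_2d3d_baseline(desc_2d, desc_3d, ratio=0.8):
    """Conventional 2D-3D matching: nearest neighbor in descriptor space,
    filtered by a hand-crafted heuristic (here, Lowe's ratio test).

    desc_2d: (N, D) array of 2D keypoint descriptors from the query image
    desc_3d: (M, D) array of descriptors attached to 3D map points
    Returns a list of (query_index, point_index) pairs.
    """
    matches = []
    for i, d in enumerate(desc_2d):
        # distances from this query descriptor to every 3D point descriptor
        dists = np.linalg.norm(desc_3d - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # ratio test: keep the match only if it is clearly better than
        # the runner-up; this is exactly the kind of heuristic that can
        # reject potentially true matches, which GAM aims to recover
        # by also reasoning about geometric context
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

The surviving correspondences would then be fed to a PnP solver inside RANSAC to estimate the camera pose; the recall lost at this filtering stage is what limits pose accuracy downstream.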
Related papers
- Grounding Image Matching in 3D with MASt3R [8.14650201701567]
We propose to cast matching as a 3D task with DUSt3R, a powerful 3D reconstruction framework based on Transformers.
We propose to augment the DUSt3R network with a new head that outputs dense local features, trained with an additional matching loss.
Our approach, coined MASt3R, significantly outperforms the state of the art on multiple matching tasks.
arXiv Detail & Related papers (2024-06-14T06:46:30Z) - Learning to Produce Semi-dense Correspondences for Visual Localization [11.415451542216559]
This study addresses the challenge of performing visual localization in demanding conditions such as night-time scenarios, adverse weather, and seasonal changes.
We propose a novel method that extracts reliable semi-dense 2D-3D matching points based on dense keypoint matches.
The network utilizes both geometric and visual cues to effectively infer 3D coordinates for unobserved keypoints from the observed ones.
arXiv Detail & Related papers (2024-02-13T10:40:10Z) - EP2P-Loc: End-to-End 3D Point to 2D Pixel Localization for Large-Scale
Visual Localization [44.05930316729542]
We propose EP2P-Loc, a novel large-scale visual localization method for 3D point clouds.
To increase the number of inliers, we propose a simple algorithm to remove invisible 3D points in the image.
For the first time in this task, we employ a differentiable PnP solver for end-to-end training.
arXiv Detail & Related papers (2023-09-14T07:06:36Z) - CheckerPose: Progressive Dense Keypoint Localization for Object Pose
Estimation with Graph Neural Network [66.24726878647543]
Estimating the 6-DoF pose of a rigid object from a single RGB image is a crucial yet challenging task.
Recent studies have shown the great potential of dense correspondence-based solutions.
We propose a novel pose estimation algorithm named CheckerPose, which improves on three main aspects.
arXiv Detail & Related papers (2023-03-29T17:30:53Z) - LFM-3D: Learnable Feature Matching Across Wide Baselines Using 3D
Signals [9.201550006194994]
Learnable matchers often underperform when there exist only small regions of co-visibility between image pairs.
We propose LFM-3D, a Learnable Feature Matching framework that uses models based on graph neural networks.
We show that the resulting improved correspondences lead to much higher relative posing accuracy for in-the-wild image pairs.
arXiv Detail & Related papers (2023-03-22T17:46:27Z) - RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects [68.85305626324694]
Ray-marching in Camera Space (RiCS) is a new method that represents the self-occlusions of 3D foreground objects as a 2D self-occlusion map.
We show that our representation map not only allows us to enhance the image quality but also to model temporally coherent complex shadow effects.
arXiv Detail & Related papers (2022-05-14T05:35:35Z) - Multi-initialization Optimization Network for Accurate 3D Human Pose and
Shape Estimation [75.44912541912252]
We propose a three-stage framework named Multi-Initialization Optimization Network (MION).
In the first stage, we strategically select different coarse 3D reconstruction candidates which are compatible with the 2D keypoints of the input sample.
In the second stage, we design a mesh refinement transformer (MRT) to respectively refine each coarse reconstruction result via a self-attention mechanism.
Finally, a Consistency Estimation Network (CEN) is proposed to find the best result from multiple candidates by evaluating whether the visual evidence in the RGB image matches a given 3D reconstruction.
arXiv Detail & Related papers (2021-12-24T02:43:58Z) - Soft Expectation and Deep Maximization for Image Feature Detection [68.8204255655161]
We propose SEDM, an iterative semi-supervised learning process that flips the question and first looks for repeatable 3D points, then trains a detector to localize them in image space.
Our results show that this new model trained using SEDM is able to better localize the underlying 3D points in a scene.
arXiv Detail & Related papers (2021-04-21T00:35:32Z) - Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled
Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2d detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z) - Learning 2D-3D Correspondences To Solve The Blind Perspective-n-Point
Problem [98.92148855291363]
This paper proposes a deep CNN model which simultaneously solves for both the 6-DoF absolute camera pose and 2D-3D correspondences.
Tests on both real and simulated data have shown that our method substantially outperforms existing approaches.
arXiv Detail & Related papers (2020-03-15T04:17:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.