Probabilistic Visual Place Recognition for Hierarchical Localization
- URL: http://arxiv.org/abs/2105.03091v1
- Date: Fri, 7 May 2021 07:39:14 GMT
- Title: Probabilistic Visual Place Recognition for Hierarchical Localization
- Authors: Ming Xu, Niko Sünderhauf, Michael Milford
- Abstract summary: We propose two methods which adapt image retrieval techniques used for visual place recognition to the Bayesian state estimation formulation for localization.
We demonstrate significant improvements to the localization accuracy of the coarse localization stage using our methods, whilst retaining state-of-the-art performance under severe appearance change.
- Score: 22.703331060822862
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visual localization techniques often comprise a hierarchical localization
pipeline, with a visual place recognition module used as a coarse localizer to
initialize a pose refinement stage. While improving the pose refinement step
has been the focus of much recent research, most work on the coarse
localization stage has focused on improvements such as increased invariance to
appearance change, rather than on tightening its loose error tolerances. In
this letter, we propose two methods which adapt image retrieval techniques used
for visual place recognition to the Bayesian state estimation formulation for
localization. We demonstrate significant improvements to the localization
accuracy of the coarse localization stage using our methods, whilst retaining
state-of-the-art performance under severe appearance change. Using extensive
experimentation on the Oxford RobotCar dataset, results show that our approach
outperforms comparable state-of-the-art methods in terms of precision-recall
performance for localizing image sequences. In addition, our proposed methods
provide the flexibility to contextually scale localization latency in order to
achieve these improvements. The improved initial localization estimate opens up
the possibility of both improved overall localization performance and modified
pose refinement techniques that leverage this improved spatial prior.
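The abstract describes casting place recognition as Bayesian state estimation, where image-retrieval similarity scores feed a recursive filter over map places. A minimal discrete Bayes-filter sketch of that idea is below; the transition matrix, softmax observation likelihood, and temperature are illustrative assumptions, not the paper's actual motion or measurement models.

```python
import numpy as np

def bayes_filter_step(belief, transition, similarities, temperature=1.0):
    """One predict-update step of a discrete Bayes filter over N map places.

    belief:       (N,) prior probability over places
    transition:   (N, N) motion model, transition[i, j] = P(next=j | current=i)
    similarities: (N,) image-retrieval similarity of the query to each place
    """
    # Predict: propagate the belief through the motion model.
    predicted = belief @ transition
    # Update: turn retrieval similarities into an observation likelihood
    # via a softmax (one simple choice; the paper's measurement model differs).
    likelihood = np.exp(similarities / temperature)
    likelihood /= likelihood.sum()
    posterior = predicted * likelihood
    return posterior / posterior.sum()

# Toy example: 5 places, robot tending to advance one place per step.
N = 5
transition = np.roll(np.eye(N), 1, axis=1) * 0.8 + np.eye(N) * 0.2
belief = np.full(N, 1.0 / N)                        # uniform prior
similarities = np.array([0.1, 0.2, 0.9, 0.3, 0.1])  # hypothetical retrieval scores
belief = bayes_filter_step(belief, transition, similarities)
print(belief.argmax())  # → 2, the place with the strongest retrieval score
```

Filtering over a sequence of such steps is what allows sequence-based localization to remain robust when any single retrieval result is ambiguous.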
Related papers
- SplatLoc: 3D Gaussian Splatting-based Visual Localization for Augmented Reality [50.179377002092416]
We propose an efficient visual localization method capable of high-quality rendering with fewer parameters.
Our method achieves superior or comparable rendering and localization performance to state-of-the-art implicit-based visual localization approaches.
arXiv Detail & Related papers (2024-09-21T08:46:16Z) - View Consistent Purification for Accurate Cross-View Localization [59.48131378244399]
This paper proposes a fine-grained self-localization method for outdoor robotics.
The proposed method addresses limitations in existing cross-view localization methods.
It is the first sparse visual-only method that enhances perception in dynamic environments.
arXiv Detail & Related papers (2023-08-16T02:51:52Z) - Adaptive Local-Component-aware Graph Convolutional Network for One-shot Skeleton-based Action Recognition [54.23513799338309]
We present an Adaptive Local-Component-aware Graph Convolutional Network for skeleton-based action recognition.
Our method provides a stronger representation than the global embedding and helps our model reach state-of-the-art.
arXiv Detail & Related papers (2022-09-21T02:33:07Z) - ImPosIng: Implicit Pose Encoding for Efficient Camera Pose Estimation [2.6808541153140077]
Implicit Pose Encoding (ImPosing) embeds images and camera poses into a common latent representation using two separate neural networks.
By evaluating candidates through the latent space in a hierarchical manner, the camera position and orientation are not directly regressed but refined.
arXiv Detail & Related papers (2022-05-05T13:33:25Z) - Probabilistic Appearance-Invariant Topometric Localization with New Place Awareness [23.615781318030454]
We present a new topometric localization system which incorporates full 3-dof odometry into the motion model and adds an "off-map" state within the state-estimation framework.
Our approach achieves major performance improvements over both existing and improved state-of-the-art systems.
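The summary above extends the same state-estimation framework with an explicit "off-map" state. A minimal sketch of that augmentation is shown below; the leave-map probability, the absorbing off-map assumption, and the flat off-map likelihood are all illustrative choices, not the paper's model.

```python
import numpy as np

def filter_with_off_map(belief, transition, similarities,
                        p_leave=0.01, off_map_lik=0.05):
    """Discrete Bayes filter over N map places plus one 'off-map' state.

    belief:     (N + 1,) prior; belief[-1] is the off-map probability.
    transition: (N, N) within-map motion model (rows sum to 1).
    """
    N = transition.shape[0]
    on_map, off_map = belief[:N], belief[N]
    # Predict: within-map motion, with a small chance of leaving the map;
    # for simplicity, the off-map state is treated as absorbing here.
    predicted = np.empty(N + 1)
    predicted[:N] = (on_map @ transition) * (1.0 - p_leave)
    predicted[N] = off_map + on_map.sum() * p_leave
    # Update: retrieval likelihood for map places, a flat (uninformative)
    # likelihood for the off-map state.
    lik = np.append(np.exp(similarities), off_map_lik)
    posterior = predicted * lik
    return posterior / posterior.sum()
```

When every retrieval score is low, the flat off-map likelihood lets probability mass accumulate in the off-map state instead of being forced onto the best of several poor map hypotheses.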
arXiv Detail & Related papers (2021-07-16T05:01:40Z) - Cross-Descriptor Visual Localization and Mapping [81.16435356103133]
Visual localization and mapping is the key technology underlying the majority of Mixed Reality and robotics systems.
We present three novel scenarios for localization and mapping which require the continuous update of feature representations.
Our data-driven approach is agnostic to the feature descriptor type, has low computational requirements, and scales linearly with the number of description algorithms.
arXiv Detail & Related papers (2020-12-02T18:19:51Z) - Domain-invariant Similarity Activation Map Contrastive Learning for Retrieval-based Long-term Visual Localization [30.203072945001136]
In this work, a general architecture is first formulated probabilistically to extract domain invariant feature through multi-domain image translation.
And then a novel gradient-weighted similarity activation mapping loss (Grad-SAM) is incorporated for finer localization with high accuracy.
Extensive experiments have been conducted to validate the effectiveness of the proposed approach on the CMU-Seasons dataset.
Our performance is on par with or even outperforms the state-of-the-art image-based localization baselines in medium or high precision.
arXiv Detail & Related papers (2020-09-16T14:43:22Z) - Domain Adaptation of Learned Features for Visual Localization [60.6817896667435]
We tackle the problem of visual localization under changing conditions, such as time of day, weather, and seasons.
Recent learned local features based on deep neural networks have shown superior performance over classical hand-crafted local features.
We present a novel and practical approach, where only a few examples are needed to reduce the domain gap.
arXiv Detail & Related papers (2020-08-21T05:17:32Z) - Multi-View Optimization of Local Feature Geometry [70.18863787469805]
We address the problem of refining the geometry of local image features from multiple views without known scene or camera geometry.
Our proposed method naturally complements the traditional feature extraction and matching paradigm.
We show that our method consistently improves the triangulation and camera localization performance for both hand-crafted and learned local features.
arXiv Detail & Related papers (2020-03-18T17:22:11Z) - Features for Ground Texture Based Localization -- A Survey [12.160708336715489]
Ground texture based vehicle localization using feature-based methods is a promising approach to achieve infrastructure-free high-accuracy localization.
We provide the first extensive evaluation of available feature extraction methods for this task, using separately taken image pairs as well as synthetic transformations.
We identify AKAZE, SURF and CenSurE as best performing keypoint detectors, and find pairings of CenSurE with the ORB, BRIEF and LATCH feature descriptors to achieve greatest success rates for incremental localization.
arXiv Detail & Related papers (2020-02-27T07:25:41Z) - Ground Texture Based Localization Using Compact Binary Descriptors [12.160708336715489]
Ground texture based localization is a promising approach to achieve high-accuracy positioning of vehicles.
We present a self-contained method that can be used for global localization as well as for subsequent local localization updates.
arXiv Detail & Related papers (2020-02-25T17:31:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.