GMMLoc: Structure Consistent Visual Localization with Gaussian Mixture Models
- URL: http://arxiv.org/abs/2006.13670v2
- Date: Mon, 5 Oct 2020 11:07:53 GMT
- Title: GMMLoc: Structure Consistent Visual Localization with Gaussian Mixture Models
- Authors: Huaiyang Huang, Haoyang Ye, Yuxiang Sun, Ming Liu
- Abstract summary: We present a method that tracks a camera in a prior map modelled by the Gaussian Mixture Model (GMM)
With the pose estimated by the front-end initially, the local visual observations and map components are associated efficiently.
We show how our system can provide a centimeter-level localization accuracy with only trivial computational overhead.
- Score: 23.72910988500612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Incorporating prior structure information into the visual state estimation
could generally improve the localization performance. In this letter, we aim to
address the paradox between accuracy and efficiency in coupling visual factors
with structure constraints. To this end, we present a cross-modality method
that tracks a camera in a prior map modelled by the Gaussian Mixture Model
(GMM). With the pose estimated by the front-end initially, the local visual
observations and map components are associated efficiently, and the visual
structure from the triangulation is refined simultaneously. By introducing the
hybrid structure factors into the joint optimization, the camera poses are
bundle-adjusted with the local visual structure. By evaluating our complete
system, namely GMMLoc, on the public dataset, we show how our system can
provide a centimeter-level localization accuracy with only trivial
computational overhead. In addition, the comparative studies with the
state-of-the-art vision-dominant state estimators demonstrate the competitive
performance of our method.
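The abstract describes associating local visual observations (triangulated landmarks) with components of a prior GMM map. A common way to realize such a data association, sketched below under our own assumptions (the paper does not publish this exact routine, and the function name, gating threshold, and interfaces here are hypothetical), is to pick the GMM component with the smallest squared Mahalanobis distance to the landmark, rejecting matches outside a chi-square gate:

```python
import numpy as np

def associate_landmark(point, means, covs, max_maha_sq=9.0):
    """Illustrative landmark-to-GMM-component association.

    point: (3,) triangulated landmark position
    means: (K, 3) GMM component means
    covs:  (K, 3, 3) GMM component covariances
    Returns (component index, squared Mahalanobis distance), or
    (None, None) if no component passes the gating threshold.
    """
    best_idx, best_d2 = None, np.inf
    for k, (mu, cov) in enumerate(zip(means, covs)):
        diff = point - mu
        # Squared Mahalanobis distance: diff^T * cov^-1 * diff
        d2 = diff @ np.linalg.solve(cov, diff)
        if d2 < best_d2:
            best_idx, best_d2 = k, d2
    if best_d2 > max_maha_sq:  # chi-square gate (3 DoF, ~3-sigma)
        return None, None
    return best_idx, best_d2
```

With the association in hand, the accepted component's mean and covariance could serve as a structure factor constraining the landmark in the joint bundle adjustment; the details of the hybrid factors are given in the paper itself.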
Related papers
- OSMLoc: Single Image-Based Visual Localization in OpenStreetMap with Geometric and Semantic Guidances [11.085165252259042]
OSMLoc is a brain-inspired single-image visual localization method with semantic and geometric guidance to improve accuracy, robustness, and generalization ability.
To validate the proposed OSMLoc, we collect a worldwide cross-area and cross-condition (CC) benchmark for extensive evaluation.
arXiv Detail & Related papers (2024-11-13T14:59:00Z)
- Simultaneous Identification of Sparse Structures and Communities in Heterogeneous Graphical Models [8.54401530955314]
We introduce a novel decomposition of the underlying graphical structure into a sparse part and low-rank diagonal blocks.
We propose a three-stage estimation procedure with a fast and efficient algorithm for the identification of the sparse structure and communities.
arXiv Detail & Related papers (2024-05-16T06:38:28Z)
- Efficient Multi-View Graph Clustering with Local and Global Structure Preservation [59.49018175496533]
We propose a novel anchor-based multi-view graph clustering framework termed Efficient Multi-View Graph Clustering with Local and Global Structure Preservation (EMVGC-LG)
Specifically, EMVGC-LG jointly optimizes anchor construction and graph learning to enhance the clustering quality.
In addition, EMVGC-LG inherits the linear complexity of existing AMVGC methods with respect to the sample number.
arXiv Detail & Related papers (2023-08-31T12:12:30Z)
- Adaptive Local-Component-aware Graph Convolutional Network for One-shot Skeleton-based Action Recognition [54.23513799338309]
We present an Adaptive Local-Component-aware Graph Convolutional Network for skeleton-based action recognition.
Our method provides a stronger representation than the global embedding and helps our model reach state-of-the-art performance.
arXiv Detail & Related papers (2022-09-21T02:33:07Z)
- A Model for Multi-View Residual Covariances based on Perspective Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
arXiv Detail & Related papers (2022-02-01T21:21:56Z)
- Tight-Integration of Feature-Based Relocalization in Monocular Direct Visual Odometry [49.89611704653707]
We propose a framework for integrating map-based relocalization into online visual odometry.
We integrate image features into Direct Sparse Odometry (DSO) and rely on feature matching to associate online visual odometry with a previously built map.
arXiv Detail & Related papers (2021-02-01T21:41:05Z)
- Geometric Structure Aided Visual Inertial Localization [24.42071242531681]
We present a complete visual localization system based on a hybrid map representation to reduce the computational cost and increase the positioning accuracy.
For batch optimization, instead of using visual factors, we develop a module to estimate a pose prior from the instant localization results.
The experimental results on the EuRoC MAV dataset demonstrate a competitive performance compared to the state of the art.
arXiv Detail & Related papers (2020-11-09T03:48:39Z)
- Out-of-distribution Generalization via Partial Feature Decorrelation [72.96261704851683]
We present a novel Partial Feature Decorrelation Learning (PFDL) algorithm, which jointly optimizes a feature decomposition network and the target image classification model.
The experiments on real-world datasets demonstrate that our method can improve the backbone model's accuracy on OOD image classification datasets.
arXiv Detail & Related papers (2020-07-30T05:48:48Z)
- Image Matching across Wide Baselines: From Paper to Practice [80.9424750998559]
We introduce a comprehensive benchmark for local features and robust estimation algorithms.
Our pipeline's modular structure allows easy integration, configuration, and combination of different methods.
We show that with proper settings, classical solutions may still outperform the perceived state of the art.
arXiv Detail & Related papers (2020-03-03T15:20:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.