Tightly-Coupled, Speed-aided Monocular Visual-Inertial Localization in Topological Map
- URL: http://arxiv.org/abs/2411.05497v1
- Date: Fri, 08 Nov 2024 11:55:27 GMT
- Title: Tightly-Coupled, Speed-aided Monocular Visual-Inertial Localization in Topological Map
- Authors: Chanuk Yang, Hayeon O, Kunsoo Huh
- Abstract summary: This paper proposes a novel algorithm for vehicle speed-aided monocular visual-inertial localization using a topological map.
The proposed system aims to address the limitations of existing methods that rely heavily on expensive sensors like GPS and LiDAR.
- Score: 0.7373617024876725
- Abstract: This paper proposes a novel algorithm for vehicle speed-aided monocular visual-inertial localization using a topological map. The proposed system aims to address the limitations of existing methods that rely heavily on expensive sensors like GPS and LiDAR by leveraging relatively inexpensive camera-based pose estimation. The topological map is generated offline from LiDAR point clouds and includes depth images, intensity images, and corresponding camera poses. This map is then used for real-time localization through correspondence matching between current camera images and the stored topological images. The system employs an Iterated Error State Kalman Filter (IESKF) for optimized pose estimation, incorporating image correspondences and vehicle speed measurements to enhance accuracy. Experimental results on both an open dataset and our own data collected in challenging scenarios, such as tunnels, demonstrate the proposed algorithm's superior performance in topological map generation and localization tasks.
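To make the filtering step concrete, below is a minimal sketch (not the authors' implementation) of an iterated error-state Kalman filter update that fuses a vehicle-speed measurement with reprojection residuals from 2D-3D correspondences against a topological-map keyframe. The 6-D state layout (position and velocity only), the known camera attitude, the pinhole intrinsics, and the noise values are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): an iterated Kalman filter update
# fusing (i) a vehicle-speed measurement and (ii) reprojection residuals
# from 2D-3D correspondences against a topological-map keyframe.
# The 6-D state [position, velocity], the known camera attitude, the
# intrinsics, and the noise values are illustrative assumptions.
import numpy as np


def project(K, R, t, pts_w):
    """Pinhole projection of world points; (R, t) maps world -> camera."""
    pc = (R @ pts_w.T).T + t              # points in the camera frame
    uv = (K @ pc.T).T
    return uv[:, :2] / uv[:, 2:3]         # (N, 2) pixel coordinates


def numerical_jacobian(h, x, eps=1e-6):
    """Finite-difference Jacobian of the measurement model h at x."""
    z0 = h(x)
    J = np.zeros((z0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (h(x + dx) - z0) / eps
    return J


def ieskf_update(x, P, h, z, R_meas, iters=5):
    """Iterated update: re-linearize h around the running estimate x_i."""
    x_i = x.copy()
    for _ in range(iters):
        H = numerical_jacobian(h, x_i)
        S = H @ P @ H.T + R_meas
        K_gain = P @ H.T @ np.linalg.inv(S)
        x_i = x + K_gain @ (z - h(x_i) - H @ (x - x_i))
    H = numerical_jacobian(h, x_i)
    P_new = (np.eye(x.size) - K_gain @ H) @ P
    return x_i, P_new


# --- toy usage -------------------------------------------------------
K_cam = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])
R_wc = np.eye(3)                                  # assumed known attitude
map_pts = np.array([[5.0, 1.0, 10.0],             # 3D points stored with
                    [-3.0, 0.5, 12.0],            # the matched keyframe
                    [2.0, -1.0, 8.0]])
true_pos = np.array([0.2, 0.0, 0.1])
uv_obs = project(K_cam, R_wc, -R_wc @ true_pos, map_pts)


def h_joint(x):
    """Stacked measurement: pixel projections + scalar vehicle speed."""
    t_cw = -R_wc @ x[:3]
    uv = project(K_cam, R_wc, t_cw, map_pts).ravel()
    speed = np.array([np.linalg.norm(x[3:6])])
    return np.concatenate([uv, speed])


x0 = np.zeros(6)
x0[3] = 1.0                                       # prior: ~1 m/s forward
P0 = np.eye(6) * 0.5
z = np.concatenate([uv_obs.ravel(), [1.05]])      # pixels + wheel speed (m/s)
R_meas = np.diag([2.0] * 6 + [0.05])              # pixel / speed noise
x_upd, P_upd = ieskf_update(x0, P0, h_joint, z, R_meas)
print("updated position:", x_upd[:3])
```

The re-linearization loop is what distinguishes the iterated update from a single EKF step: the measurement Jacobian is recomputed around the running estimate, which helps when the reprojection model is strongly nonlinear. In the paper's full system the error state also carries attitude and IMU biases; this sketch omits them for brevity.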
Related papers
- Weakly-supervised Camera Localization by Ground-to-satellite Image Registration [52.54992898069471]
We propose a weakly supervised learning strategy for ground-to-satellite image registration.
It derives positive and negative satellite images for each ground image.
We also propose a self-supervision strategy for cross-view image relative rotation estimation.
arXiv Detail & Related papers (2024-09-10T12:57:16Z) - 3DGS-ReLoc: 3D Gaussian Splatting for Map Representation and Visual ReLocalization [13.868258945395326]
This paper presents a novel system designed for 3D mapping and visual relocalization using 3D Gaussian Splatting.
Our proposed method uses LiDAR and camera data to create accurate and visually plausible representations of the environment.
arXiv Detail & Related papers (2024-03-17T23:06:12Z) - Colmap-PCD: An Open-source Tool for Fine Image-to-point cloud Registration [7.860297360803415]
We propose a novel cost-effective reconstruction pipeline that utilizes a pre-established LiDAR map as a fixed constraint.
Our method is the first to register images onto the point cloud map without requiring synchronous capture of camera and LiDAR data.
arXiv Detail & Related papers (2023-10-09T08:09:15Z) - Monocular BEV Perception of Road Scenes via Front-to-Top View Projection [57.19891435386843]
We present a novel framework that reconstructs a local map formed by road layout and vehicle occupancy in the bird's-eye view.
Our model runs at 25 FPS on a single GPU, which is efficient and applicable to real-time panoramic HD map reconstruction.
arXiv Detail & Related papers (2022-11-15T13:52:41Z) - Cross-View Image Sequence Geo-localization [6.555961698070275]
Cross-view geo-localization aims to estimate the GPS location of a query ground-view image.
Recent approaches use panoramic ground-view images to increase the range of visibility.
We present the first cross-view geo-localization method that works on a sequence of limited Field-Of-View images.
arXiv Detail & Related papers (2022-10-25T19:46:18Z) - ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild [57.37891682117178]
We present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence from pairwise optical flow.
A novel neural network architecture is proposed for processing irregular point trajectory data.
Experiments on the MPI Sintel dataset show that our system produces significantly more accurate camera trajectories.
arXiv Detail & Related papers (2022-07-19T09:19:45Z) - Beyond Cross-view Image Retrieval: Highly Accurate Vehicle Localization Using Satellite Image [91.29546868637911]
This paper addresses the problem of vehicle-mounted camera localization by matching a ground-level image with an overhead-view satellite map.
The key idea is to formulate the task as pose estimation and solve it by neural-net based optimization.
Experiments on standard autonomous vehicle localization datasets have confirmed the superiority of the proposed method.
arXiv Detail & Related papers (2022-04-10T19:16:58Z) - Accurate 3-DoF Camera Geo-Localization via Ground-to-Satellite Image Matching [102.39635336450262]
We address the problem of ground-to-satellite image geo-localization by matching a query image captured at the ground level against a large-scale database with geotagged satellite images.
Our new method estimates the fine-grained location of a query image, up to the pixel-size precision of the satellite image.
arXiv Detail & Related papers (2022-03-26T20:10:38Z) - Semantic Image Alignment for Vehicle Localization [111.59616433224662]
We present a novel approach to vehicle localization in dense semantic maps using semantic segmentation from a monocular camera.
In contrast to existing visual localization approaches, the system does not require additional keypoint features, handcrafted localization landmark extractors or expensive LiDAR sensors.
arXiv Detail & Related papers (2021-10-08T14:40:15Z) - Coarse-to-fine Semantic Localization with HD Map for Autonomous Driving in Structural Scenes [1.1024591739346292]
We propose a cost-effective vehicle localization system with an HD map for autonomous driving, using cameras as the primary sensors.
We formulate vision-based localization as a data association problem that maps visual semantics to landmarks in the HD map.
We evaluate our method on two datasets and demonstrate that the proposed approach yields promising localization results in different driving scenarios.
arXiv Detail & Related papers (2021-07-06T11:58:55Z) - Gaussian Process Gradient Maps for Loop-Closure Detection in Unstructured Planetary Environments [17.276441789710574]
The ability to recognize previously mapped locations is an essential feature for autonomous systems.
Unstructured planetary-like environments pose a major challenge to these systems due to the similarity of the terrain.
This paper presents a method to solve the loop closure problem using only spatial information.
arXiv Detail & Related papers (2020-09-01T04:41:40Z)