Persistent Map Saving for Visual Localization for Autonomous Vehicles:
An ORB-SLAM Extension
- URL: http://arxiv.org/abs/2005.07429v1
- Date: Fri, 15 May 2020 09:20:31 GMT
- Title: Persistent Map Saving for Visual Localization for Autonomous Vehicles:
An ORB-SLAM Extension
- Authors: Felix Nobis, Odysseas Papanikolaou, Johannes Betz and Markus Lienkamp
- Abstract summary: We make use of a stereo camera sensor in order to perceive the environment and create the map.
We evaluate the localization accuracy for scenes of the KITTI dataset against the built-up SLAM map.
We show that the relative translation error of the localization stays under 1% for a vehicle travelling at an average longitudinal speed of 36 m/s.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Electric vehicles and autonomous driving dominate current research efforts in
the automotive sector. The two topics go hand in hand in terms of enabling
safer and more environmentally friendly driving. One fundamental building block
of an autonomous vehicle is the ability to build a map of the environment and
localize itself on such a map. In this paper, we make use of a stereo camera
sensor in order to perceive the environment and create the map. With live
Simultaneous Localization and Mapping (SLAM), there is a risk of
mislocalization, since no ground truth map is used as a reference and errors
accumulate over time. Therefore, we first build up and save a map of visual
features of the environment at low driving speeds with our extension to the
ORB-SLAM2 package. In a second run, we reload the map and then localize on
the previously built-up map. Loading and localizing on a previously built map
can improve the continuous localization accuracy for autonomous vehicles in
comparison to full SLAM. This map-saving feature is missing from the original
ORB-SLAM2 implementation.
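To make the two-run workflow concrete, the following sketch shows how client code could drive such a map-saving extension. The ORB_SLAM2::System constructor, TrackStereo() and ActivateLocalizationMode() exist in the stock ORB-SLAM2 API; SaveMap() and LoadMap() are hypothetical stand-ins for the extension's actual interface (see the repository linked at the end of the abstract).

```cpp
// Sketch of the two-run workflow: build and save a map at low speed,
// then reload it and localize against it in a second run.
// SaveMap()/LoadMap() are hypothetical names for the extension's interface.
#include <utility>
#include <vector>
#include <opencv2/core/core.hpp>
#include "System.h"  // ORB-SLAM2

using StereoPair = std::pair<cv::Mat, cv::Mat>;

// Run 1: full stereo SLAM at low driving speed; serialize the feature map.
void mappingRun(const std::vector<StereoPair>& frames,
                const std::vector<double>& stamps) {
    ORB_SLAM2::System slam("ORBvoc.txt", "KITTI00-02.yaml",
                           ORB_SLAM2::System::STEREO, /*bUseViewer=*/false);
    for (std::size_t i = 0; i < frames.size(); ++i)
        slam.TrackStereo(frames[i].first, frames[i].second, stamps[i]);
    slam.SaveMap("map.bin");  // hypothetical: persist keyframes + map points
    slam.Shutdown();
}

// Run 2: reload the saved map and localize only; mapping stays disabled,
// so tracking errors cannot accumulate into the map itself.
void localizationRun(const std::vector<StereoPair>& frames,
                     const std::vector<double>& stamps) {
    ORB_SLAM2::System slam("ORBvoc.txt", "KITTI00-02.yaml",
                           ORB_SLAM2::System::STEREO, /*bUseViewer=*/false);
    slam.LoadMap("map.bin");          // hypothetical
    slam.ActivateLocalizationMode();  // stock ORB-SLAM2: tracking only
    for (std::size_t i = 0; i < frames.size(); ++i)
        slam.TrackStereo(frames[i].first, frames[i].second, stamps[i]);
    slam.Shutdown();
}
```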
We evaluate the localization accuracy for scenes of the KITTI dataset against
the built-up SLAM map. Furthermore, we test the localization on data recorded
with our own small scale electric model car. We show that the relative
translation error of the localization stays under 1% for a vehicle travelling
at an average longitudinal speed of 36 m/s in a feature-rich environment. The
localization mode contributes to better localization accuracy and a lower
computational load compared to full SLAM. The source code of our contribution
to ORB-SLAM2 will be made public at:
https://github.com/TUMFTM/orbslam-map-saving-extension.
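The 1% figure is a relative translation error, i.e. the translation drift of the estimated trajectory over segments of the ground-truth path. Below is a minimal sketch of how such a KITTI-style metric can be computed with Eigen; it simplifies the official protocol (which averages over subsequence lengths of 100 m to 800 m) to a single fixed segment length measured in frames, and approximates segment length by the ground-truth displacement.

```cpp
// Simplified relative translation error over fixed-length segments.
// Poses are 4x4 homogeneous camera-to-world matrices; gt and est are
// assumed time-aligned and of equal length.
#include <cstddef>
#include <vector>
#include <Eigen/Dense>

double relativeTranslationErrorPercent(const std::vector<Eigen::Matrix4d>& gt,
                                       const std::vector<Eigen::Matrix4d>& est,
                                       std::size_t segmentFrames) {
    double errSum = 0.0, lenSum = 0.0;
    for (std::size_t i = 0; i + segmentFrames < gt.size(); ++i) {
        const std::size_t j = i + segmentFrames;
        // Relative motion over the segment in each trajectory.
        const Eigen::Matrix4d relGt  = gt[i].inverse() * gt[j];
        const Eigen::Matrix4d relEst = est[i].inverse() * est[j];
        // Residual transform between estimated and true relative motion.
        const Eigen::Matrix4d residual = relGt.inverse() * relEst;
        errSum += residual.block<3, 1>(0, 3).norm();  // translation error [m]
        lenSum += relGt.block<3, 1>(0, 3).norm();     // segment length [m]
    }
    return lenSum > 0.0 ? 100.0 * errSum / lenSum : 0.0;  // error in percent
}
```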
Related papers
- Neural Semantic Map-Learning for Autonomous Vehicles
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- MapLocNet: Coarse-to-Fine Feature Registration for Visual Re-Localization in Navigation Maps
Traditional localization approaches rely on high-definition (HD) maps, which consist of precisely annotated landmarks.
We propose a novel transformer-based neural re-localization method, inspired by image registration.
Our method significantly outperforms the current state-of-the-art OrienterNet on both the nuScenes and Argoverse datasets.
arXiv Detail & Related papers (2024-07-11T14:51:18Z)
- Online Map Vectorization for Autonomous Driving: A Rasterization Perspective
We introduce a new rasterization-based evaluation metric, which has superior sensitivity and is better suited to real-world autonomous driving scenarios.
We also propose MapVR (Map Vectorization via Rasterization), a novel framework that applies differentiable rasterization to vectorized outputs and then performs precise, geometry-aware supervision on rasterized HD maps.
arXiv Detail & Related papers (2023-06-18T08:51:14Z)
- Energy-Based Models for Cross-Modal Localization using Convolutional Transformers
We present a novel framework for localizing a ground vehicle mounted with a range sensor against satellite imagery in the absence of GPS.
We propose a method using convolutional transformers that performs accurate metric-level localization in a cross-modal manner.
We train our model end-to-end and demonstrate our approach achieving higher accuracy than the state-of-the-art on KITTI, Pandaset, and a custom dataset.
arXiv Detail & Related papers (2023-06-06T21:27:08Z)
- A Survey on Visual Map Localization Using LiDARs and Cameras
We define visual map localization as a two-stage process.
At the stage of place recognition, the initial position of the vehicle in the map is determined by comparing the visual sensor output with a set of geo-tagged map regions of interest.
At the stage of map metric localization, the vehicle is tracked while it moves across the map by continuously aligning the visual sensors' output with the current area of the map that is being traversed.
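As a structural illustration of this two-stage decomposition, the sketch below makes the control flow explicit: place recognition runs once for initialization, metric localization runs per frame. All types and names are hypothetical, not an interface defined by the survey.

```cpp
// Structural sketch of two-stage visual map localization:
// stage 1 yields a coarse initial pose, stage 2 tracks frame by frame.
// Every type and method name here is a hypothetical illustration.
#include <optional>
#include <vector>

struct SensorFrame {};              // camera image or LiDAR scan
struct Pose {};                     // vehicle pose in the map frame
struct MapRegion { Pose geoTag; };  // geo-tagged map region of interest

class VisualMapLocalizer {
public:
    // Stage 1, place recognition: compare the sensor output against
    // geo-tagged regions to determine the initial position in the map.
    std::optional<Pose> recognizePlace(const SensorFrame& frame,
                                       const std::vector<MapRegion>& regions);

    // Stage 2, map metric localization: continuously align the sensor
    // output with the map area currently being traversed.
    Pose trackInMap(const SensorFrame& frame, const Pose& previousPose);
};
```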
arXiv Detail & Related papers (2022-08-05T20:11:18Z)
- Semantic Image Alignment for Vehicle Localization
We present a novel approach to vehicle localization in dense semantic maps using semantic segmentation from a monocular camera.
In contrast to existing visual localization approaches, the system does not require additional keypoint features, handcrafted localization landmark extractors or expensive LiDAR sensors.
arXiv Detail & Related papers (2021-10-08T14:40:15Z)
- RoadMap: A Light-Weight Semantic Map for Visual Localization towards Autonomous Driving
We propose a light-weight localization solution, which relies on low-cost cameras and compact visual semantic maps.
The map is easily produced and updated by sensor-rich vehicles in a crowd-sourced way.
We validate the performance of the proposed map in real-world experiments and compare it against other algorithms.
arXiv Detail & Related papers (2021-06-04T14:55:10Z)
- MP3: A Unified Model to Map, Perceive, Predict and Plan
MP3 is an end-to-end approach to mapless driving where the input is raw sensor data and a high-level command.
We show that our approach is significantly safer, more comfortable, and can follow commands better than the baselines in challenging long-term closed-loop simulations.
arXiv Detail & Related papers (2021-01-18T00:09:30Z)
- What is the Best Grid-Map for Self-Driving Cars Localization? An Evaluation under Diverse Types of Illumination, Traffic, and Environment
Localization of self-driving cars is needed for several tasks, such as keeping maps updated, tracking objects, and planning.
Since maintaining and using several maps is computationally expensive, it is important to analyze which type of map is more adequate for each application.
In this work, we provide data for such analysis by comparing the accuracy of a particle filter localization when using occupancy, reflectivity, color, or semantic grid maps.
arXiv Detail & Related papers (2020-09-19T22:02:44Z)
- Rethinking Localization Map: Towards Accurate Object Perception with Self-Enhancement Maps
This work introduces a novel self-enhancement method to harvest accurate object localization maps and object boundaries with only category labels as supervision.
In particular, the proposed Self-Enhancement Maps achieve the state-of-the-art localization accuracy of 54.88% on ILSVRC.
arXiv Detail & Related papers (2020-06-09T12:35:55Z)