A Survey on Visual Map Localization Using LiDARs and Cameras
- URL: http://arxiv.org/abs/2208.03376v1
- Date: Fri, 5 Aug 2022 20:11:18 GMT
- Title: A Survey on Visual Map Localization Using LiDARs and Cameras
- Authors: Mahdi Elhousni and Xinming Huang
- Abstract summary: We define visual map localization as a two-stage process.
At the stage of place recognition, the initial position of the vehicle in the map is determined by comparing the visual sensor output with a set of geo-tagged map regions of interest.
At the stage of map metric localization, the vehicle is tracked while it moves across the map by continuously aligning the visual sensors' output with the current area of the map that is being traversed.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the autonomous driving industry is slowly maturing, visual map
localization is quickly becoming the standard approach to localize cars as
accurately as possible. Owing to the rich data returned by visual sensors such
as cameras or LiDARs, researchers are able to build different types of maps
with various levels of details, and use them to achieve high levels of vehicle
localization accuracy and stability in urban environments. Contrary to the
popular SLAM approaches, visual map localization relies on pre-built maps, and
is focused solely on improving the localization accuracy by avoiding error
accumulation or drift. We define visual map localization as a two-stage
process. At the stage of place recognition, the initial position of the vehicle
in the map is determined by comparing the visual sensor output with a set of
geo-tagged map regions of interest. Subsequently, at the stage of map metric
localization, the vehicle is tracked while it moves across the map by
continuously aligning the visual sensors' output with the current area of the
map that is being traversed. In this paper, we survey, discuss and compare the
latest methods for LiDAR based, camera based and cross-modal visual map
localization for both stages, in an effort to highlight the strength and
weakness of each approach.
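The two-stage process defined in the abstract can be illustrated with a minimal toy sketch. This is not the survey's algorithm: the nearest-neighbour descriptor matching for place recognition and the Kabsch rigid alignment (with known correspondences) for metric localization are stand-ins for the many LiDAR-, camera- and cross-modal methods the survey compares, and all names and data are hypothetical.

```python
import numpy as np

def place_recognition(query_desc, map_descs):
    """Stage 1: return the index of the geo-tagged map region whose
    global descriptor is the nearest neighbour of the query descriptor."""
    dists = np.linalg.norm(map_descs - query_desc, axis=1)
    return int(np.argmin(dists))

def metric_localization(sensor_pts, map_pts):
    """Stage 2: recover the rigid transform (R, t) aligning sensor points
    onto map points, assuming known correspondences (Kabsch algorithm)."""
    sc, mc = sensor_pts.mean(axis=0), map_pts.mean(axis=0)
    H = (sensor_pts - sc).T @ (map_pts - mc)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mc - R @ sc
    return R, t

# Toy data: five map regions with random 8-D descriptors; the query is a
# noisy copy of region 3's descriptor. The "scan" is a rotated/translated
# copy of 20 map points.
rng = np.random.default_rng(0)
map_descs = rng.normal(size=(5, 8))
region = place_recognition(map_descs[3] + 0.01 * rng.normal(size=8), map_descs)

theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([2.0, -1.0])
sensor_pts = rng.normal(size=(20, 2))
map_pts = sensor_pts @ R_true.T + t_true
R_est, t_est = metric_localization(sensor_pts, map_pts)
```

In a real system the descriptors would come from a learned or handcrafted global-description network and the alignment from scan-to-map registration (e.g. ICP variants), but the two-stage structure is the same: a coarse region match followed by continuous metric refinement.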
Related papers
- Neural Semantic Map-Learning for Autonomous Vehicles
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z) - Online Map Vectorization for Autonomous Driving: A Rasterization Perspective
We introduce a new rasterization-based evaluation metric, which has superior sensitivity and is better suited to real-world autonomous driving scenarios.
We also propose MapVR (Map Vectorization via Rasterization), a novel framework that applies differentiable rasterization to vectorized outputs and then performs geometry-aware supervision on rasterized HD maps.
arXiv Detail & Related papers (2023-06-18T08:51:14Z) - Energy-Based Models for Cross-Modal Localization using Convolutional Transformers
We present a novel framework for localizing a ground vehicle mounted with a range sensor against satellite imagery in the absence of GPS.
We propose a method using convolutional transformers that performs accurate metric-level localization in a cross-modal manner.
We train our model end-to-end and demonstrate our approach achieving higher accuracy than the state-of-the-art on KITTI, Pandaset, and a custom dataset.
arXiv Detail & Related papers (2023-06-06T21:27:08Z) - Satellite Image Based Cross-view Localization for Autonomous Vehicle
This paper shows that by using an off-the-shelf high-definition satellite image as a ready-to-use map, we are able to achieve cross-view vehicle localization with satisfactory accuracy.
Our method is validated on KITTI and Ford Multi-AV Seasonal datasets as ground view and Google Maps as the satellite view.
arXiv Detail & Related papers (2022-07-27T13:16:39Z) - Semantic Image Alignment for Vehicle Localization
We present a novel approach to vehicle localization in dense semantic maps using semantic segmentation from a monocular camera.
In contrast to existing visual localization approaches, the system does not require additional keypoint features, handcrafted localization landmark extractors or expensive LiDAR sensors.
arXiv Detail & Related papers (2021-10-08T14:40:15Z) - Coarse-to-fine Semantic Localization with HD Map for Autonomous Driving in Structural Scenes
We propose a cost-effective vehicle localization system with an HD map for autonomous driving, using cameras as the primary sensors.
We formulate vision-based localization as a data association problem that maps visual semantics to landmarks in the HD map.
We evaluate our method on two datasets and demonstrate that the proposed approach yields promising localization results in different driving scenarios.
arXiv Detail & Related papers (2021-07-06T11:58:55Z) - What is the Best Grid-Map for Self-Driving Cars Localization? An Evaluation under Diverse Types of Illumination, Traffic, and Environment
Accurate localization of self-driving cars is needed for several tasks such as keeping maps updated, tracking objects, and planning.
Since maintaining and using several maps is computationally expensive, it is important to analyze which type of map is more adequate for each application.
In this work, we provide data for such analysis by comparing the accuracy of a particle filter localization when using occupancy, reflectivity, color, or semantic grid maps.
arXiv Detail & Related papers (2020-09-19T22:02:44Z) - Visual Localization for Autonomous Driving: Mapping the Accurate Location in the City Maze
We propose a novel feature voting technique for visual localization.
In our work, we integrate the proposed feature voting method into three state-of-the-art visual localization networks.
Our approach can predict location robustly even in challenging inner-city settings.
arXiv Detail & Related papers (2020-08-13T03:59:34Z) - Rethinking Localization Map: Towards Accurate Object Perception with Self-Enhancement Maps
This work introduces a novel self-enhancement method to harvest accurate object localization maps and object boundaries with only category labels as supervision.
In particular, the proposed Self-Enhancement Maps achieve the state-of-the-art localization accuracy of 54.88% on ILSVRC.
arXiv Detail & Related papers (2020-06-09T12:35:55Z) - Persistent Map Saving for Visual Localization for Autonomous Vehicles: An ORB-SLAM Extension
We use a stereo camera sensor to perceive the environment and create the map.
We evaluate the localization accuracy for scenes of the KITTI dataset against the previously built SLAM map.
We show that the relative translation error of the localization stays under 1% for a vehicle travelling at an average longitudinal speed of 36 m/s.
arXiv Detail & Related papers (2020-05-15T09:20:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.