What is the Best Grid-Map for Self-Driving Cars Localization? An
Evaluation under Diverse Types of Illumination, Traffic, and Environment
- URL: http://arxiv.org/abs/2009.09308v1
- Date: Sat, 19 Sep 2020 22:02:44 GMT
- Title: What is the Best Grid-Map for Self-Driving Cars Localization? An
Evaluation under Diverse Types of Illumination, Traffic, and Environment
- Authors: Filipe Mutz, Thiago Oliveira-Santos, Avelino Forechi, Karin S. Komati,
Claudine Badue, Felipe M. G. França, Alberto F. De Souza
- Abstract summary: The localization of self-driving cars is needed for several tasks, such as keeping maps updated, tracking objects, and planning.
Since maintaining and using several maps is computationally expensive, it is important to analyze which type of map is more adequate for each application.
In this work, we provide data for such analysis by comparing the accuracy of a particle filter localization when using occupancy, reflectivity, color, or semantic grid maps.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The localization of self-driving cars is needed for several tasks such as
keeping maps updated, tracking objects, and planning. Localization algorithms
often take advantage of maps for estimating the car pose. Since maintaining and
using several maps is computationally expensive, it is important to analyze
which type of map is more adequate for each application. In this work, we
provide data for such analysis by comparing the accuracy of a particle filter
localization when using occupancy, reflectivity, color, or semantic grid maps.
To the best of our knowledge, such evaluation is missing in the literature. For
building semantic and color grid maps, point clouds from a Light Detection and
Ranging (LiDAR) sensor are fused with images captured by a front-facing camera.
Semantic information is extracted from images with a deep neural network.
Experiments are performed in varied environments, under diverse conditions of
illumination and traffic. Results show that occupancy grid maps lead to more
accurate localization, followed by reflectivity grid maps. In most scenarios,
the localization with semantic grid maps maintained position tracking without
catastrophic losses, but with errors two to three times larger than those of
the former maps. Color grid maps led to inaccurate and unstable localization even
using a robust metric, the entropy correlation coefficient, for comparing
online data and the map.
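The entropy correlation coefficient used to compare online sensor data against the color grid map can be sketched as below. This is an illustrative Python implementation, not the authors' code; the histogram binning (`bins=32`) is an assumed parameter.

```python
import numpy as np

def entropy_correlation_coefficient(a, b, bins=32):
    """ECC = 2 * I(A;B) / (H(A) + H(B)), estimated from a joint histogram.

    Ranges from 0 (statistically independent) to 1 (one signal fully
    determines the other), making it robust to appearance changes that
    preserve the statistical dependence between map and observation.
    """
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()          # joint distribution estimate
    px = pxy.sum(axis=1)             # marginal of a
    py = pxy.sum(axis=0)             # marginal of b

    def entropy(p):
        p = p[p > 0]                 # 0 * log(0) is taken as 0
        return -np.sum(p * np.log2(p))

    hx, hy, hxy = entropy(px), entropy(py), entropy(pxy.ravel())
    mi = hx + hy - hxy               # mutual information
    return 2.0 * mi / (hx + hy) if (hx + hy) > 0 else 0.0
```

Because ECC is normalized by the marginal entropies, it can compare camera-derived color cells with the stored map even when absolute intensities differ, which is why a metric of this family is described as robust in the abstract.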
Related papers
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- Prior Based Online Lane Graph Extraction from Single Onboard Camera Image [133.68032636906133]
We tackle online estimation of the lane graph from a single onboard camera image.
The prior is extracted from the dataset through a transformer based Wasserstein Autoencoder.
The autoencoder is then used to enhance the initial lane graph estimates.
arXiv Detail & Related papers (2023-07-25T08:58:26Z)
- Online Map Vectorization for Autonomous Driving: A Rasterization Perspective [58.71769343511168]
We introduce a new rasterization-based evaluation metric, which has superior sensitivity and is better suited to real-world autonomous driving scenarios.
We also propose MapVR (Map Vectorization via Rasterization), a novel framework that applies differentiable rasterization to precise vectorized outputs and then performs geometry-aware supervision on HD maps.
arXiv Detail & Related papers (2023-06-18T08:51:14Z)
- Sparse Semantic Map-Based Monocular Localization in Traffic Scenes Using Learned 2D-3D Point-Line Correspondences [29.419138863851526]
Given a query image, the goal is to estimate the camera pose corresponding to the prior map.
Existing approaches rely heavily on dense point descriptors at the feature level to solve the registration problem.
We propose a sparse semantic map-based monocular localization method, which solves 2D-3D registration via a well-designed deep neural network.
arXiv Detail & Related papers (2022-10-10T10:29:07Z)
- A Survey on Visual Map Localization Using LiDARs and Cameras [0.0]
We define visual map localization as a two-stage process.
At the stage of place recognition, the initial position of the vehicle in the map is determined by comparing the visual sensor output with a set of geo-tagged map regions of interest.
At the stage of map metric localization, the vehicle is tracked while it moves across the map by continuously aligning the visual sensors' output with the current area of the map that is being traversed.
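The two-stage process described by this survey can be illustrated with a minimal sketch. The descriptor matching and grid alignment below are simplified stand-ins (cosine similarity and an exhaustive sum-of-squared-differences search), assumed for illustration rather than taken from any surveyed method:

```python
import numpy as np

def place_recognition(query, regions):
    """Stage 1: pick the geo-tagged map region whose global descriptor is
    most similar to the current sensor output (cosine similarity here)."""
    def cosine(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(regions, key=lambda rid: cosine(query, regions[rid]))

def metric_localization(obs, map_area, search=3):
    """Stage 2: track the vehicle by finding the (dx, dy) offset that best
    aligns the online observation grid with the map area being traversed
    (exhaustive search minimizing the sum of squared differences)."""
    h, w = obs.shape
    best_score, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            crop = map_area[search + dy:search + dy + h,
                            search + dx:search + dx + w]
            score = np.sum((crop - obs) ** 2)
            if score < best_score:
                best_score, best_shift = score, (dx, dy)
    return best_shift
```

In a real system, stage 1 would use learned global descriptors and stage 2 an iterative registration or filtering method, but the division of labor is the same: a coarse region lookup followed by continuous fine alignment.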
arXiv Detail & Related papers (2022-08-05T20:11:18Z)
- Semantic Image Alignment for Vehicle Localization [111.59616433224662]
We present a novel approach to vehicle localization in dense semantic maps using semantic segmentation from a monocular camera.
In contrast to existing visual localization approaches, the system does not require additional keypoint features, handcrafted localization landmark extractors or expensive LiDAR sensors.
arXiv Detail & Related papers (2021-10-08T14:40:15Z)
- CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z)
- RoadMap: A Light-Weight Semantic Map for Visual Localization towards Autonomous Driving [10.218935873715413]
We propose a light-weight localization solution, which relies on low-cost cameras and compact visual semantic maps.
The map is easily produced and updated by sensor-rich vehicles in a crowd-sourced way.
We validate the performance of the proposed map in real-world experiments and compare it against other algorithms.
arXiv Detail & Related papers (2021-06-04T14:55:10Z)
- Radar-based Automotive Localization using Landmarks in a Multimodal Sensor Graph-based Approach [0.0]
In this paper, we address the problem of localization with automotive-grade radars.
The system uses landmarks and odometry information as an abstraction layer.
A single, semantic landmark map is used and maintained for all sensors.
arXiv Detail & Related papers (2021-04-29T07:35:20Z)
- LiveMap: Real-Time Dynamic Map in Automotive Edge Computing [14.195521569220448]
LiveMap is a real-time dynamic map that detects, matches, and tracks objects on the road at sub-second latency, using crowdsourced data from connected vehicles.
We develop the control plane of LiveMap that allows adaptive offloading of vehicle computations.
We implement LiveMap on a small-scale testbed and develop a large-scale network simulator.
arXiv Detail & Related papers (2020-12-16T15:00:49Z)
- Radar-based Dynamic Occupancy Grid Mapping and Object Detection [55.74894405714851]
In recent years, the classical occupancy grid map approach has been extended to dynamic occupancy grid maps.
This paper presents the further development of a previous approach.
The data of multiple radar sensors are fused, and a grid-based object tracking and mapping method is applied.
arXiv Detail & Related papers (2020-08-09T09:26:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.