RoadMap: A Light-Weight Semantic Map for Visual Localization towards
Autonomous Driving
- URL: http://arxiv.org/abs/2106.02527v1
- Date: Fri, 4 Jun 2021 14:55:10 GMT
- Title: RoadMap: A Light-Weight Semantic Map for Visual Localization towards
Autonomous Driving
- Authors: Tong Qin, Yuxin Zheng, Tongqing Chen, Yilun Chen, and Qing Su
- Abstract summary: We propose a light-weight localization solution, which relies on low-cost cameras and compact visual semantic maps.
The map is easily produced and updated by sensor-rich vehicles in a crowd-sourced way.
We validate the performance of the proposed map in real-world experiments and compare it against other algorithms.
- Score: 10.218935873715413
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate localization is of crucial importance for autonomous driving tasks.
Nowadays, many sensor-rich vehicles (e.g., Robo-taxis) drive autonomously on the street,
relying on highly accurate sensors (e.g., Lidar and RTK GPS) and high-resolution maps.
However, low-cost production cars cannot afford such expensive sensors and maps. How can
costs be reduced? How can sensor-rich vehicles benefit low-cost cars? In this paper, we
propose a light-weight localization solution that relies on low-cost cameras and
compact visual semantic maps. The map is easily produced and updated by
sensor-rich vehicles in a crowd-sourced way. Specifically, the map consists of
several semantic elements on the road surface, such as lane lines, crosswalks, ground
signs, and stop lines. We introduce the whole framework of on-vehicle
mapping, on-cloud maintenance, and user-end localization. The map data is
collected and preprocessed on vehicles. Then, the crowd-sourced data is
uploaded to a cloud server. The data from multiple vehicles are merged on
the cloud so that the semantic map is kept up to date. Finally, the semantic
map is compressed and distributed to production cars, which use this map for
localization. We validate the performance of the proposed map in real-world
experiments and compare it against other algorithms. The average size of the
semantic map is $36$ kb/km. We highlight that this framework is a reliable and
practical localization solution for autonomous driving.
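For intuition on why such a map stays compact (on the order of tens of kilobytes per kilometre), here is a minimal, hypothetical sketch in Python of a semantic map built from typed road-surface polylines. The element types mirror those named in the abstract, while `SemanticElement`, `serialize`, and the byte layout are illustrative assumptions rather than the authors' actual data format.

```python
# Minimal sketch (not the paper's actual format): a compact semantic map made of
# typed road-surface elements stored as polylines of sampled 3D points.
import struct
from dataclasses import dataclass
from enum import IntEnum


class ElementType(IntEnum):
    LANE_LINE = 0
    CROSSWALK = 1
    GROUND_SIGN = 2
    STOP_LINE = 3


@dataclass
class SemanticElement:
    kind: ElementType
    points: list[tuple[float, float, float]]  # sampled 3D points on the road surface

    def serialize(self) -> bytes:
        """Pack one element as: type (1 byte), point count (2 bytes), then
        3 x float32 per point. A real system would add quantization/compression."""
        header = struct.pack("<BH", int(self.kind), len(self.points))
        body = b"".join(struct.pack("<fff", *p) for p in self.points)
        return header + body


def map_size_bytes(elements: list[SemanticElement]) -> int:
    return sum(len(e.serialize()) for e in elements)


if __name__ == "__main__":
    # e.g. one lane line sampled every metre over 100 m, plus a stop line
    lane = SemanticElement(ElementType.LANE_LINE,
                           [(float(x), 1.75, 0.0) for x in range(100)])
    stop = SemanticElement(ElementType.STOP_LINE,
                           [(100.0, 0.0, 0.0), (100.0, 3.5, 0.0)])
    print(map_size_bytes([lane, stop]), "bytes")  # a few hundred bytes
```

Even without quantization, a kilometre of such elements amounts to a few tens of kilobytes, which is consistent with the reported 36 kb/km average.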
Related papers
- TopoSD: Topology-Enhanced Lane Segment Perception with SDMap Prior [70.84644266024571]
We propose to train a perception model to "see" standard definition maps (SDMaps).
We encode SDMap elements into neural spatial map representations and instance tokens, and then incorporate such complementary features as prior information.
Based on the lane segment representation framework, the model simultaneously predicts lanes, centrelines and their topology.
arXiv Detail & Related papers (2024-11-22T06:13:42Z) - Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z) - Leveraging GNSS and Onboard Visual Data from Consumer Vehicles for Robust Road Network Estimation [18.236615392921273]
This paper addresses the challenge of road graph construction for autonomous vehicles.
We propose using global navigation satellite system (GNSS) traces and basic image data acquired from standard sensors in consumer vehicles.
We exploit the spatial information in the data by framing the problem as a road centerline semantic segmentation task using a convolutional neural network.
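As a rough illustration of that problem framing (a sketch under assumed inputs, not the authors' architecture): a small fully convolutional network takes a bird's-eye-view raster with one channel of accumulated GNSS trace density and one channel of image-derived road evidence, and predicts a per-cell centerline probability.

```python
# Hypothetical sketch: centerline segmentation from a 2-channel BEV raster
# (channel 0: accumulated GNSS trace density, channel 1: image-derived road evidence).
import torch
import torch.nn as nn


class CenterlineNet(nn.Module):
    def __init__(self, in_ch: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1),  # per-cell centerline logit
        )

    def forward(self, bev: torch.Tensor) -> torch.Tensor:
        return self.net(bev)


if __name__ == "__main__":
    model = CenterlineNet()
    bev = torch.rand(1, 2, 256, 256)                      # e.g. 0.5 m per cell
    target = (torch.rand(1, 1, 256, 256) > 0.97).float()  # sparse centerline mask
    loss = nn.functional.binary_cross_entropy_with_logits(model(bev), target)
    loss.backward()
    print(float(loss))
```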
arXiv Detail & Related papers (2024-08-03T02:57:37Z) - Augmenting Lane Perception and Topology Understanding with Standard
Definition Navigation Maps [51.24861159115138]
Standard Definition (SD) maps are more affordable and have worldwide coverage, offering a scalable alternative.
We propose a novel framework to integrate SD maps into online map prediction and introduce a Transformer-based encoder, SD Map Representations from transFormers.
This enhancement consistently and significantly boosts (by up to 60%) lane detection and topology prediction on current state-of-the-art online map prediction methods.
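A minimal sketch of the general idea of encoding SD-map elements as tokens for a Transformer encoder (illustrative only, not the paper's implementation; the polyline resampling, embedding sizes, and layer counts are assumptions):

```python
# Hypothetical SD-map encoder: each polyline is embedded from its resampled point
# coordinates plus a learned road-type embedding, then contextualized by a Transformer.
import torch
import torch.nn as nn


class SDMapEncoder(nn.Module):
    def __init__(self, pts_per_line: int = 11, n_types: int = 8, d_model: int = 128):
        super().__init__()
        self.coord_proj = nn.Linear(pts_per_line * 2, d_model)
        self.type_emb = nn.Embedding(n_types, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, polylines: torch.Tensor, types: torch.Tensor) -> torch.Tensor:
        """polylines: (B, L, P, 2) resampled 2D points; types: (B, L) road-type ids.
        Returns (B, L, d_model) map tokens usable as prior features by a perception head."""
        tokens = self.coord_proj(polylines.flatten(2)) + self.type_emb(types)
        return self.encoder(tokens)


if __name__ == "__main__":
    enc = SDMapEncoder()
    out = enc(torch.rand(1, 6, 11, 2), torch.randint(0, 8, (1, 6)))
    print(out.shape)  # torch.Size([1, 6, 128])
```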
arXiv Detail & Related papers (2023-11-07T15:42:22Z) - Prior Based Online Lane Graph Extraction from Single Onboard Camera
Image [133.68032636906133]
We tackle online estimation of the lane graph from a single onboard camera image.
The prior is extracted from the dataset through a transformer-based Wasserstein Autoencoder.
The autoencoder is then used to enhance the initial lane graph estimates.
arXiv Detail & Related papers (2023-07-25T08:58:26Z) - Online Map Vectorization for Autonomous Driving: A Rasterization
Perspective [58.71769343511168]
We introduce a new rasterization-based evaluation metric, which has superior sensitivity and is better suited to real-world autonomous driving scenarios.
We also propose MapVR (Map Vectorization via Rasterization), a novel framework that applies differentiable rasterization to vectorized outputs and then performs precise, geometry-aware supervision on rasterized HD maps.
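The core idea of geometry-aware supervision via differentiable rasterization can be sketched as follows (an illustrative toy, not the MapVR implementation): points sampled along a predicted polyline are splatted onto a grid with Gaussian kernels, so a pixel-wise loss against a rasterized ground truth back-propagates to the vertex coordinates.

```python
# Toy differentiable rasterization: splat polyline samples onto an H x W grid
# with Gaussian kernels so gradients flow back to the vertex positions.
import torch


def rasterize(points: torch.Tensor, h: int, w: int, sigma: float = 1.0) -> torch.Tensor:
    """points: (N, 2) tensor of (x, y) in pixel coordinates; gradients pass through."""
    ys = torch.arange(h, dtype=points.dtype).view(h, 1, 1)
    xs = torch.arange(w, dtype=points.dtype).view(1, w, 1)
    dx = xs - points[:, 0].view(1, 1, -1)
    dy = ys - points[:, 1].view(1, 1, -1)
    weights = torch.exp(-(dx ** 2 + dy ** 2) / (2 * sigma ** 2))  # (h, w, N)
    return weights.sum(dim=-1).clamp(max=1.0)                     # soft occupancy


if __name__ == "__main__":
    pred = torch.tensor([[10.0, 10.0], [20.0, 12.0], [30.0, 14.0]], requires_grad=True)
    gt = rasterize(torch.tensor([[10.0, 12.0], [20.0, 14.0], [30.0, 16.0]]), 64, 64)
    loss = torch.nn.functional.mse_loss(rasterize(pred, 64, 64), gt)
    loss.backward()  # gradients reach the predicted polyline vertices
    print(pred.grad)
```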
arXiv Detail & Related papers (2023-06-18T08:51:14Z) - HDMapNet: An Online HD Map Construction and Evaluation Framework [23.19001503634617]
HD map construction is a crucial problem for autonomous driving.
Traditional HD maps are coupled with centimeter-level accurate localization which is unreliable in many scenarios.
Online map learning is a more scalable way to provide semantic and geometry priors to self-driving vehicles.
arXiv Detail & Related papers (2021-07-13T18:06:46Z) - Radar-based Automotive Localization using Landmarks in a Multimodal
Sensor Graph-based Approach [0.0]
In this paper, we address the problem of localization with automotive-grade radars.
The system uses landmarks and odometry information as an abstraction layer.
A single, semantic landmark map is used and maintained for all sensors.
arXiv Detail & Related papers (2021-04-29T07:35:20Z) - MP3: A Unified Model to Map, Perceive, Predict and Plan [84.07678019017644]
MP3 is an end-to-end approach to mapless driving where the input is raw sensor data and a high-level command.
We show that our approach is significantly safer, more comfortable, and can follow commands better than the baselines in challenging long-term closed-loop simulations.
arXiv Detail & Related papers (2021-01-18T00:09:30Z) - What is the Best Grid-Map for Self-Driving Cars Localization? An
Evaluation under Diverse Types of Illumination, Traffic, and Environment [10.64191129882262]
Localization of self-driving cars is needed for several tasks such as keeping maps updated, tracking objects, and planning.
Since maintaining and using several maps is computationally expensive, it is important to analyze which type of map is more adequate for each application.
In this work, we provide data for such analysis by comparing the accuracy of a particle filter localization when using occupancy, reflectivity, color, or semantic grid maps.
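For context on what such a comparison measures, here is a minimal particle-filter measurement update against a generic 2D grid map (an illustrative sketch, not the paper's evaluation code): each particle's weight reflects how well the map values under its hypothesized observations match the measured values, regardless of whether the cells store occupancy, reflectivity, colour, or semantics.

```python
# Minimal sketch of a particle-filter measurement update against a 2D grid map.
import numpy as np

rng = np.random.default_rng(0)


def measurement_update(particles, weights, grid, cell, obs_pts, obs_values, sigma=0.3):
    """particles: (N, 3) array of (x, y, yaw); grid: 2D map values in [0, 1];
    cell: metres per cell; obs_pts: (M, 2) observation end-points in the vehicle frame;
    obs_values: (M,) measured values expected to match the map at those points."""
    new_w = np.zeros_like(weights)
    for i, (x, y, yaw) in enumerate(particles):
        c, s = np.cos(yaw), np.sin(yaw)
        # transform observation end-points into map indices for this particle
        gx = ((x + c * obs_pts[:, 0] - s * obs_pts[:, 1]) / cell).astype(int)
        gy = ((y + s * obs_pts[:, 0] + c * obs_pts[:, 1]) / cell).astype(int)
        ok = (gx >= 0) & (gx < grid.shape[1]) & (gy >= 0) & (gy < grid.shape[0])
        err = grid[gy[ok], gx[ok]] - obs_values[ok]
        new_w[i] = weights[i] * np.exp(-0.5 * np.sum(err ** 2) / sigma ** 2)
    return new_w / max(new_w.sum(), 1e-12)


# toy usage: 100 particles near (5 m, 5 m), a random 50 x 50 map, 5 observations
particles = rng.normal([5.0, 5.0, 0.0], [0.5, 0.5, 0.1], size=(100, 3))
weights = np.full(100, 1.0 / 100)
grid = rng.random((50, 50))
obs_pts = rng.uniform(-2, 2, size=(5, 2))
obs_values = rng.random(5)
weights = measurement_update(particles, weights, grid, 0.2, obs_pts, obs_values)
print(weights.max())
```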
arXiv Detail & Related papers (2020-09-19T22:02:44Z) - Persistent Map Saving for Visual Localization for Autonomous Vehicles:
An ORB-SLAM Extension [0.0]
We make use of a stereo camera sensor in order to perceive the environment and create the map.
We evaluate the localization accuracy for scenes of the KITTI dataset against the built up SLAM map.
We show that the relative translation error of the localization stays under 1% for a vehicle travelling at an average longitudinal speed of 36 m/s.
arXiv Detail & Related papers (2020-05-15T09:20:31Z)