Robust Monocular Localization in Sparse HD Maps Leveraging Multi-Task Uncertainty Estimation
- URL: http://arxiv.org/abs/2110.10563v1
- Date: Wed, 20 Oct 2021 13:46:15 GMT
- Title: Robust Monocular Localization in Sparse HD Maps Leveraging Multi-Task Uncertainty Estimation
- Authors: K\"ursat Petek, Kshitij Sirohi, Daniel B\"uscher, Wolfram Burgard
- Abstract summary: We present a novel monocular localization approach based on a sliding-window pose graph.
We propose an efficient multi-task uncertainty-aware perception module.
Our approach enables robust and accurate 6D localization in challenging urban scenarios.
- Score: 28.35592701148056
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robust localization in dense urban scenarios using a low-cost sensor setup
and sparse HD maps is highly relevant for the current advances in autonomous
driving, but remains a challenging topic in research. We present a novel
monocular localization approach based on a sliding-window pose graph that
leverages predicted uncertainties for increased precision and robustness
against challenging scenarios and per-frame failures. To this end, we propose
an efficient multi-task uncertainty-aware perception module, which covers
semantic segmentation, as well as bounding box detection, to enable the
localization of vehicles in sparse maps, containing only lane borders and
traffic lights. Further, we design differentiable cost maps that are directly
generated from the estimated uncertainties. This opens up the possibility to
minimize the reprojection loss of amorphous map elements in an association-free
and uncertainty-aware manner. Extensive evaluation on the Lyft 5 dataset shows
that, despite the sparsity of the map, our approach enables robust and accurate
6D localization in challenging urban scenarios.
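The abstract above describes the key mechanism only in prose. As a minimal, hypothetical sketch of that mechanism, the Python/PyTorch snippet below builds a differentiable cost map from a predicted lane-border probability and its estimated uncertainty, then scores a candidate camera pose by sampling the cost map at the reprojections of sparse 3D map points, with no explicit feature-to-map association. The function names, the inverse-variance weighting, and the cost definition are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): an uncertainty-weighted,
# association-free reprojection cost built from a predicted semantic
# probability map, differentiable w.r.t. the camera pose.
import torch
import torch.nn.functional as F

def cost_map_from_probs(p_lane: torch.Tensor, sigma: torch.Tensor) -> torch.Tensor:
    """Turn per-pixel lane-border probability and predicted uncertainty
    into a cost map: low cost where the class is likely and certain."""
    # Down-weight confident lane pixels, inflate cost where uncertainty is high.
    return (1.0 - p_lane) / (sigma ** 2 + 1e-6)

def reprojection_cost(points_map, T_cam_map, K, cost_map):
    """Project sparse 3D map points (e.g., lane borders) into the image and
    sample the cost map bilinearly -- no explicit data association needed."""
    H, W = cost_map.shape
    pts_h = torch.cat([points_map, torch.ones(len(points_map), 1)], dim=1)   # Nx4
    pts_cam = (T_cam_map @ pts_h.T).T[:, :3]                                 # Nx3 in camera frame
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-3)                              # pixel coordinates
    # Normalize to [-1, 1] for grid_sample.
    grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,
                        uv[:, 1] / (H - 1) * 2 - 1], dim=-1).view(1, 1, -1, 2)
    sampled = F.grid_sample(cost_map.view(1, 1, H, W), grid, align_corners=True)
    return sampled.mean()
```

In the paper's setting, a residual of this form would be minimized jointly over a sliding window of poses in the pose graph; the sketch only shows the per-frame cost term.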
Related papers
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- Real-Time Stochastic Terrain Mapping and Processing for Autonomous Safe Landing [0.0]
This paper develops a novel real-time planetary terrain mapping algorithm.
It accounts for topographic uncertainty between the sampled points, or the uncertainty due to sparse 3D measurements.
arXiv Detail & Related papers (2024-09-14T05:12:14Z)
- MapLocNet: Coarse-to-Fine Feature Registration for Visual Re-Localization in Navigation Maps [8.373285397029884]
Traditional localization approaches rely on high-definition (HD) maps, which consist of precisely annotated landmarks.
We propose a novel transformer-based neural re-localization method, inspired by image registration.
Our method significantly outperforms the current state-of-the-art OrienterNet on both the nuScenes and Argoverse datasets.
arXiv Detail & Related papers (2024-07-11T14:51:18Z)
- Monocular Localization with Semantics Map for Autonomous Vehicles [8.242967098897408]
We propose a novel visual semantic localization algorithm that employs stable semantic features instead of low-level texture features.
First, semantic maps are constructed offline by detecting semantic objects, such as ground markers, lane lines, and poles, using cameras or LiDAR sensors.
Online visual localization is performed through data association of semantic features and map objects (a minimal association sketch follows below).
arXiv Detail & Related papers (2024-06-06T08:12:38Z)
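As a rough illustration of the data-association step mentioned in the entry above, the sketch below matches detected semantic features to map objects of the same class by nearest neighbour within a gating distance. The data structures, the threshold, and the function name are assumptions for illustration only.

```python
# Illustrative sketch only (names and thresholds are assumptions): nearest-
# neighbour association of detected semantic features to map objects of the
# same class, as used conceptually in the online localization step above.
import numpy as np

def associate(detections, map_objects, max_dist=2.0):
    """detections / map_objects: lists of (class_label, xy position).
    Returns index pairs (det_idx, map_idx) for matched elements."""
    matches = []
    for i, (cls_d, p_d) in enumerate(detections):
        best_j, best_dist = None, max_dist
        for j, (cls_m, p_m) in enumerate(map_objects):
            if cls_d != cls_m:
                continue  # only associate within the same semantic class
            dist = np.linalg.norm(np.asarray(p_d) - np.asarray(p_m))
            if dist < best_dist:
                best_j, best_dist = j, dist
        if best_j is not None:
            matches.append((i, best_j))
    return matches
```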
- UFO: Uncertainty-aware LiDAR-image Fusion for Off-road Semantic Terrain Map Estimation [2.048226951354646]
This paper presents a learning-based fusion method for generating dense terrain classification maps in BEV.
Our approach enhances the accuracy of semantic maps generated from an RGB image and a single-sweep LiDAR scan.
arXiv Detail & Related papers (2024-03-05T04:20:03Z)
- EVORA: Deep Evidential Traversability Learning for Risk-Aware Off-Road Autonomy [34.19779754333234]
This work proposes a unified framework to learn uncertainty-aware traction model and plan risk-aware trajectories.
We parameterize Dirichlet distributions with the network outputs and propose a novel uncertainty-aware squared Earth Mover's distance loss (sketched below).
Our approach is extensively validated in simulation and on wheeled and quadruped robots.
arXiv Detail & Related papers (2023-11-10T18:49:53Z)
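The two ingredients named in the EVORA entry above can be illustrated with a short, hypothetical sketch: mapping network outputs to Dirichlet concentration parameters, and computing a squared Earth Mover's distance between binned distributions via their CDFs. The exact uncertainty-aware weighting used in the paper is not reproduced here.

```python
# A minimal, hypothetical sketch of the two ingredients named above:
# Dirichlet parameters from network outputs and a squared Earth Mover's
# distance between distributions over ordered bins.
import torch
import torch.nn.functional as F

def dirichlet_params(logits: torch.Tensor) -> torch.Tensor:
    """Map raw network outputs to Dirichlet concentrations alpha > 0."""
    return F.softplus(logits) + 1.0           # (B, num_bins)

def squared_emd(pred_probs: torch.Tensor, target_probs: torch.Tensor) -> torch.Tensor:
    """Squared EMD between two distributions over ordered bins:
    squared difference of their CDFs, summed over bins."""
    cdf_diff = torch.cumsum(pred_probs, dim=-1) - torch.cumsum(target_probs, dim=-1)
    return (cdf_diff ** 2).sum(dim=-1).mean()

# Usage: the expected distribution under the Dirichlet is alpha / alpha.sum().
logits = torch.randn(4, 10)                   # batch of 4, 10 traction bins
alpha = dirichlet_params(logits)
pred = alpha / alpha.sum(dim=-1, keepdim=True)
target = torch.full((4, 10), 0.1)             # dummy empirical distribution
loss = squared_emd(pred, target)
```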
- Online Map Vectorization for Autonomous Driving: A Rasterization Perspective [58.71769343511168]
We introduce a new rasterization-based evaluation metric, which has superior sensitivity and is better suited to real-world autonomous driving scenarios.
We also propose MapVR (Map Vectorization via Rasterization), a novel framework that applies differentiable rasterization to vectorized outputs and then performs precise, geometry-aware supervision on rasterized HD maps (a rough rasterization sketch follows below).
arXiv Detail & Related papers (2023-06-18T08:51:14Z)
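As a rough sketch of what differentiable rasterization of a vectorized map element can look like, the snippet below assigns each BEV cell a soft occupancy that decays with its distance to the predicted polyline vertices, so gradients flow back to the vertex coordinates. MapVR's actual renderer and geometry-aware losses are more involved; all names and the kernel choice here are assumptions.

```python
# Illustrative soft rasterization of a vectorized map element: each BEV cell
# receives a soft occupancy based on its distance to the closest polyline
# vertex, keeping the raster differentiable w.r.t. the vertex coordinates.
import torch

def soft_rasterize(polyline: torch.Tensor, H: int, W: int, tau: float = 1.0) -> torch.Tensor:
    """polyline: (N, 2) vertex coordinates in pixel units. Returns an (H, W) soft mask."""
    ys = torch.arange(H, dtype=torch.float32)
    xs = torch.arange(W, dtype=torch.float32)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    cells = torch.stack([grid_x, grid_y], dim=-1).view(-1, 1, 2)      # (H*W, 1, 2)
    d2 = ((cells - polyline.view(1, -1, 2)) ** 2).sum(-1)             # (H*W, N)
    # Soft occupancy from the closest vertex; tau controls edge sharpness.
    mask = torch.exp(-d2.min(dim=-1).values / (tau ** 2))
    return mask.view(H, W)

# Geometry-aware supervision could then compare this raster against the
# rasterized ground-truth element, e.g. with a per-pixel loss.
pred_polyline = torch.tensor([[5.0, 5.0], [20.0, 10.0], [40.0, 12.0]], requires_grad=True)
raster = soft_rasterize(pred_polyline, H=50, W=50)
```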
- Beyond Cross-view Image Retrieval: Highly Accurate Vehicle Localization Using Satellite Image [91.29546868637911]
This paper addresses the problem of vehicle-mounted camera localization by matching a ground-level image with an overhead-view satellite map.
The key idea is to formulate the task as pose estimation and solve it by neural-net based optimization.
Experiments on standard autonomous vehicle localization datasets have confirmed the superiority of the proposed method.
arXiv Detail & Related papers (2022-04-10T19:16:58Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function (see the sketch below).
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
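A minimal sketch of a disagreement-based uncertainty measure in the spirit of the entry above: given several segmentation predictions, the per-pixel uncertainty is the fraction of prediction pairs whose hard labels differ. The paper's specific dissimilarity function is not reproduced; this is an illustrative stand-in.

```python
# Hedged sketch (not the paper's exact measure): per-pixel uncertainty from
# the disagreement of several predictions, using a simple dissimilarity
# function -- the fraction of prediction pairs whose argmax labels differ.
import numpy as np

def disagreement_uncertainty(prob_maps: np.ndarray) -> np.ndarray:
    """prob_maps: (K, C, H, W) class probabilities from K predictors.
    Returns an (H, W) map in [0, 1]; 0 means all predictors agree."""
    labels = prob_maps.argmax(axis=1)                 # (K, H, W) hard labels
    K = labels.shape[0]
    disagree = np.zeros(labels.shape[1:], dtype=np.float64)
    n_pairs = K * (K - 1) / 2
    for i in range(K):
        for j in range(i + 1, K):
            disagree += (labels[i] != labels[j])      # pairwise dissimilarity
    return disagree / n_pairs

# Example: 3 predictors, 4 classes, 8x8 image
unc = disagreement_uncertainty(np.random.rand(3, 4, 8, 8))
```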
- Deep Multi-Task Learning for Joint Localization, Perception, and Prediction [68.50217234419922]
This paper investigates the issues that arise in state-of-the-art autonomy stacks under localization error.
We design a system that jointly performs perception, prediction, and localization.
Our architecture is able to reuse computation across these tasks and can thus correct localization errors efficiently.
arXiv Detail & Related papers (2021-01-17T17:20:31Z)
- Adaptive confidence thresholding for monocular depth estimation [83.06265443599521]
We propose a new approach to leverage pseudo ground truth depth maps of stereo images generated from self-supervised stereo matching methods.
The confidence map of the pseudo ground truth depth map is estimated to mitigate performance degradation caused by inaccurate pseudo depth maps (see the sketch below).
Experimental results demonstrate superior performance to state-of-the-art monocular depth estimation methods.
arXiv Detail & Related papers (2020-09-27T13:26:16Z)
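A minimal sketch of the confidence-gated pseudo-depth supervision described in the last entry: the pseudo ground-truth depth from stereo only contributes to the loss where its estimated confidence exceeds a threshold. The threshold value and the L1 loss form are assumptions, not the paper's exact setup.

```python
# Illustrative only: supervising a monocular depth network with stereo
# pseudo ground truth, masked by a thresholded confidence map so that
# unreliable pseudo-depth pixels do not contribute to the loss.
import torch

def masked_pseudo_depth_loss(pred_depth, pseudo_depth, confidence, thr=0.7):
    """pred_depth, pseudo_depth, confidence: (B, 1, H, W) tensors."""
    mask = (confidence > thr).float()                 # keep only confident pixels
    diff = torch.abs(pred_depth - pseudo_depth) * mask
    return diff.sum() / mask.sum().clamp(min=1.0)     # mean L1 over valid pixels
```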
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.