Visual Localization for Autonomous Driving: Mapping the Accurate
Location in the City Maze
- URL: http://arxiv.org/abs/2008.05678v3
- Date: Tue, 20 Oct 2020 01:19:44 GMT
- Authors: Dongfang Liu, Yiming Cui, Xiaolei Guo, Wei Ding, Baijian Yang, and
Yingjie Chen
- Abstract summary: We propose a novel feature voting technique for visual localization.
In our work, we integrate the proposed feature voting method into three state-of-the-art visual localization networks.
Our approach can predict location robustly even in challenging inner-city settings.
- Score: 16.824901952766446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate localization is a foundational capability that autonomous
vehicles require to accomplish other tasks such as navigation or path planning.
It is common practice for vehicles to use GPS to acquire location information.
However, GPS can pose severe challenges when vehicles operate in the inner
city, where various structures may shadow the GPS signal and lead to inaccurate
location results. To address the localization challenges of urban settings, we
propose a novel feature voting technique for visual localization. Unlike the
conventional front-view-based method, our approach employs views from three
directions (front, left, and right) and thus significantly improves the
robustness of location prediction. In our work, we integrate the proposed
feature voting method into three state-of-the-art visual localization networks
and modify their architectures so that they can be applied to vehicular
operation. Extensive field-test results indicate that our approach predicts
location robustly even in challenging inner-city settings. Our research sheds
light on using visual localization to help autonomous vehicles find accurate
location information in a city maze, within a desirable time constraint.
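The abstract does not spell out the voting mechanism, but the core idea of fusing per-view predictions so that no single occluded view dominates can be sketched as follows. This is a hypothetical illustration (the function name `vote_location` and the confidence-weighted, median-based down-weighting scheme are assumptions, not the paper's actual method):

```python
import numpy as np

def vote_location(view_preds, view_conf):
    """Confidence-weighted fusion of per-view location predictions.

    view_preds: (3, 2) array of (x, y) predictions from the front,
                left, and right views.
    view_conf:  (3,) array of per-view confidence scores.
    Returns the fused (x, y) location estimate.
    """
    pts = np.asarray(view_preds, dtype=float)
    conf = np.asarray(view_conf, dtype=float)
    # Down-weight views that disagree with the median prediction, so a
    # single occluded or ambiguous view cannot dominate the estimate.
    median = np.median(pts, axis=0)
    dist = np.linalg.norm(pts - median, axis=1)
    robust_w = conf / (1.0 + dist)
    return (robust_w[:, None] * pts).sum(axis=0) / robust_w.sum()
```

In this sketch, a view whose prediction lies far from the per-axis median (e.g. the front view blocked by a truck) receives a small weight, which mirrors the paper's motivation for using three viewing directions instead of one.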
Related papers
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization using Geometrical Information [68.10033984296247]
This paper explores the domain of active localization, emphasizing the importance of viewpoint selection to enhance localization accuracy.
Our contributions involve using a data-driven approach with a simple architecture designed for real-time operation, a self-supervised data training method, and the capability to consistently integrate our map into a planning framework tailored for real-world robotics applications.
arXiv Detail & Related papers (2024-07-22T12:32:09Z)
- MapLocNet: Coarse-to-Fine Feature Registration for Visual Re-Localization in Navigation Maps [8.373285397029884]
Traditional localization approaches rely on high-definition (HD) maps, which consist of precisely annotated landmarks.
We propose a novel transformer-based neural re-localization method, inspired by image registration.
Our method significantly outperforms the current state-of-the-art OrienterNet on both the nuScenes and Argoverse datasets.
arXiv Detail & Related papers (2024-07-11T14:51:18Z)
- Accurate Cooperative Localization Utilizing LiDAR-equipped Roadside Infrastructure for Autonomous Driving [2.0499240875882]
LiDAR now facilitates vehicle localization with centimeter-level accuracy.
These high-precision techniques often face reliability challenges in environments devoid of identifiable map features.
We propose a novel approach that utilizes roadside units (RSUs) with vehicle-to-infrastructure (V2I) communications to assist vehicle self-localization.
arXiv Detail & Related papers (2024-07-11T10:44:42Z)
- Monocular Localization with Semantics Map for Autonomous Vehicles [8.242967098897408]
We propose a novel visual semantic localization algorithm that employs stable semantic features instead of low-level texture features.
First, semantic maps are constructed offline by detecting semantic objects, such as ground markers, lane lines, and poles, using cameras or LiDAR sensors.
Online visual localization is performed through data association of semantic features and map objects.
arXiv Detail & Related papers (2024-06-06T08:12:38Z)
- A Survey on Visual Map Localization Using LiDARs and Cameras [0.0]
We define visual map localization as a two-stage process.
At the stage of place recognition, the initial position of the vehicle in the map is determined by comparing the visual sensor output with a set of geo-tagged map regions of interest.
At the stage of map metric localization, the vehicle is tracked while it moves across the map by continuously aligning the visual sensors' output with the current area of the map that is being traversed.
arXiv Detail & Related papers (2022-08-05T20:11:18Z)
- ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints [94.60414567852536]
Long-range navigation requires both planning and reasoning about local traversability.
We propose an approach that integrates learning and planning.
ViKiNG can leverage its image-based learned controller and goal-directed heuristic to navigate to goals up to 3 kilometers away.
arXiv Detail & Related papers (2022-02-23T02:14:23Z)
- Semantic Image Alignment for Vehicle Localization [111.59616433224662]
We present a novel approach to vehicle localization in dense semantic maps using semantic segmentation from a monocular camera.
In contrast to existing visual localization approaches, the system does not require additional keypoint features, handcrafted localization landmark extractors or expensive LiDAR sensors.
arXiv Detail & Related papers (2021-10-08T14:40:15Z)
- Localization of Autonomous Vehicles: Proof of Concept for A Computer Vision Approach [0.0]
This paper introduces a visual localization method for autonomous vehicles (AVs) that operates without any complicated hardware system, using only a single camera.
The proposed system is tested on the KITTI dataset and achieves an average accuracy of 2 meters in estimating the final location of the vehicle.
arXiv Detail & Related papers (2021-04-06T21:09:47Z)
- Deep Multi-Task Learning for Joint Localization, Perception, and Prediction [68.50217234419922]
This paper investigates the issues that arise in state-of-the-art autonomy stacks under localization error.
We design a system that jointly performs perception, prediction, and localization.
Our architecture reuses computation across these tasks and can thus correct localization errors efficiently.
arXiv Detail & Related papers (2021-01-17T17:20:31Z)
- Real-time Localization Using Radio Maps [59.17191114000146]
We present a simple yet effective method for localization based on pathloss.
In our approach, the user to be localized reports the received signal strength from a set of base stations with known locations.
arXiv Detail & Related papers (2020-06-09T16:51:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.