Pole-based Vehicle Localization with Vector Maps: A Camera-LiDAR Comparative Study
- URL: http://arxiv.org/abs/2412.09649v1
- Date: Wed, 11 Dec 2024 09:05:05 GMT
- Title: Pole-based Vehicle Localization with Vector Maps: A Camera-LiDAR Comparative Study
- Authors: Maxime Noizet, Philippe Xu, Philippe Bonnifait
- Abstract summary: In road environments, many common pieces of road furniture, such as traffic signs, traffic lights and street lights, take the form of poles.
This paper introduces a real-time method for camera-based pole detection using a lightweight neural network trained on automatically annotated images.
The results highlight the high accuracy of the vision-based approach in open road conditions.
- Score: 6.300346102366891
- Abstract: For autonomous navigation, accurate localization with respect to a map is needed. In urban environments, infrastructure such as buildings or bridges causes major difficulties for Global Navigation Satellite Systems (GNSS) and, despite advances in inertial navigation, it is necessary to support them with other sources of exteroceptive information. In road environments, many common pieces of road furniture, such as traffic signs, traffic lights and street lights, take the form of poles. By georeferencing these features in vector maps, they can be used within a localization filter that includes a detection pipeline and a data association method. Poles, having discriminative vertical structures, can be extracted from 3D geometric information using LiDAR sensors. Alternatively, deep neural networks can be employed to detect them from monocular cameras. The lack of depth information induces challenges in associating camera detections with map features. Yet, multi-camera integration provides a cost-efficient solution. This paper quantitatively evaluates the efficacy of these approaches in terms of localization. It introduces a real-time method for camera-based pole detection using a lightweight neural network trained on automatically annotated images. The proposed methods' efficiency is assessed on a challenging sequence with a vector map. The results highlight the high accuracy of the vision-based approach in open road conditions.
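The abstract describes a localization filter fed by pole detections and a data association step against georeferenced map poles. As a minimal, hedged illustration of that association step (not the authors' implementation; the function name, 2D pole representation and Euclidean gate are assumptions), detections expressed in the vehicle frame can be transformed with the current pose estimate and matched to the nearest map pole:

```python
import numpy as np

def associate_poles(detections_vehicle, map_poles, pose, gate=1.5):
    """Associate detected poles with georeferenced map poles.

    detections_vehicle : (N, 2) pole positions in the vehicle frame [m]
    map_poles          : (M, 2) pole positions in the map frame [m]
    pose               : (x, y, yaw) current pose estimate in the map frame
    gate               : maximum accepted distance [m] (illustrative value)
    Returns a list of (detection_index, map_index) pairs.
    """
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    # Express the detections in the map frame using the predicted pose.
    detections_map = detections_vehicle @ R.T + np.array([x, y])

    pairs = []
    for i, d in enumerate(detections_map):
        dists = np.linalg.norm(map_poles - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < gate:  # simple Euclidean gating
            pairs.append((i, j))
    return pairs
```

In practice, a Mahalanobis gate based on the pose covariance and ambiguity checks would typically replace this simple nearest-neighbour test, and the camera-only case would associate bearings rather than 2D positions since depth is unavailable.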
Related papers
- Tightly-Coupled, Speed-aided Monocular Visual-Inertial Localization in Topological Map [0.7373617024876725]
This paper proposes a novel algorithm for vehicle speed-aided monocular visual-inertial localization using a topological map.
The proposed system aims to address the limitations of existing methods that rely heavily on expensive sensors like GPS and LiDAR.
arXiv Detail & Related papers (2024-11-08T11:55:27Z)
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- Map-aided annotation for pole base detection [0.0]
In this paper, a 2D HD map is used to automatically annotate pole-like features in images.
In the absence of height information, the map features are represented as pole bases at the ground level.
We show how an object detector can be trained to detect a pole base.
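As a rough sketch of this map-aided annotation idea (assuming a standard pinhole model with the camera z-axis pointing forward, a known camera pose, and ignoring occlusions and map or pose errors; all names are hypothetical), map pole bases can be projected into each image and the in-view projections used to seed 2D labels:

```python
import numpy as np

def project_pole_bases(pole_bases_world, T_cam_world, K, img_w, img_h):
    """Project ground-level pole bases into an image to seed annotations.

    pole_bases_world : (N, 3) pole base positions in the world frame [m]
    T_cam_world      : (4, 4) rigid transform from world to camera frame
    K                : (3, 3) pinhole intrinsic matrix
    Returns (M, 2) pixel coordinates of bases that are in front of the
    camera and inside the image bounds.
    """
    pts_h = np.hstack([pole_bases_world, np.ones((len(pole_bases_world), 1))])
    pts_cam = (T_cam_world @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.5              # drop points behind the camera
    uv_h = (K @ pts_cam[in_front].T).T
    uv = uv_h[:, :2] / uv_h[:, 2:3]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < img_w) & \
             (uv[:, 1] >= 0) & (uv[:, 1] < img_h)
    return uv[inside]
```

Turning the projected points into training labels (for instance fixed-size boxes around each base) is a further assumption of this sketch, not a detail taken from the paper.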
arXiv Detail & Related papers (2024-03-04T09:23:11Z)
- Pixel to Elevation: Learning to Predict Elevation Maps at Long Range using Images for Autonomous Offroad Navigation [10.898724668444125]
We present a learning-based approach capable of predicting terrain elevation maps at long-range using only onboard egocentric images in real-time.
We experimentally validate the applicability of our proposed approach for autonomous offroad robotic navigation in complex and unstructured terrain.
arXiv Detail & Related papers (2024-01-30T22:37:24Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- Language-Guided 3D Object Detection in Point Cloud for Autonomous Driving [91.91552963872596]
We propose a new multi-modal visual grounding task, termed LiDAR Grounding.
It jointly learns the LiDAR-based object detector with the language features and predicts the targeted region directly from the detector.
Our work offers a deeper insight into the LiDAR-based grounding task and we expect it presents a promising direction for the autonomous driving community.
arXiv Detail & Related papers (2023-05-25T06:22:10Z)
- Neural Implicit Dense Semantic SLAM [83.04331351572277]
We propose a novel RGBD vSLAM algorithm that learns a memory-efficient, dense 3D geometry, and semantic segmentation of an indoor scene in an online manner.
Our pipeline combines classical 3D vision-based tracking and loop closing with neural fields-based mapping.
Our proposed algorithm can greatly enhance scene perception and assist with a range of robot control problems.
arXiv Detail & Related papers (2023-04-27T23:03:52Z)
- UNav: An Infrastructure-Independent Vision-Based Navigation System for People with Blindness and Low vision [4.128685217530067]
We propose a vision-based localization pipeline that provides navigation support for end-users with blindness and low vision.
Given a query image taken by an end-user on a mobile application, the pipeline leverages a visual place recognition (VPR) algorithm to find similar images in a reference image database.
A customized user interface projects a 3D reconstructed sparse map, built from a sequence of images, onto the corresponding a priori 2D floor plan.
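A minimal sketch of the retrieval step in such a VPR pipeline, assuming every image is summarised by a precomputed global descriptor (the descriptor type and all names are assumptions, not details from the paper):

```python
import numpy as np

def retrieve_similar_images(query_desc, reference_descs, top_k=5):
    """Rank reference images by cosine similarity of global descriptors."""
    q = query_desc / np.linalg.norm(query_desc)
    refs = reference_descs / np.linalg.norm(reference_descs, axis=1, keepdims=True)
    scores = refs @ q                     # cosine similarity per reference image
    return np.argsort(-scores)[:top_k]    # indices of the best-matching images
```

The top-ranked reference images would then serve as localization candidates before the sparse map is projected onto the 2D floor plan.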
arXiv Detail & Related papers (2022-09-22T22:21:37Z)
- Satellite Image Based Cross-view Localization for Autonomous Vehicle [59.72040418584396]
This paper shows that by using an off-the-shelf high-definition satellite image as a ready-to-use map, we are able to achieve cross-view vehicle localization with satisfactory accuracy.
Our method is validated on KITTI and Ford Multi-AV Seasonal datasets as ground view and Google Maps as the satellite view.
arXiv Detail & Related papers (2022-07-27T13:16:39Z)
- Semantic Image Alignment for Vehicle Localization [111.59616433224662]
We present a novel approach to vehicle localization in dense semantic maps using semantic segmentation from a monocular camera.
In contrast to existing visual localization approaches, the system does not require additional keypoint features, handcrafted localization landmark extractors or expensive LiDAR sensors.
arXiv Detail & Related papers (2021-10-08T14:40:15Z)
- Radar-based Automotive Localization using Landmarks in a Multimodal Sensor Graph-based Approach [0.0]
In this paper, we address the problem of localization with automotive-grade radars.
The system uses landmarks and odometry information as an abstraction layer.
A single, semantic landmark map is used and maintained for all sensors.
arXiv Detail & Related papers (2021-04-29T07:35:20Z)