Online Pole Segmentation on Range Images for Long-term LiDAR
Localization in Urban Environments
- URL: http://arxiv.org/abs/2208.07364v1
- Date: Mon, 15 Aug 2022 17:58:08 GMT
- Title: Online Pole Segmentation on Range Images for Long-term LiDAR
Localization in Urban Environments
- Authors: Hao Dong, Xieyuanli Chen, Simo Särkkä, Cyrill Stachniss
- Abstract summary: We present a novel, accurate, and fast pole extraction approach based on geometric features that runs online.
Our method performs all computations directly on range images generated from 3D LiDAR scans.
We use the extracted poles as pseudo labels to train a deep neural network for online range image-based pole segmentation.
- Score: 32.34672033386747
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robust and accurate localization is a basic requirement for mobile autonomous
systems. Pole-like objects, such as traffic signs, poles, and lamps, are
frequently used landmarks for localization in urban environments due to their
local distinctiveness and long-term stability. In this paper, we present a
novel, accurate, and fast pole extraction approach based on geometric features
that runs online and has low computational demands. Our method performs all
computations directly on range images generated from 3D LiDAR scans, which
avoids processing 3D point clouds explicitly and enables fast pole extraction
for each scan. We further use the extracted poles as pseudo labels to train a
deep neural network for online range image-based pole segmentation. We test
both our geometric and learning-based pole extraction methods for localization
on different datasets with different LiDAR scanners, routes, and seasonal
changes. The experimental results show that our methods outperform other
state-of-the-art approaches. Moreover, boosted with pseudo pole labels
extracted from multiple datasets, our learning-based method can run across
different datasets and achieve even better localization results compared to our
geometry-based method. We released our pole datasets to the public for
evaluating the performance of pole extractors, as well as the implementation of
our approach.
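The range-image representation the abstract describes comes from a spherical projection of each LiDAR scan. A minimal sketch of that projection is below; the field-of-view and image size are illustrative assumptions modeled on a typical 64-beam scanner, not the paper's exact configuration:

```python
import numpy as np

def scan_to_range_image(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an (h, w) range image
    via spherical projection. FOV values (degrees) are assumed, not
    taken from the paper."""
    fov_up_r = np.radians(fov_up)
    fov_down_r = np.radians(fov_down)
    fov = fov_up_r - fov_down_r

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                  # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    u = 0.5 * (1.0 - yaw / np.pi) * w                       # column from azimuth
    v = (1.0 - (pitch - fov_down_r) / fov) * h              # row from elevation
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    img = np.full((h, w), -1.0, dtype=np.float32)           # -1 marks empty pixels
    order = np.argsort(-r)                                  # farthest first, so
    img[v[order], u[order]] = r[order]                      # nearer returns win
    return img
```

Operating on this 2D grid instead of the raw point cloud is what makes per-scan pole extraction fast: neighborhood queries become constant-time pixel lookups.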
Related papers
- The Oxford Spires Dataset: Benchmarking Large-Scale LiDAR-Visual Localisation, Reconstruction and Radiance Field Methods [10.265865092323041]
This paper introduces a large-scale multi-modal dataset captured in and around well-known landmarks in Oxford.
We also establish benchmarks for tasks involving localisation, reconstruction, and novel-view synthesis.
Our dataset and benchmarks are intended to facilitate better integration of radiance field methods and SLAM systems.
arXiv Detail & Related papers (2024-11-15T19:43:24Z)
- Map-aided annotation for pole base detection [0.0]
In this paper, a 2D HD map is used to automatically annotate pole-like features in images.
In the absence of height information, the map features are represented as pole bases at the ground level.
We show how an object detector can be trained to detect a pole base.
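Annotating pole bases from a 2D HD map amounts to projecting ground-level map points into the image. A hedged pinhole-camera sketch of that step, with illustrative intrinsics and extrinsics rather than any calibration from the paper:

```python
import numpy as np

def project_pole_base(pt_world, K, R, t):
    """Project a 3D pole-base point (ground level in world coordinates)
    into pixel coordinates with a pinhole camera model. K, R, t are
    hypothetical intrinsics/extrinsics for illustration only."""
    p_cam = R @ np.asarray(pt_world, dtype=np.float64) + t
    if p_cam[2] <= 0:
        return None  # point is behind the camera
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]
```

Each projected map point then becomes an automatic training annotation for the pole-base detector, with no manual labeling.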
arXiv Detail & Related papers (2024-03-04T09:23:11Z)
- CLIP-Guided Source-Free Object Detection in Aerial Images [17.26407623526735]
High-resolution aerial images often require substantial storage space and may not be readily accessible to the public.
We propose a novel Source-Free Object Detection (SFOD) method to address these challenges.
To alleviate the noisy labels in self-training, we utilize Contrastive Language-Image Pre-training (CLIP) to guide the generation of pseudo-labels.
By leveraging CLIP's zero-shot classification capability, we aggregate its scores with the original predicted bounding boxes, enabling us to obtain refined scores for the pseudo-labels.
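The score aggregation described above can be sketched as a simple fusion of detector confidences with per-box zero-shot class probabilities. The blend weight and threshold below are illustrative hyper-parameters, and the CLIP probabilities are assumed given, not computed:

```python
import numpy as np

def refine_pseudo_labels(det_scores, clip_probs, alpha=0.5, thresh=0.4):
    """Fuse detector confidences with (assumed) CLIP zero-shot class
    probabilities for each predicted box, keeping boxes whose fused
    score clears a threshold. alpha and thresh are illustrative."""
    det_scores = np.asarray(det_scores, dtype=np.float64)
    clip_probs = np.asarray(clip_probs, dtype=np.float64)
    fused = alpha * det_scores + (1.0 - alpha) * clip_probs  # convex blend
    keep = fused >= thresh
    return fused, keep
```

Boxes that survive the threshold become the refined pseudo-labels for the next round of self-training, suppressing detections the zero-shot classifier disagrees with.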
arXiv Detail & Related papers (2024-01-10T14:03:05Z)
- Improving Online Lane Graph Extraction by Object-Lane Clustering [106.71926896061686]
We propose an architecture and loss formulation to improve the accuracy of local lane graph estimates.
The proposed method learns to assign the objects to centerlines by considering the centerlines as cluster centers.
We show that our method can achieve significant performance improvements by using the outputs of existing 3D object detection methods.
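Treating centerlines as cluster centers can be illustrated with a nearest-polyline assignment. This greedy geometric heuristic stands in for the paper's learned assignment and is a sketch, not the actual method:

```python
import numpy as np

def assign_objects_to_centerlines(objects, centerlines):
    """Assign each 2D object position to its nearest centerline, where
    each centerline is a polyline (list of 2D points) acting as a
    cluster center. A nearest-point heuristic for illustration."""
    objects = np.asarray(objects, dtype=np.float64)
    assignments = []
    for obj in objects:
        best_lane, best_d = -1, np.inf
        for lane_id, line in enumerate(centerlines):
            # distance from the object to the closest vertex of this polyline
            d = np.min(np.linalg.norm(np.asarray(line, dtype=np.float64) - obj, axis=1))
            if d < best_d:
                best_lane, best_d = lane_id, d
        assignments.append(best_lane)
    return assignments
```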
arXiv Detail & Related papers (2023-07-20T15:21:28Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- SeqOT: A Spatial-Temporal Transformer Network for Place Recognition Using Sequential LiDAR Data [9.32516766412743]
We propose a transformer-based network named SeqOT to exploit the temporal and spatial information provided by sequential range images.
We evaluate our approach on four datasets collected with different types of LiDAR sensors in different environments.
Our method operates online faster than the frame rate of the sensor.
arXiv Detail & Related papers (2022-09-16T14:08:11Z)
- AutoGeoLabel: Automated Label Generation for Geospatial Machine Learning [69.47585818994959]
We evaluate a big data processing pipeline to auto-generate labels for remote sensing data.
We utilize the big geo-data platform IBM PAIRS to dynamically generate such labels in dense urban areas.
arXiv Detail & Related papers (2022-01-31T20:02:22Z)
- GANav: Group-wise Attention Network for Classifying Navigable Regions in Unstructured Outdoor Environments [54.21959527308051]
We present a new learning-based method for identifying safe and navigable regions in off-road terrains and unstructured environments from RGB images.
Our approach consists of classifying groups of terrain classes based on their navigability levels using coarse-grained semantic segmentation.
We show through extensive evaluations on the RUGD and RELLIS-3D datasets that our learning algorithm improves the accuracy of visual perception in off-road terrains for navigation.
arXiv Detail & Related papers (2021-03-07T02:16:24Z)
- Campus3D: A Photogrammetry Point Cloud Benchmark for Hierarchical Understanding of Outdoor Scene [76.4183572058063]
We present a richly-annotated 3D point cloud dataset for multiple outdoor scene understanding tasks.
The dataset has been point-wisely annotated with both hierarchical and instance-based labels.
We formulate a hierarchical learning problem for 3D point cloud segmentation and propose a measurement evaluating consistency across various hierarchies.
arXiv Detail & Related papers (2020-08-11T19:10:32Z)
- Robust Image Retrieval-based Visual Localization using Kapture [10.249293519246478]
We present a versatile pipeline for visual localization that facilitates the use of different local and global features.
We evaluate our methods on eight public datasets where they rank top on all and first on many of them.
To foster future research, we release code, models, and all datasets used in this paper in the kapture format open source under a permissive BSD license.
arXiv Detail & Related papers (2020-07-27T21:10:35Z)
- Reconfigurable Voxels: A New Representation for LiDAR-Based Point Clouds [76.52448276587707]
We propose Reconfigurable Voxels, a new approach to constructing representations from 3D point clouds.
Specifically, we devise a biased random walk scheme, which adaptively covers each neighborhood with a fixed number of voxels.
We find that this approach effectively improves the stability of voxel features, especially for sparse regions.
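The biased random walk described above can be sketched as a walk on the integer voxel grid that prefers stepping toward occupied neighbors until a fixed number of voxels is collected. This is a toy illustration of the idea, not the paper's exact scheme; the bias value and walk rules are assumptions:

```python
import numpy as np

def biased_random_walk_cover(center, occupied, k=8, bias=0.7, seed=0):
    """Collect k distinct voxels around `center` via a biased random
    walk: with probability `bias` step toward an occupied neighbor
    (if any), otherwise step uniformly. `occupied` is a set of voxel
    index tuples. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    visited = [tuple(center)]
    cur = tuple(center)
    while len(visited) < k:
        occ = [s for s in steps if tuple(np.add(cur, s)) in occupied]
        if occ and rng.random() < bias:
            step = occ[rng.integers(len(occ))]   # biased move toward occupancy
        else:
            step = steps[rng.integers(len(steps))]
        cur = tuple(np.add(cur, step))
        if cur not in visited:
            visited.append(cur)
    return visited
```

Because every neighborhood yields the same number of voxels regardless of local density, the resulting features stay stable in sparse regions.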
arXiv Detail & Related papers (2020-04-06T15:07:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.