Characterization of Multiple 3D LiDARs for Localization and Mapping
using Normal Distributions Transform
- URL: http://arxiv.org/abs/2004.01374v1
- Date: Fri, 3 Apr 2020 05:05:36 GMT
- Title: Characterization of Multiple 3D LiDARs for Localization and Mapping
using Normal Distributions Transform
- Authors: Alexander Carballo, Abraham Monrroy, David Wong, Patiphon Narksri,
Jacob Lambert, Yuki Kitsukawa, Eijiro Takeuchi, Shinpei Kato, and Kazuya
Takeda
- Abstract summary: We present a detailed comparison of ten different 3D LiDAR sensors, covering a range of manufacturers, models, and laser configurations, for the tasks of mapping and vehicle localization.
Data used in this study is a subset of our LiDAR Benchmarking and Reference (LIBRE) dataset, captured independently from each sensor, from a vehicle driven on public urban roads multiple times, at different times of the day.
We analyze the performance and characteristics of each LiDAR for the tasks of (1) 3D mapping, including an assessment of map quality based on mean map entropy, and (2) 6-DOF localization using a ground truth reference map.
- Score: 54.46473014276162
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we present a detailed comparison of ten different 3D LiDAR
sensors, covering a range of manufacturers, models, and laser configurations,
for the tasks of mapping and vehicle localization, using as common reference
the Normal Distributions Transform (NDT) algorithm implemented in the
self-driving open source platform Autoware. LiDAR data used in this study is a
subset of our LiDAR Benchmarking and Reference (LIBRE) dataset, captured
independently from each sensor, from a vehicle driven on public urban roads
multiple times, at different times of the day. In this study, we analyze the
performance and characteristics of each LiDAR for the tasks of (1) 3D mapping
including an assessment of map quality based on mean map entropy, and (2) 6-DOF
localization using a ground truth reference map.
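As background for the two techniques named in the abstract: map quality is scored with mean map entropy (MME), and mapping/localization rely on NDT scan matching. Below is the standard MME formulation from the point-cloud registration literature, which this line of work builds on; the paper's exact choice of search radius is not reproduced here.

```latex
% Mean map entropy (MME): lower values indicate a crisper, better
% registered map. \Sigma(p_k) is the sample covariance of the map
% points within a search radius r around point p_k; N is the number
% of map points.
h(p_k) = \frac{1}{2} \ln \bigl| 2 \pi e \, \Sigma(p_k) \bigr|,
\qquad
H_{\mathrm{MME}} = \frac{1}{N} \sum_{k=1}^{N} h(p_k)
```

For NDT, here is a minimal C++ sketch of scan-to-map alignment using pcl::NormalDistributionsTransform, the PCL class that Autoware's NDT matching builds on. The file names, initial guess, and parameter values are illustrative assumptions, not the settings used in the paper.

```cpp
// Minimal NDT scan-to-map alignment sketch (PCL). Illustrative only:
// "map.pcd"/"scan.pcd" and all parameter values are assumptions.
#include <pcl/filters/voxel_grid.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/registration/ndt.h>
#include <iostream>

int main() {
  using Cloud = pcl::PointCloud<pcl::PointXYZ>;
  Cloud::Ptr map(new Cloud), scan(new Cloud);
  if (pcl::io::loadPCDFile("map.pcd", *map) < 0 ||
      pcl::io::loadPCDFile("scan.pcd", *scan) < 0)
    return 1;

  // Downsample the input scan; NDT voxelizes the target map internally.
  Cloud::Ptr filtered(new Cloud);
  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setLeafSize(0.5f, 0.5f, 0.5f);   // leaf size [m], illustrative
  voxel.setInputCloud(scan);
  voxel.filter(*filtered);

  pcl::NormalDistributionsTransform<pcl::PointXYZ, pcl::PointXYZ> ndt;
  ndt.setResolution(1.0f);               // NDT grid cell size [m]
  ndt.setStepSize(0.1);                  // More-Thuente line search step
  ndt.setTransformationEpsilon(0.01);    // convergence threshold
  ndt.setMaximumIterations(30);
  ndt.setInputSource(filtered);
  ndt.setInputTarget(map);

  // Initial guess, e.g. from the previous pose or GNSS; identity here.
  Eigen::Matrix4f init = Eigen::Matrix4f::Identity();
  Cloud aligned;
  ndt.align(aligned, init);

  std::cout << "converged: " << ndt.hasConverged()
            << ", fitness: " << ndt.getFitnessScore() << "\n"
            << ndt.getFinalTransformation() << std::endl;
  return 0;
}
```

In the paper's localization task, an alignment of this kind is run per scan against the ground truth reference map, and the 6-DOF pose is read off the final transformation; in practice the previous pose typically seeds the initial guess.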
Related papers
- The Oxford Spires Dataset: Benchmarking Large-Scale LiDAR-Visual Localisation, Reconstruction and Radiance Field Methods [10.265865092323041]
This paper introduces a large-scale multi-modal dataset captured in and around well-known landmarks in Oxford.
We also establish benchmarks for tasks involving localisation, reconstruction, and novel-view synthesis.
Our dataset and benchmarks are intended to facilitate better integration of radiance field methods and SLAM systems.
arXiv Detail & Related papers (2024-11-15T19:43:24Z)
- Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving [58.16024314532443]
We introduce LaserMix++, a framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to assist data-efficient learning.
Results demonstrate that LaserMix++ outperforms fully supervised alternatives, achieving comparable accuracy with five times fewer annotations.
This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
arXiv Detail & Related papers (2024-05-08T17:59:53Z)
- ParisLuco3D: A high-quality target dataset for domain generalization of LiDAR perception [4.268591926288843]
This paper provides a novel dataset, ParisLuco3D, specifically designed for cross-domain evaluation.
Online benchmarks for LiDAR semantic segmentation, LiDAR object detection, and LiDAR tracking are provided.
arXiv Detail & Related papers (2023-10-25T10:45:38Z)
- RaLF: Flow-based Global and Metric Radar Localization in LiDAR Maps [8.625083692154414]
We propose RaLF, a novel deep neural network-based approach for localizing radar scans in a LiDAR map of the environment.
RaLF is composed of radar and LiDAR feature encoders, a place recognition head that generates global descriptors, and a metric localization head that predicts the 3-DoF transformation between the radar scan and the map.
We extensively evaluate our approach on multiple real-world driving datasets and show that RaLF achieves state-of-the-art performance for both place recognition and metric localization.
arXiv Detail & Related papers (2023-09-18T15:37:01Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- MV-JAR: Masked Voxel Jigsaw and Reconstruction for LiDAR-Based Self-Supervised Pre-Training [58.07391711548269]
We propose the Masked Voxel Jigsaw and Reconstruction (MV-JAR) method for LiDAR-based self-supervised pre-training.
arXiv Detail & Related papers (2023-03-23T17:59:02Z)
- LiDAR-CS Dataset: LiDAR Point Cloud Dataset with Cross-Sensors for 3D Object Detection [36.77084564823707]
Deep learning methods heavily rely on annotated data and often face domain generalization issues.
The LiDAR-CS dataset is the first dataset to address sensor-related gaps in the domain of 3D object detection in real traffic.
arXiv Detail & Related papers (2023-01-29T19:10:35Z)
- SeqOT: A Spatial-Temporal Transformer Network for Place Recognition Using Sequential LiDAR Data [9.32516766412743]
We propose a transformer-based network named SeqOT to exploit the temporal and spatial information provided by sequential range images.
We evaluate our approach on four datasets collected with different types of LiDAR sensors in different environments.
Our method operates online faster than the frame rate of the sensor.
arXiv Detail & Related papers (2022-09-16T14:08:11Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- LIBRE: The Multiple 3D LiDAR Dataset [54.25307983677663]
We present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 10 different LiDAR sensors.
LIBRE will provide the research community with a means for a fair comparison of currently available LiDARs.
It will also facilitate the improvement of existing self-driving vehicles and robotics-related software.
arXiv Detail & Related papers (2020-03-13T06:17:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.