ParisLuco3D: A high-quality target dataset for domain generalization of LiDAR perception
- URL: http://arxiv.org/abs/2310.16542v3
- Date: Mon, 3 Jun 2024 18:08:16 GMT
- Title: ParisLuco3D: A high-quality target dataset for domain generalization of LiDAR perception
- Authors: Jules Sanchez, Louis Soum-Fontez, Jean-Emmanuel Deschaud, Francois Goulette
- Abstract summary: This paper provides a novel dataset, ParisLuco3D, specifically designed for cross-domain evaluation.
Online benchmarks for LiDAR semantic segmentation, LiDAR object detection, and LiDAR tracking are provided.
- Score: 4.268591926288843
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: LiDAR is an essential sensor for autonomous driving, as it collects precise geometric information about a scene. As the performance of various LiDAR perception tasks has improved, generalization to new environments and new sensors has emerged as a way to test these optimized models in real-world conditions. This paper provides a novel dataset, ParisLuco3D, specifically designed for cross-domain evaluation, making it easier to evaluate performance when training on various source datasets. Alongside the dataset, online benchmarks for LiDAR semantic segmentation, LiDAR object detection, and LiDAR tracking are provided to ensure a fair comparison across methods. The ParisLuco3D dataset, evaluation scripts, and links to the benchmarks can be found at the following website: https://npm3d.fr/parisluco3d
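Online benchmarks for semantic segmentation are typically scored with per-class IoU and their mean (mIoU). The following is a minimal sketch of such an evaluation, assuming per-point integer label arrays and a caller-supplied class count; it is illustrative, not the official ParisLuco3D evaluation script:

```python
import numpy as np

def confusion_matrix(pred, gt, num_classes):
    """Accumulate a num_classes x num_classes confusion matrix
    from per-point predicted and ground-truth integer labels."""
    mask = (gt >= 0) & (gt < num_classes)  # drop unlabeled/ignored points
    idx = num_classes * gt[mask] + pred[mask]
    return np.bincount(idx, minlength=num_classes**2).reshape(num_classes, num_classes)

def miou(conf):
    """Per-class IoU = TP / (TP + FP + FN); mIoU is their mean."""
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1)  # guard empty classes
    return iou, iou.mean()
```

For cross-domain evaluation, the same confusion matrix is simply accumulated over every scan of the target dataset before computing mIoU.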
Related papers
- TLD-READY: Traffic Light Detection -- Relevance Estimation and Deployment Analysis [9.458657306918859]
Effective traffic light detection is a critical component of the perception stack in autonomous vehicles.
This work introduces a novel deep-learning detection system while addressing the challenges of previous work.
We propose a relevance estimation system that innovatively uses directional arrow markings on the road, eliminating the need for prior map creation.
arXiv Detail & Related papers (2024-09-11T14:12:44Z)
- Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving [58.16024314532443]
We introduce LaserMix++, a framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to assist data-efficient learning.
Results demonstrate that LaserMix++ outperforms fully supervised alternatives, achieving comparable accuracy with five times fewer annotations.
This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
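LaserMix-style mixing partitions two scans by laser inclination angle and swaps alternating angular bands, so the mixed scan preserves the spatial prior of LiDAR beams. A rough sketch of that core idea follows (simplified; not the authors' code, and LaserMix++'s LiDAR-camera branch is omitted):

```python
import numpy as np

def lasermix(points_a, points_b, num_bands=6):
    """Swap alternating inclination bands between two (N, 4) xyz+intensity scans."""
    def band_ids(pts):
        inclination = np.arctan2(pts[:, 2], np.linalg.norm(pts[:, :2], axis=1))
        edges = np.linspace(inclination.min(), inclination.max(), num_bands + 1)
        return np.clip(np.digitize(inclination, edges) - 1, 0, num_bands - 1)
    ba, bb = band_ids(points_a), band_ids(points_b)
    # even bands come from scan A, odd bands from scan B;
    # labels, if present, are selected with the same masks
    return np.concatenate([points_a[ba % 2 == 0], points_b[bb % 2 == 1]])
```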
arXiv Detail & Related papers (2024-05-08T17:59:53Z)
- Is Your LiDAR Placement Optimized for 3D Scene Understanding? [8.233185931617122]
Prevailing driving datasets predominantly utilize single-LiDAR systems and collect data devoid of adverse conditions.
We propose Place3D, a full-cycle pipeline that encompasses LiDAR placement optimization, data generation, and downstream evaluations.
We showcase exceptional results in both LiDAR semantic segmentation and 3D object detection tasks, under diverse weather and sensor failure conditions.
arXiv Detail & Related papers (2024-03-25T17:59:58Z)
- Multimodal Dataset from Harsh Sub-Terranean Environment with Aerosol Particles for Frontier Exploration [55.41644538483948]
This paper introduces a multimodal dataset from the harsh and unstructured underground environment with aerosol particles.
It contains synchronized raw data measurements from all onboard sensors in Robot Operating System (ROS) format.
The paper focuses not only on capturing temporal and spatial data diversity but also on presenting the impact of harsh conditions on the captured data.
arXiv Detail & Related papers (2023-04-27T20:21:18Z)
- Instant Domain Augmentation for LiDAR Semantic Segmentation [10.250046817380458]
This paper presents a fast and flexible LiDAR augmentation method for the semantic segmentation task, called 'LiDomAug'.
Our on-demand augmentation module runs at 330 FPS, so it can be seamlessly integrated into the data loader in the learning framework.
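Because the augmentation is generated on demand, it can live directly in the dataset's `__getitem__` rather than in an offline preprocessing pass. A generic sketch of that integration pattern (hypothetical names; not the LiDomAug code):

```python
from torch.utils.data import Dataset

class AugmentedLidarDataset(Dataset):
    """Wraps a base dataset and applies an on-demand augmentation
    (e.g. a random sensor-configuration resampling) on every fetch."""
    def __init__(self, base, augment):
        self.base, self.augment = base, augment

    def __len__(self):
        return len(self.base)

    def __getitem__(self, i):
        points, labels = self.base[i]
        return self.augment(points, labels)  # fresh random augmentation each call
```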
arXiv Detail & Related papers (2023-03-25T06:59:12Z)
- MV-JAR: Masked Voxel Jigsaw and Reconstruction for LiDAR-Based Self-Supervised Pre-Training [58.07391711548269]
We propose the Masked Voxel Jigsaw and Reconstruction (MV-JAR) method for LiDAR-based self-supervised pre-training.
arXiv Detail & Related papers (2023-03-23T17:59:02Z)
- LiDAR-CS Dataset: LiDAR Point Cloud Dataset with Cross-Sensors for 3D Object Detection [36.77084564823707]
Deep learning methods heavily rely on annotated data and often face domain generalization issues.
The LiDAR-CS dataset is the first to address the sensor-related gaps in the domain of 3D object detection in real traffic.
arXiv Detail & Related papers (2023-01-29T19:10:35Z)
- Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
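The training scheme is a form of cross-modality knowledge distillation: a frozen LiDAR-image teacher supervises the LiDAR-only student on intermediate features and responses. A generic sketch of such a distillation loss (illustrative only; not the paper's exact objective):

```python
import torch.nn.functional as F

def distill_loss(student_feat, teacher_feat, student_logits, teacher_logits, t=2.0):
    """Feature imitation (MSE) plus response distillation (softened targets)."""
    feat_loss = F.mse_loss(student_feat, teacher_feat.detach())
    resp_loss = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits.detach() / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)  # rescale gradients for temperature t
    return feat_loss + resp_loss
```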
arXiv Detail & Related papers (2022-06-30T01:44:30Z)
- Learning Moving-Object Tracking with FMCW LiDAR [53.05551269151209]
We propose a learning-based moving-object tracking method utilizing our newly developed LiDAR sensor, Frequency Modulated Continuous Wave (FMCW) LiDAR.
Given the labels, we propose a contrastive learning framework, which pulls together the features from the same instance in embedding space and pushes apart the features from different instances to improve the tracking quality.
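This pull/push pattern is the standard contrastive setup: features of the same instance across frames are positives, all other pairs are negatives. An InfoNCE-style sketch of that idea, assuming one feature vector per instance per frame (not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def instance_contrastive_loss(feats_t, feats_t1, temperature=0.1):
    """feats_t, feats_t1: (N, D) features of the same N instances in two
    consecutive frames; row i of each tensor is the same object."""
    a = F.normalize(feats_t, dim=1)
    b = F.normalize(feats_t1, dim=1)
    logits = a @ b.t() / temperature          # (N, N) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)   # diagonal entries are positives
```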
arXiv Detail & Related papers (2022-03-02T09:11:36Z)
- Improving Perception via Sensor Placement: Designing Multi-LiDAR Systems for Autonomous Vehicles [16.45799795374353]
We propose an easy-to-compute information-theoretic surrogate cost metric based on Probabilistic Occupancy Grids (POG) to optimize LiDAR placement for maximal sensing.
Our results confirm that sensor placement is an important factor in 3D point cloud-based object detection and can lead to performance variations of 10% to 20% on state-of-the-art perception algorithms.
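A Probabilistic Occupancy Grid assigns each cell an occupancy probability, so total grid entropy is a natural surrogate for how much a candidate sensor configuration can sense. A minimal sketch of that quantity (an assumed form; the paper's exact surrogate may differ):

```python
import numpy as np

def grid_entropy(p, eps=1e-9):
    """Sum of per-cell binary entropies over an occupancy-probability grid p."""
    p = np.clip(p, eps, 1.0 - eps)  # avoid log(0) at fully known cells
    return float(np.sum(-p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)))
```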
arXiv Detail & Related papers (2021-05-02T01:52:18Z)
- Characterization of Multiple 3D LiDARs for Localization and Mapping using Normal Distributions Transform [54.46473014276162]
We present a detailed comparison of ten different 3D LiDAR sensors, covering a range of manufacturers, models, and laser configurations, for the tasks of mapping and vehicle localization.
Data used in this study is a subset of our LiDAR Benchmarking and Reference (LIBRE) dataset, captured independently from each sensor, from a vehicle driven on public urban roads multiple times, at different times of the day.
We analyze the performance and characteristics of each LiDAR for the tasks of (1) 3D mapping, including an assessment of map quality based on mean map entropy, and (2) 6-DOF localization using a ground truth reference map.
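Mean map entropy scores a map by the average differential entropy of local Gaussian fits around each point, so sharper (less noisy) maps yield lower values: h(p) = 1/2 ln|2πe Σ(p)|, averaged over all points. A sketch of that usual formulation, assuming k-nearest-neighbor covariances with an illustrative neighborhood size:

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_map_entropy(points, k=20):
    """Average 0.5 * ln((2*pi*e)^3 * |cov|) over local k-NN neighborhoods
    of an (N, 3) point cloud."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    entropies = []
    for nbrs in idx:
        cov = np.cov(points[nbrs].T)  # 3x3 covariance of the neighborhood
        det = np.linalg.det(2.0 * np.pi * np.e * cov)
        if det > 0:  # skip degenerate (e.g. perfectly planar) neighborhoods
            entropies.append(0.5 * np.log(det))
    return float(np.mean(entropies))
```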
arXiv Detail & Related papers (2020-04-03T05:05:36Z)