PixSet : An Opportunity for 3D Computer Vision to Go Beyond Point Clouds
With a Full-Waveform LiDAR Dataset
- URL: http://arxiv.org/abs/2102.12010v1
- Date: Wed, 24 Feb 2021 01:13:17 GMT
- Title: PixSet : An Opportunity for 3D Computer Vision to Go Beyond Point Clouds
With a Full-Waveform LiDAR Dataset
- Authors: Jean-Luc Déziel, Pierre Merriaux, Francis Tremblay, Dave Lessard,
Dominique Plourde, Julien Stanguennec, Pierre Goulet and Pierre Olivier
- Abstract summary: Leddar PixSet is a new publicly available dataset (dataset.leddartech.com) for autonomous driving research and development.
The PixSet dataset contains approximately 29k frames from 97 sequences recorded in high-density urban areas.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Leddar PixSet is a new publicly available dataset (dataset.leddartech.com)
for autonomous driving research and development. One key novelty of this
dataset is the presence of full-waveform data from the Leddar Pixell sensor, a
solid-state flash LiDAR. Full-waveform data has been shown to improve the
performance of perception algorithms in airborne applications but is yet to be
demonstrated for terrestrial applications such as autonomous driving. The
PixSet dataset contains approximately 29k frames from 97 sequences recorded in
high-density urban areas, using a set of various sensors (cameras, LiDARs,
radar, IMU, etc.). Each frame has been manually annotated with 3D bounding
boxes.
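The distinction the abstract draws between full-waveform data and conventional point clouds can be illustrated with a minimal sketch: a point-cloud sensor reports only the range of the strongest echo per channel, whereas the full waveform retains the entire return signal, from which that range can be recovered by peak detection. This is a generic illustration with hypothetical names and constants, not the PixSet data format or API.

```python
# Hypothetical sketch of collapsing one full-waveform LiDAR trace to a
# single range measurement (what a point cloud stores). Not PixSet's API.
import numpy as np

C = 299792458.0  # speed of light in m/s

def waveform_to_range(waveform, sample_period_s):
    """Locate the strongest echo in a sampled return signal and convert
    its time of flight to a one-way range in meters."""
    peak_idx = int(np.argmax(waveform))          # simple peak detection
    time_of_flight = peak_idx * sample_period_s  # round-trip travel time
    return 0.5 * C * time_of_flight              # halve for one-way range

# Synthetic trace: a single Gaussian echo centered at sample 40
t = np.arange(128)
trace = np.exp(-0.5 * ((t - 40) / 3.0) ** 2)
rng = waveform_to_range(trace, sample_period_s=1e-9)  # 1 ns sampling
```

A point cloud keeps only `rng` (plus angles and intensity); full-waveform processing can instead exploit the whole `trace`, e.g. to separate multiple overlapping echoes that a single peak discards.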
Related papers
- RoboSense: Large-scale Dataset and Benchmark for Multi-sensor Low-speed Autonomous Driving
In this paper, we construct a multimodal data collection platform based on 3 main types of sensors (Camera, LiDAR and Fisheye).
A large-scale multi-sensor dataset is built, named RoboSense, to facilitate near-field scene understanding.
RoboSense contains more than 133K synchronized data frames with 1.4M 3D bounding boxes and IDs in the full 360° view, forming 216K trajectories across 7.6K temporal sequences.
arXiv Detail & Related papers (2024-08-28T03:17:40Z)
- SemanticSpray++: A Multimodal Dataset for Autonomous Driving in Wet Surface Conditions
The SemanticSpray++ dataset provides labels for camera, LiDAR, and radar data of highway-like scenarios in wet surface conditions.
By labeling all three sensor modalities, the dataset offers a comprehensive test bed for analyzing the performance of different perception methods.
arXiv Detail & Related papers (2024-06-14T11:46:48Z)
- A9 Intersection Dataset: All You Need for Urban 3D Camera-LiDAR Roadside Perception
A9 Intersection dataset consists of labeled LiDAR point clouds and synchronized camera images.
Our dataset consists of 4.8k images and point clouds with more than 57.4k manually labeled 3D boxes.
arXiv Detail & Related papers (2023-06-15T16:39:51Z)
- Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting
Argoverse 2 (AV2) is a collection of three datasets for perception and forecasting research in the self-driving domain.
The Lidar dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose.
The Motion Forecasting dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene.
arXiv Detail & Related papers (2023-01-02T00:36:22Z)
- aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception
This dataset consists of 176 scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view.
The collected data was captured in highway, urban, and suburban areas during daytime, night, and rain.
We trained unimodal and multimodal baseline models for 3D object detection.
arXiv Detail & Related papers (2022-11-17T10:19:59Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- PandaSet: Advanced Sensor Suite Dataset for Autonomous Driving
PandaSet is the first dataset produced by a complete, high-precision autonomous vehicle sensor kit with a no-cost commercial license.
The dataset contains more than 100 scenes, each of which is 8 seconds long, and provides 28 types of labels for object classification and 37 types of labels for semantic segmentation.
arXiv Detail & Related papers (2021-12-23T14:52:12Z)
- One Million Scenes for Autonomous Driving: ONCE Dataset
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)
- PC-DAN: Point Cloud based Deep Affinity Network for 3D Multi-Object Tracking (Accepted as an extended abstract in JRDB-ACT Workshop at CVPR21)
A point cloud is a dense compilation of spatial data in 3D coordinates.
We propose a PointNet-based approach for 3D Multi-Object Tracking (MOT).
arXiv Detail & Related papers (2021-06-03T05:36:39Z)
- Cirrus: A Long-range Bi-pattern LiDAR Dataset
We introduce Cirrus, a new long-range bi-pattern LiDAR public dataset for autonomous driving tasks.
Our platform is equipped with a high-resolution video camera and a pair of LiDAR sensors with a 250-meter effective range.
In Cirrus, eight categories of objects are exhaustively annotated in the LiDAR point clouds for the entire effective range.
arXiv Detail & Related papers (2020-12-05T03:18:31Z)
- LIBRE: The Multiple 3D LiDAR Dataset
We present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 10 different LiDAR sensors.
LIBRE will contribute to the research community to provide a means for a fair comparison of currently available LiDARs.
It will also facilitate the improvement of existing self-driving vehicles and robotics-related software.
arXiv Detail & Related papers (2020-03-13T06:17:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed details) and is not responsible for any consequences of its use.