PandaSet: Advanced Sensor Suite Dataset for Autonomous Driving
- URL: http://arxiv.org/abs/2112.12610v1
- Date: Thu, 23 Dec 2021 14:52:12 GMT
- Title: PandaSet: Advanced Sensor Suite Dataset for Autonomous Driving
- Authors: Pengchuan Xiao, Zhenlei Shao, Steven Hao, Zishuo Zhang, Xiaolin Chai,
Judy Jiao, Zesong Li, Jian Wu, Kai Sun, Kun Jiang, Yunlong Wang, Diange Yang
- Abstract summary: PandaSet is the first dataset produced by a complete, high-precision autonomous vehicle sensor kit with a no-cost commercial license.
The dataset contains more than 100 scenes, each of which is 8 seconds long, and provides 28 types of labels for object classification and 37 types of labels for semantic segmentation.
- Score: 7.331883729089782
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The accelerating development of autonomous driving technology has placed
greater demands on obtaining large amounts of high-quality data.
Representative, labeled, real-world data serves as the fuel for training deep
learning networks, critical for improving self-driving perception algorithms.
In this paper, we introduce PandaSet, the first dataset produced by a complete,
high-precision autonomous vehicle sensor kit with a no-cost commercial license.
The dataset was collected using one 360° mechanical spinning LiDAR, one
forward-facing, long-range LiDAR, and 6 cameras. The dataset contains more than
100 scenes, each of which is 8 seconds long, and provides 28 types of labels
for object classification and 37 types of labels for semantic segmentation. We
provide baselines for LiDAR-only 3D object detection, LiDAR-camera fusion 3D
object detection and LiDAR point cloud segmentation. For more details about
PandaSet and the development kit, see https://scale.com/open-datasets/pandaset.
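The development kit exposes sequences, point clouds, images, and both label types through a small Python API. Below is a minimal loading sketch; the class and attribute names follow the publicly documented pandaset-devkit but are quoted from memory and may differ across devkit versions, so treat them as assumptions rather than a specification.

```python
# Minimal PandaSet loading sketch (assumed pandaset-devkit API; verify
# names against https://scale.com/open-datasets/pandaset before use).
from pandaset import DataSet

dataset = DataSet("/path/to/pandaset")   # root folder of the extracted download
seq = dataset["002"]                     # one ~8-second scene by sequence ID
seq.load()                               # load lidar, cameras, cuboids, semseg

points = seq.lidar[0]                    # frame 0 point cloud (x, y, z, i, t, d)
cuboids = seq.cuboids[0]                 # 3D boxes over the 28 object classes
semseg = seq.semseg[0]                   # per-point labels over the 37 classes
front = seq.camera["front_camera"][0]    # image from one of the 6 cameras
```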
Related papers
- RoboSense: Large-scale Dataset and Benchmark for Multi-sensor Low-speed Autonomous Driving [62.5830455357187]
In this paper, we construct a multimodal data collection platform based on 3 main types of sensors (Camera, LiDAR and Fisheye).
A large-scale multi-sensor dataset named RoboSense is built to facilitate near-field scene understanding.
RoboSense contains more than 133K synchronized frames with 1.4M 3D bounding boxes and IDs in the full 360° view, forming 216K trajectories across 7.6K temporal sequences (the box-to-trajectory relationship is sketched after this entry).
arXiv Detail & Related papers (2024-08-28T03:17:40Z)
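The entry above describes per-frame 3D boxes carrying persistent instance IDs that chain into trajectories. A minimal sketch of that relationship; the Box3D layout here is hypothetical, not RoboSense's actual schema:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Box3D:
    frame: int                 # frame index within a temporal sequence
    track_id: int              # instance ID persisting across frames
    center: tuple              # (x, y, z) box center
    size: tuple                # (length, width, height)
    yaw: float                 # heading in radians

def group_trajectories(boxes):
    """Chain per-frame boxes into trajectories keyed by track ID."""
    tracks = defaultdict(list)
    for box in boxes:
        tracks[box.track_id].append(box)
    for track in tracks.values():
        track.sort(key=lambda b: b.frame)  # time-order each trajectory
    return dict(tracks)
```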
- Zenseact Open Dataset: A large-scale and diverse multimodal dataset for autonomous driving [3.549770828382121]
Zenseact Open Dataset (ZOD) is a large-scale and diverse dataset collected over two years in various European countries.
ZOD boasts the highest range and resolution sensors among comparable datasets.
The dataset is composed of Frames, Sequences, and Drives, designed to encompass both data diversity and support for multimodal-temporal learning.
arXiv Detail & Related papers (2023-05-03T09:59:18Z)
- SUPS: A Simulated Underground Parking Scenario Dataset for Autonomous Driving [41.221988979184665]
SUPS is a simulated dataset for underground automatic parking.
It supports multiple tasks with multiple sensors and multiple semantic labels aligned with successive images.
We also evaluate the state-of-the-art SLAM algorithms and perception models on our dataset.
arXiv Detail & Related papers (2023-02-25T02:59:12Z)
- Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting [64.7364925689825]
Argoverse 2 (AV2) is a collection of three datasets for perception and forecasting research in the self-driving domain.
The Lidar dataset contains 20,000 sequences of unlabeled lidar point clouds with map-aligned poses (see the accumulation sketch after this entry).
The Motion Forecasting dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene.
arXiv Detail & Related papers (2023-01-02T00:36:22Z)
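"Map-aligned pose" means every lidar sweep ships with an SE(3) transform into a shared city frame, so sweeps can be fused without running SLAM. A generic NumPy sketch of that use; this is not the av2 devkit API:

```python
import numpy as np

def accumulate_sweeps(sweeps, poses_city_from_ego):
    """Fuse ego-frame lidar sweeps into one map-aligned point cloud.

    sweeps: list of (N_i, 3) arrays, each in the ego frame at capture time.
    poses_city_from_ego: list of (4, 4) SE(3) matrices mapping ego -> city.
    """
    fused = []
    for points, pose in zip(sweeps, poses_city_from_ego):
        homo = np.hstack([points, np.ones((points.shape[0], 1))])
        fused.append((homo @ pose.T)[:, :3])  # apply city_from_ego transform
    return np.vstack(fused)
```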
- BEV-MAE: Bird's Eye View Masked Autoencoders for Point Cloud Pre-training in Autonomous Driving Scenarios [51.285561119993105]
We present BEV-MAE, an efficient masked autoencoder pre-training framework for LiDAR-based 3D object detection in autonomous driving.
Specifically, we propose a bird's eye view (BEV) guided masking strategy to guide the 3D encoder in learning feature representations (sketched after this entry).
We introduce a learnable point token to maintain a consistent receptive field size for the 3D encoder.
arXiv Detail & Related papers (2022-12-12T08:15:03Z)
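The core idea is to choose what to mask on a bird's-eye-view grid rather than on raw points. The sketch below only illustrates that selection step under assumed parameters (cell size, mask ratio); the paper's method masks encoder inputs and uses the learnable point token mentioned above, both omitted here:

```python
import numpy as np

def bev_guided_mask(points, cell_size=0.5, mask_ratio=0.7, rng=None):
    """Hide the points falling in randomly masked BEV cells.

    points: (N, 3+) array whose first two columns are x and y.
    Returns the visible points and the masked cell indices.
    """
    rng = np.random.default_rng() if rng is None else rng
    cells = np.floor(points[:, :2] / cell_size).astype(np.int64)
    # Collapse (ix, iy) pairs into indices over occupied cells only.
    uniq, inverse = np.unique(cells, axis=0, return_inverse=True)
    masked = rng.choice(len(uniq), size=int(mask_ratio * len(uniq)),
                        replace=False)
    visible = ~np.isin(inverse, masked)
    return points[visible], masked
```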
- aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception [0.0]
This dataset consists of 176 scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view.
The collected data was captured in highway, urban, and suburban areas during daytime, night, and rain.
We trained unimodal and multimodal baseline models for 3D object detection.
arXiv Detail & Related papers (2022-11-17T10:19:59Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the robustness of state-of-the-art fusion methods for the first time (a generic harness is sketched after this entry).
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
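A robustness benchmark of this kind boils down to re-scoring a fixed detector under controlled corruptions of each modality. A generic harness sketch, not the paper's actual benchmark; the corruption and metric callables are placeholders:

```python
import numpy as np

def lidar_noise(points, image, sigma=0.02, rng=np.random.default_rng(0)):
    """Example corruption: jitter point coordinates, leave the image alone."""
    noisy = points.copy()
    noisy[:, :3] += rng.normal(scale=sigma, size=(points.shape[0], 3))
    return noisy, image

def benchmark_robustness(detector, samples, corruptions, metric):
    """Average a metric for a fusion detector under each corruption.

    detector:    callable (points, image) -> detections
    samples:     list of (points, image, ground_truth) triples
    corruptions: dict name -> callable (points, image) -> (points, image)
    metric:      callable (detections, ground_truth) -> float
    """
    results = {}
    for name, corrupt in corruptions.items():
        scores = [metric(detector(*corrupt(p, im)), gt)
                  for p, im, gt in samples]
        results[name] = float(np.mean(scores))
    return results
```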
- One Million Scenes for Autonomous Driving: ONCE Dataset [91.94189514073354]
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)
- PC-DAN: Point Cloud based Deep Affinity Network for 3D Multi-Object Tracking (Accepted as an extended abstract in JRDB-ACT Workshop at CVPR21) [68.12101204123422]
A point cloud is a dense compilation of spatial data in 3D coordinates.
We propose a PointNet-based approach for 3D Multi-Object Tracking (MOT); a generic affinity-matching step is sketched after this entry.
arXiv Detail & Related papers (2021-06-03T05:36:39Z)
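In affinity-based MOT, per-detection features (here assumed to come from a PointNet-style encoder) are compared across frames and matched. A generic association sketch, not PC-DAN's exact network:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(prev_emb, curr_emb, min_affinity=0.5):
    """Match detections across two frames by embedding affinity.

    prev_emb: (M, D) features for the previous frame's detections.
    curr_emb: (N, D) features for the current frame's detections.
    Returns (prev_idx, curr_idx) pairs above the affinity threshold.
    """
    a = prev_emb / np.linalg.norm(prev_emb, axis=1, keepdims=True)
    b = curr_emb / np.linalg.norm(curr_emb, axis=1, keepdims=True)
    affinity = a @ b.T                             # cosine similarity
    rows, cols = linear_sum_assignment(-affinity)  # maximize total affinity
    return [(r, c) for r, c in zip(rows, cols)
            if affinity[r, c] >= min_affinity]
```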
- PixSet: An Opportunity for 3D Computer Vision to Go Beyond Point Clouds With a Full-Waveform LiDAR Dataset [0.11726720776908521]
Leddar PixSet is a new publicly available dataset (dataset.leddartech.com) for autonomous driving research and development.
The PixSet dataset contains approximately 29k frames from 97 sequences recorded in high-density urban areas (what "full-waveform" adds is sketched after this entry).
arXiv Detail & Related papers (2021-02-24T01:13:17Z)
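A full-waveform lidar stores the entire received intensity curve for each emitted pulse, so several echoes along one beam can be recovered instead of a single point. A generic illustration of that decoding step (not the PixSet devkit; the sampling parameters are assumptions):

```python
import numpy as np
from scipy.signal import find_peaks

C = 299_792_458.0  # speed of light in m/s

def waveform_to_ranges(waveform, sample_rate_hz, min_height=0.1):
    """Extract candidate ranges from one full-waveform lidar return.

    waveform: 1D array of received intensity over time. Each peak is an
    echo from a surface along the beam; its sample index gives the
    round-trip time, hence the distance.
    """
    peaks, props = find_peaks(waveform, height=min_height)
    ranges = (peaks / sample_rate_hz) * C / 2.0  # two-way travel time -> m
    return ranges, props["peak_heights"]
```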
- Cirrus: A Long-range Bi-pattern LiDAR Dataset [35.87501129332217]
We introduce Cirrus, a new long-range bi-pattern LiDAR public dataset for autonomous driving tasks.
Our platform is equipped with a high-resolution video camera and a pair of LiDAR sensors with a 250-meter effective range.
In Cirrus, eight categories of objects are exhaustively annotated in the LiDAR point clouds for the entire effective range.
arXiv Detail & Related papers (2020-12-05T03:18:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.