Zenseact Open Dataset: A large-scale and diverse multimodal dataset for
autonomous driving
- URL: http://arxiv.org/abs/2305.02008v2
- Date: Sat, 21 Oct 2023 08:55:52 GMT
- Title: Zenseact Open Dataset: A large-scale and diverse multimodal dataset for
autonomous driving
- Authors: Mina Alibeigi, William Ljungbergh, Adam Tonderski, Georg Hess, Adam
Lilja, Carl Lindstrom, Daria Motorniuk, Junsheng Fu, Jenny Widahl, and
Christoffer Petersson
- Abstract summary: Zenseact Open Dataset (ZOD) is a large-scale and diverse dataset collected over two years in various European countries.
ZOD boasts the highest range and resolution sensors among comparable datasets.
The dataset is composed of Frames, Sequences, and Drives, designed to encompass both data diversity and support for multimodal-temporal learning.
- Score: 3.549770828382121
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing datasets for autonomous driving (AD) often lack diversity and
long-range capabilities, focusing instead on 360° perception and temporal
reasoning. To address this gap, we introduce Zenseact Open Dataset (ZOD), a
large-scale and diverse multimodal dataset collected over two years in various
European countries, covering an area 9x that of existing datasets. ZOD boasts
the highest range and resolution sensors among comparable datasets, coupled
with detailed keyframe annotations for 2D and 3D objects (up to 245m), road
instance/semantic segmentation, traffic sign recognition, and road
classification. We believe that this unique combination will facilitate
breakthroughs in long-range perception and multi-task learning. The dataset is
composed of Frames, Sequences, and Drives, designed to encompass both data
diversity and support for spatio-temporal learning, sensor fusion,
localization, and mapping. Frames consist of 100k curated camera images with
two seconds of other supporting sensor data, while the 1473 Sequences and 29
Drives include the entire sensor suite for 20 seconds and a few minutes,
respectively. ZOD is the only large-scale AD dataset released under a
permissive license, allowing for both research and commercial use. More
information, and an extensive devkit, can be found at https://zod.zenseact.com
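As a concrete illustration of how the Frames are meant to be consumed, the sketch below loads one Frame's keyframe image and its object annotations through the official devkit (installable via `pip install zod`). This is a minimal sketch only: the class and method names (`ZodFrames`, `get_image`, `get_annotation`, `AnnotationProject`) follow the devkit's public examples, and the dataset root and frame id are placeholder assumptions; verify the exact API against the devkit documentation linked above.

```python
# Minimal sketch: loading a ZOD Frame with the official devkit.
# Class/method names follow the devkit's public examples; the dataset
# root and frame id below are placeholder assumptions.
from zod import ZodFrames
from zod.constants import AnnotationProject

# "mini" is a small subset for quick experiments; "full" is all 100k Frames.
zod_frames = ZodFrames(dataset_root="/data/zod", version="mini")

frame = zod_frames["009158"]  # hypothetical zero-padded frame id

# Anonymized keyframe camera image and its object annotations,
# which in ZOD extend up to 245 m from the ego vehicle.
image = frame.get_image()
objects = frame.get_annotation(AnnotationProject.OBJECT_DETECTION)

print(image.shape, len(objects))
```

Sequences and Drives are exposed through analogous entry points in the devkit, so the same access pattern scales from single curated keyframes to 20-second clips and multi-minute drives.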
Related papers
- RoboSense: Large-scale Dataset and Benchmark for Multi-sensor Low-speed Autonomous Driving [62.5830455357187]
In this paper, we construct a multimodal data collection platform based on three main types of sensors (Camera, LiDAR, and Fisheye).
A large-scale multi-sensor dataset is built, named RoboSense, to facilitate near-field scene understanding.
RoboSense contains more than 133K synchronized data frames with 1.4M 3D bounding boxes and IDs in the full 360° view, forming 216K trajectories across 7.6K temporal sequences.
arXiv Detail & Related papers (2024-08-28T03:17:40Z)
- SUPS: A Simulated Underground Parking Scenario Dataset for Autonomous Driving [41.221988979184665]
SUPS is a simulated dataset for underground automatic parking.
It supports multiple tasks with multiple sensors and multiple semantic labels aligned with successive images.
We also evaluate the state-of-the-art SLAM algorithms and perception models on our dataset.
arXiv Detail & Related papers (2023-02-25T02:59:12Z)
- Navya3DSeg -- Navya 3D Semantic Segmentation Dataset & split generation for autonomous vehicles [63.20765930558542]
3D semantic data are useful for core perception tasks such as obstacle detection and ego-vehicle localization.
We propose a new dataset, Navya 3D Segmentation (Navya3DSeg), with a diverse label space corresponding to a large-scale, production-grade operational domain.
It contains 23 labeled sequences and 25 supplementary sequences without labels, designed to explore self-supervised and semi-supervised semantic segmentation benchmarks on point clouds.
arXiv Detail & Related papers (2023-02-16T13:41:19Z)
- Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting [64.7364925689825]
Argoverse 2 (AV2) is a collection of three datasets for perception and forecasting research in the self-driving domain.
The Lidar dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose.
The Motion Forecasting dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene.
arXiv Detail & Related papers (2023-01-02T00:36:22Z)
- aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception [0.0]
This dataset consists of 176 scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view.
The collected data was captured in highway, urban, and suburban areas during daytime, night, and rain.
We trained unimodal and multimodal baseline models for 3D object detection.
arXiv Detail & Related papers (2022-11-17T10:19:59Z)
- PandaSet: Advanced Sensor Suite Dataset for Autonomous Driving [7.331883729089782]
PandaSet is the first dataset produced by a complete, high-precision autonomous vehicle sensor kit with a no-cost commercial license.
The dataset contains more than 100 scenes, each of which is 8 seconds long, and provides 28 types of labels for object classification and 37 types of labels for semantic segmentation.
arXiv Detail & Related papers (2021-12-23T14:52:12Z)
- SODA10M: Towards Large-Scale Object Detection Benchmark for Autonomous Driving [94.11868795445798]
We release a large-scale object detection benchmark for autonomous driving, named SODA10M, containing 10 million unlabeled images and 20K images labeled with 6 representative object categories.
To improve diversity, the images are collected at one frame every ten seconds across 32 different cities under different weather conditions, periods, and location scenes.
We provide extensive experiments and deep analyses of existing supervised state-of-the-art detection models, popular self-supervised and semi-supervised approaches, and some insights about how to develop future models.
arXiv Detail & Related papers (2021-06-21T13:55:57Z)
- One Million Scenes for Autonomous Driving: ONCE Dataset [91.94189514073354]
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)
- PixSet: An Opportunity for 3D Computer Vision to Go Beyond Point Clouds With a Full-Waveform LiDAR Dataset [0.11726720776908521]
Leddar PixSet is a new publicly available dataset (dataset.leddartech.com) for autonomous driving research and development.
The PixSet dataset contains approximately 29k frames from 97 sequences recorded in high-density urban areas.
arXiv Detail & Related papers (2021-02-24T01:13:17Z)
- TJU-DHD: A Diverse High-Resolution Dataset for Object Detection [48.94731638729273]
Large-scale, rich-diversity, and high-resolution datasets play an important role in developing better object detection methods.
We build a diverse high-resolution dataset (called TJU-DHD).
The dataset contains 115,354 high-resolution images and 709,330 labeled objects with a large variance in scale and appearance.
arXiv Detail & Related papers (2020-11-18T09:32:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.