One Thousand and One Hours: Self-driving Motion Prediction Dataset
- URL: http://arxiv.org/abs/2006.14480v2
- Date: Mon, 16 Nov 2020 21:16:49 GMT
- Title: One Thousand and One Hours: Self-driving Motion Prediction Dataset
- Authors: John Houston, Guido Zuidhof, Luca Bergamini, Yawei Ye, Long Chen,
Ashesh Jain, Sammy Omari, Vladimir Iglovikov, Peter Ondruska
- Abstract summary: We present the largest self-driving dataset for motion prediction to date, containing over 1,000 hours of data.
This was collected by a fleet of 20 autonomous vehicles along a fixed route in Palo Alto, California, over a four-month period.
It consists of 170,000 scenes, where each scene is 25 seconds long and captures the perception output of the self-driving system.
- Score: 8.675886928486335
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Motivated by the impact of large-scale datasets on ML systems we present the
largest self-driving dataset for motion prediction to date, containing over
1,000 hours of data. This was collected by a fleet of 20 autonomous vehicles
along a fixed route in Palo Alto, California, over a four-month period. It
consists of 170,000 scenes, where each scene is 25 seconds long and captures
the perception output of the self-driving system, which encodes the precise
positions and motions of nearby vehicles, cyclists, and pedestrians over time.
On top of this, the dataset contains a high-definition semantic map with 15,242
labelled elements and a high-definition aerial view over the area. We show that
using a dataset of this size dramatically improves performance for key
self-driving problems. Combined with the provided software kit, this collection
forms the largest and most detailed dataset to date for the development of
self-driving machine learning tasks, such as motion forecasting, motion
planning and simulation. The full dataset is available at
http://level5.lyft.com/.
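The abstract refers to a provided software kit; this is published as the open-source l5kit Python package. Below is a minimal sketch of how scenes might be loaded and rasterized with it, assuming l5kit is installed and the dataset has been downloaded; the data path, config file name, and zarr key are placeholders, and the exact API may differ between l5kit versions.

    # Hedged sketch: loading 25-second scenes of perception output with l5kit.
    # Assumptions: l5kit is installed, the dataset lives under the placeholder
    # path below, and a standard l5kit example config (placeholder name
    # "agent_motion_config.yaml") is available. API details may vary by version.
    import os

    from l5kit.configs import load_config_data
    from l5kit.data import ChunkedDataset, LocalDataManager
    from l5kit.dataset import AgentDataset
    from l5kit.rasterization import build_rasterizer

    os.environ["L5KIT_DATA_FOLDER"] = "/path/to/lyft_prediction"  # placeholder

    dm = LocalDataManager(None)  # resolves dataset keys under L5KIT_DATA_FOLDER
    cfg = load_config_data("agent_motion_config.yaml")

    # Open one of the zarr-chunked scene files (e.g. the small sample split).
    zarr_dataset = ChunkedDataset(dm.require("scenes/sample.zarr")).open()

    # Rasterize the HD semantic map / aerial imagery around each agent and
    # expose past context plus future (x, y) targets as training samples.
    rasterizer = build_rasterizer(cfg, dm)
    dataset = AgentDataset(cfg, zarr_dataset, rasterizer)

    sample = dataset[0]
    print(sample["image"].shape)             # rasterized map + history channels
    print(sample["target_positions"].shape)  # future positions to predict

The same zarr scenes can also be wrapped with l5kit's EgoDataset to train on the autonomous vehicle's own trajectory rather than those of surrounding agents.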
Related papers
- DeepAccident: A Motion and Accident Prediction Benchmark for V2X
Autonomous Driving [76.29141888408265]
We propose a large-scale dataset containing diverse accident scenarios that frequently occur in real-world driving.
The proposed DeepAccident dataset includes 57K annotated frames and 285K annotated samples, approximately 7 times more than the large-scale nuScenes dataset.
arXiv Detail & Related papers (2023-04-03T17:37:00Z)
- Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting [64.7364925689825]
Argoverse 2 (AV2) is a collection of three datasets for perception and forecasting research in the self-driving domain.
The Lidar dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose.
The Motion Forecasting dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene.
arXiv Detail & Related papers (2023-01-02T00:36:22Z)
- Large Scale Real-World Multi-Person Tracking [68.27438015329807]
This paper presents a new large scale multi-person tracking dataset -- PersonPath22.
It is over an order of magnitude larger than currently available high quality multi-object tracking datasets such as MOT17, HiEve, and MOT20.
arXiv Detail & Related papers (2022-11-03T23:03:13Z)
- PandaSet: Advanced Sensor Suite Dataset for Autonomous Driving [7.331883729089782]
PandaSet is the first dataset produced by a complete, high-precision autonomous vehicle sensor kit with a no-cost commercial license.
The dataset contains more than 100 scenes, each of which is 8 seconds long, and provides 28 types of labels for object classification and 37 types of labels for semantic segmentation.
arXiv Detail & Related papers (2021-12-23T14:52:12Z)
- SODA10M: Towards Large-Scale Object Detection Benchmark for Autonomous Driving [94.11868795445798]
We release a Large-Scale Object Detection benchmark for autonomous driving, named SODA10M, containing 10 million unlabeled images and 20K images labeled with 6 representative object categories.
To improve diversity, one frame is collected every ten seconds, across 32 different cities under varying weather conditions, times of day, and location scenes.
We provide extensive experiments and deep analyses of existing supervised state-of-the-art detection models, popular self-supervised and semi-supervised approaches, and some insights about how to develop future models.
arXiv Detail & Related papers (2021-06-21T13:55:57Z)
- One Million Scenes for Autonomous Driving: ONCE Dataset [91.94189514073354]
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)
- Large Scale Interactive Motion Forecasting for Autonomous Driving: The Waymo Open Motion Dataset [84.3946567650148]
With over 100,000 scenes, each 20 seconds long at 10 Hz, our new dataset contains more than 570 hours of unique data over 1750 km of roadways.
We use a high-accuracy 3D auto-labeling system to generate high quality 3D bounding boxes for each road agent.
We introduce a new set of metrics that provides a comprehensive evaluation of both single-agent and joint agent-interaction motion forecasting models (a sketch of typical displacement-error metrics of this kind follows this list).
arXiv Detail & Related papers (2021-04-20T17:19:05Z)
- The NEOLIX Open Dataset for Autonomous Driving [1.4091801425319965]
We present the NEOLIX dataset and its applications in the autonomous driving area.
Our dataset includes about 30,000 frames with point cloud labels, and more than 600k 3D bounding boxes with annotations.
arXiv Detail & Related papers (2020-11-27T02:27:39Z)
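As noted in the Waymo Open Motion Dataset entry above, motion forecasting benchmarks are commonly scored with displacement errors over a set of K candidate trajectories; minADE and minFDE are typical examples, although each benchmark defines its own exact metric set (often also including miss rate or mAP). The sketch below uses assumed array shapes and an illustrative function name, not code from any of the papers listed here.

    # Hedged sketch of two common motion-forecasting metrics: minimum average
    # displacement error (minADE) and minimum final displacement error (minFDE)
    # over K candidate trajectories. Shapes are assumptions for illustration.
    import numpy as np


    def min_ade_fde(predictions: np.ndarray, ground_truth: np.ndarray):
        """predictions: (K, T, 2) candidate x/y tracks; ground_truth: (T, 2)."""
        # Per-timestep Euclidean distance of each candidate to the ground truth.
        dists = np.linalg.norm(predictions - ground_truth[None], axis=-1)  # (K, T)
        min_ade = dists.mean(axis=1).min()  # best average error over the horizon
        min_fde = dists[:, -1].min()        # best error at the final timestep
        return float(min_ade), float(min_fde)


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        gt = np.cumsum(rng.normal(size=(50, 2)), axis=0)           # toy track
        preds = gt[None] + rng.normal(scale=0.5, size=(6, 50, 2))  # 6 candidates
        print(min_ade_fde(preds, gt))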
This list is automatically generated from the titles and abstracts of the papers in this site.