Canadian Adverse Driving Conditions Dataset
- URL: http://arxiv.org/abs/2001.10117v3
- Date: Thu, 27 Feb 2020 17:23:40 GMT
- Title: Canadian Adverse Driving Conditions Dataset
- Authors: Matthew Pitropov, Danson Garcia, Jason Rebello, Michael Smart, Carlos
Wang, Krzysztof Czarnecki, Steven Waslander
- Abstract summary: The Canadian Adverse Driving Conditions dataset was collected with the Autonomoose autonomous vehicle platform.
The dataset is the first autonomous vehicle dataset that focuses on adverse driving conditions specifically.
It contains 7,000 frames of annotated data from 8 cameras, collected in a variety of winter weather conditions.
- Score: 8.428999369859318
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Canadian Adverse Driving Conditions (CADC) dataset was collected with the
Autonomoose autonomous vehicle platform, based on a modified Lincoln MKZ. The
dataset, collected during winter within the Region of Waterloo, Canada, is the
first autonomous vehicle dataset that focuses on adverse driving conditions
specifically. It contains 7,000 frames of annotated data, collected in a
variety of winter weather conditions, from 8 cameras (Ximea MQ013CG-E2), a Lidar
(VLP-32C) and a GNSS+INS system (Novatel OEM638). The sensors are time
synchronized and calibrated with the intrinsic and extrinsic calibrations
included in the dataset. Lidar frame annotations that represent ground truth
for 3D object detection and tracking have been provided by Scale AI.
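Because the intrinsic and extrinsic calibrations ship with the dataset, a common first step is to project lidar returns into one of the 8 camera images. The following is a minimal sketch of that pinhole projection, assuming the calibration has already been loaded into a 3x3 intrinsic matrix K and a 4x4 lidar-to-camera extrinsic T_cam_lidar; the function name and array layout here are illustrative assumptions, not the official CADC devkit API:

    # Minimal sketch (not the CADC devkit API): project lidar points into a
    # camera image given the dataset's intrinsic/extrinsic calibration.
    import numpy as np

    def project_lidar_to_image(points_lidar, T_cam_lidar, K):
        # points_lidar: (N, 3) xyz returns in the lidar frame.
        # T_cam_lidar:  (4, 4) extrinsic transform, lidar -> camera frame.
        # K:            (3, 3) camera intrinsic matrix.
        # Returns (M, 2) pixel coordinates for points in front of the camera.
        n = points_lidar.shape[0]
        pts_h = np.hstack([points_lidar, np.ones((n, 1))])  # homogenize
        pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]          # into camera frame
        pts_cam = pts_cam[pts_cam[:, 2] > 0.1]              # drop points behind camera
        uv = (K @ pts_cam.T).T                              # pinhole projection
        return uv[:, :2] / uv[:, 2:3]                       # perspective divide

Filtering the resulting pixel coordinates against the image bounds then yields the subset of lidar points visible in that camera.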
Related papers
- Adver-City: Open-Source Multi-Modal Dataset for Collaborative Perception Under Adverse Weather Conditions [1.4963011898406866]
Adverse weather conditions pose a significant challenge to the widespread adoption of Autonomous Vehicles.
We introduce Adver-City, the first open-source synthetic Collaborative Perception dataset focused on adverse weather conditions.
It contains over 24 thousand frames, over 890 thousand annotations, and 110 unique scenarios across six different weather conditions.
arXiv Detail & Related papers (2024-10-08T21:26:22Z)
- SCOPE: A Synthetic Multi-Modal Dataset for Collective Perception Including Physical-Correct Weather Conditions [0.5026434955540995]
SCOPE is the first synthetic multi-modal dataset that incorporates realistic camera and LiDAR models as well as parameterized and physically accurate weather simulations.
The dataset contains 17,600 frames from over 40 diverse scenarios with up to 24 collaborative agents, infrastructure sensors, and passive traffic, including cyclists and pedestrians.
arXiv Detail & Related papers (2024-08-06T09:35:50Z)
- SemanticSpray++: A Multimodal Dataset for Autonomous Driving in Wet Surface Conditions [10.306226508237348]
The SemanticSpray++ dataset provides labels for camera, LiDAR, and radar data of highway-like scenarios in wet surface conditions.
By labeling all three sensor modalities, the dataset offers a comprehensive test bed for analyzing the performance of different perception methods.
arXiv Detail & Related papers (2024-06-14T11:46:48Z)
- WEDGE: A multi-weather autonomous driving dataset built from generative vision-language models [51.61662672912017]
We introduce WEDGE: a synthetic dataset generated with a vision-language generative model via prompting.
WEDGE consists of 3360 images in 16 extreme weather conditions manually annotated with 16513 bounding boxes.
We establish baseline performance for classification and detection with 53.87% test accuracy and 45.41 mAP.
arXiv Detail & Related papers (2023-05-12T14:42:47Z)
- Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting [64.7364925689825]
Argoverse 2 (AV2) is a collection of three datasets for perception and forecasting research in the self-driving domain.
The Lidar dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose.
The Motion Forecasting dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene.
arXiv Detail & Related papers (2023-01-02T00:36:22Z)
- Ithaca365: Dataset and Driving Perception under Repeated and Challenging Weather Conditions [0.0]
We present a new dataset to enable robust autonomous driving via a novel data collection process.
The dataset includes images and point clouds from cameras and LiDAR sensors, along with high-precision GPS/INS.
We demonstrate the uniqueness of this dataset by analyzing the performance of baselines in amodal segmentation of road and objects.
arXiv Detail & Related papers (2022-08-01T22:55:32Z)
- LiDAR Snowfall Simulation for Robust 3D Object Detection [116.10039516404743]
We propose a physically based method to simulate the effect of snowfall on real clear-weather LiDAR point clouds.
Our method samples snow particles in 2D space for each LiDAR line and uses the induced geometry to modify the measurement for each LiDAR beam.
We use our simulation to generate partially synthetic snowy LiDAR data and leverage these data to train 3D object detection models that are robust to snowfall (a toy sketch of this style of augmentation follows this entry).
arXiv Detail & Related papers (2022-03-28T21:48:26Z)
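As a companion to the snowfall-simulation entry above, the toy sketch below replaces a random subset of lidar returns with closer, low-intensity hits, mimicking occlusion by falling snow. It is a loose simplification of the idea, not the authors' physically based method; the sampling distribution, snow_rate, and intensity scaling are illustrative assumptions:

    # Toy snowfall augmentation for lidar point clouds, loosely inspired by
    # the paper above. The sampling model, snow_rate, and intensity scaling
    # are illustrative assumptions, not the authors' physically based method.
    import numpy as np

    def augment_with_snow(points, intensities, snow_rate=0.05,
                          max_range=30.0, rng=None):
        # points: (N, 3) lidar returns; intensities: (N,) reflectances.
        # With probability snow_rate per beam, the true return is replaced
        # by a closer, low-intensity hit from a simulated snow particle.
        rng = rng if rng is not None else np.random.default_rng(0)
        points = points.astype(float)
        intensities = intensities.astype(float)
        ranges = np.linalg.norm(points, axis=1)
        hit = rng.random(len(points)) < snow_rate
        # Place the snow particle somewhere along the beam, before the true
        # return and no farther than max_range from the sensor.
        frac = rng.uniform(0.1, 0.9, len(points))
        snow_r = frac * np.minimum(ranges, max_range)
        scale = np.where(hit, snow_r / np.maximum(ranges, 1e-6), 1.0)
        points *= scale[:, None]                # pull returns toward sensor
        intensities[hit] *= rng.uniform(0.05, 0.3, hit.sum())  # weak scatter
        return points, intensities

Training a detector on a mix of clear-weather and augmented clouds is the general strategy the paper describes, although its particle sampling is physically based rather than uniform as here.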
- One Million Scenes for Autonomous Driving: ONCE Dataset [91.94189514073354]
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)
- 4Seasons: A Cross-Season Dataset for Multi-Weather SLAM in Autonomous Driving [48.588254700810474]
We present a novel dataset covering seasonal and challenging perceptual conditions for autonomous driving.
Among others, it enables research on visual odometry, global place recognition, and map-based re-localization tracking.
arXiv Detail & Related papers (2020-09-14T12:31:20Z)
- LIBRE: The Multiple 3D LiDAR Dataset [54.25307983677663]
We present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 10 different LiDAR sensors.
LIBRE will provide the research community with a means for fair comparison of currently available LiDARs.
It will also facilitate the improvement of existing self-driving vehicles and robotics-related software.
arXiv Detail & Related papers (2020-03-13T06:17:39Z)
- Real-time Kinematic Ground Truth for the Oxford RobotCar Dataset [23.75606166843614]
We release reference data towards a challenging long-term localisation and mapping benchmark based on the large-scale Oxford RobotCar dataset.
We have produced a globally-consistent centimetre-accurate ground truth for the entire year-long duration of the dataset.
arXiv Detail & Related papers (2020-02-24T10:34:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.