Ford Multi-AV Seasonal Dataset
- URL: http://arxiv.org/abs/2003.07969v1
- Date: Tue, 17 Mar 2020 22:33:38 GMT
- Title: Ford Multi-AV Seasonal Dataset
- Authors: Siddharth Agarwal, Ankit Vora, Gaurav Pandey, Wayne Williams, Helen
Kourous and James McBride
- Abstract summary: This paper presents a challenging multi-agent seasonal dataset collected by a fleet of Ford autonomous vehicles on different days and at different times during 2017-18.
The vehicles traversed an average route of 66 km in Michigan that included a mix of driving scenarios.
We present the seasonal variation in weather, lighting, construction, and traffic conditions experienced in dynamic urban environments.
- Score: 1.988834651033683
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents a challenging multi-agent seasonal dataset collected by a
fleet of Ford autonomous vehicles on different days and at different times during 2017-18.
The vehicles traversed an average route of 66 km in Michigan that included a
mix of driving scenarios such as the Detroit Airport, freeways, city centers,
a university campus, and suburban neighbourhoods. Each vehicle used in this
data collection is a Ford Fusion outfitted with an Applanix POS-LV GNSS system,
four Velodyne HDL-32E 3D lidar scanners, six Point Grey 1.3 MP cameras arranged
on the rooftop for 360-degree coverage, and one Point Grey 5 MP camera mounted
behind the windshield for the forward field of view. We present the seasonal
variation in weather, lighting, construction, and traffic conditions experienced
in dynamic urban environments. This dataset can help in designing robust algorithms
for autonomous vehicles and multi-agent systems. Each log in the dataset is
time-stamped and contains raw data from all the sensors, calibration values,
pose trajectory, ground-truth pose, and 3D maps. All data is available in
Rosbag format and can be visualized, modified, and processed using the
open-source Robot Operating System (ROS). We also provide the output of
state-of-the-art reflectivity-based localization for benchmarking purposes.
The dataset can be freely downloaded at our website.
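Since every log ships as a Rosbag, the contents of a log can be inspected with a few lines of Python using the ROS 1 `rosbag` package before committing to any heavier tooling. The sketch below is a minimal example under stated assumptions: the bag filename is a hypothetical placeholder, and the dataset's actual topic names should be read from the bag itself (for example with `rosbag info`).

```python
# Minimal sketch (ROS 1 Python API): tally the messages in one dataset log.
# "ford_log.bag" is a hypothetical placeholder, not an actual dataset filename.
import rosbag

counts = {}
with rosbag.Bag("ford_log.bag") as bag:
    # read_messages() yields (topic, message, timestamp) tuples in time order.
    for topic, msg, t in bag.read_messages():
        counts[topic] = counts.get(topic, 0) + 1

# Print a per-topic message count to see which sensors the log contains.
for topic, n in sorted(counts.items()):
    print(f"{topic}: {n} messages")
```

The same `read_messages()` call accepts a `topics=[...]` filter, so a downstream consumer can pull only, say, the lidar or camera streams once the topic names are known.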
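The abstract also notes that each log carries a ground-truth pose alongside the output of the reflectivity-based localizer, which is exactly what is needed to benchmark a localization method. A common way to score this is root-mean-square position error after associating poses by timestamp; the sketch below assumes both trajectories have already been exported to Nx4 arrays of [timestamp, x, y, z], which is an illustrative layout rather than the dataset's actual export format.

```python
# Minimal sketch: RMS position error of an estimated trajectory against
# ground truth. The [timestamp, x, y, z] column layout is an assumption
# for illustration; adapt it to however the poses were actually exported.
import numpy as np

def position_rmse(estimate: np.ndarray, ground_truth: np.ndarray) -> float:
    """RMS position error, matching poses by linear interpolation in time.

    Both inputs are Nx4 arrays of [timestamp, x, y, z]; ground-truth
    timestamps must be sorted in increasing order for np.interp.
    """
    # Interpolate each ground-truth coordinate at the estimate's timestamps.
    gt_xyz = np.stack(
        [np.interp(estimate[:, 0], ground_truth[:, 0], ground_truth[:, i])
         for i in (1, 2, 3)],
        axis=1,
    )
    errors = np.linalg.norm(estimate[:, 1:4] - gt_xyz, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))
```

This scores translation only; a fuller benchmark would also compare the rotational components of the poses.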
Related papers
- RoboSense: Large-scale Dataset and Benchmark for Multi-sensor Low-speed Autonomous Driving [62.5830455357187]
In this paper, we construct a multimodal data collection platform based on three main types of sensors (camera, LiDAR, and fisheye).
A large-scale multi-sensor dataset is built, named RoboSense, to facilitate near-field scene understanding.
RoboSense contains more than 133K synchronized data frames with 1.4M 3D bounding boxes and IDs in the full $360^\circ$ view, forming 216K trajectories across 7.6K temporal sequences.
arXiv Detail & Related papers (2024-08-28T03:17:40Z) - MAN TruckScenes: A multimodal dataset for autonomous trucking in diverse conditions [0.6137109345454494]
We present MAN TruckScenes, the first multimodal dataset for autonomous trucking.
It comprises more than 740 scenes of 20 s each within a multitude of different environmental conditions.
MAN TruckScenes is the first dataset to provide 4D radar data with 360-degree coverage.
arXiv Detail & Related papers (2024-07-10T08:32:26Z) - RSRD: A Road Surface Reconstruction Dataset and Benchmark for Safe and
Comfortable Autonomous Driving [67.09546127265034]
Road surface reconstruction helps to enhance the analysis and prediction of vehicle responses for motion planning and control systems.
We introduce the Road Surface Reconstruction dataset, a real-world, high-resolution, and high-precision dataset collected with a specialized platform in diverse driving conditions.
It covers common road types and contains approximately 16,000 pairs of stereo images, original point clouds, and ground-truth depth/disparity maps.
arXiv Detail & Related papers (2023-10-03T17:59:32Z) - Argoverse 2: Next Generation Datasets for Self-Driving Perception and
Forecasting [64.7364925689825]
Argoverse 2 (AV2) is a collection of three datasets for perception and forecasting research in the self-driving domain.
The Lidar dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose.
The Motion Forecasting dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene.
arXiv Detail & Related papers (2023-01-02T00:36:22Z) - aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving
with Long-Range Perception [0.0]
This dataset consists of 176 scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view.
The collected data was captured in highway, urban, and suburban areas during daytime, night, and rain.
We trained unimodal and multimodal baseline models for 3D object detection.
arXiv Detail & Related papers (2022-11-17T10:19:59Z) - Ithaca365: Dataset and Driving Perception under Repeated and Challenging
Weather Conditions [0.0]
We present a new dataset to enable robust autonomous driving via a novel data collection process.
The dataset includes images and point clouds from cameras and LiDAR sensors, along with high-precision GPS/INS.
We demonstrate the uniqueness of this dataset by analyzing the performance of baselines in amodal segmentation of road and objects.
arXiv Detail & Related papers (2022-08-01T22:55:32Z) - Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object
Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z) - One Million Scenes for Autonomous Driving: ONCE Dataset [91.94189514073354]
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z) - 4Seasons: A Cross-Season Dataset for Multi-Weather SLAM in Autonomous
Driving [48.588254700810474]
We present a novel dataset covering seasonal and challenging perceptual conditions for autonomous driving.
Among others, it enables research on visual odometry, global place recognition, and map-based re-localization tracking.
arXiv Detail & Related papers (2020-09-14T12:31:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.