SID: Stereo Image Dataset for Autonomous Driving in Adverse Conditions
- URL: http://arxiv.org/abs/2407.04908v1
- Date: Sat, 6 Jul 2024 00:58:31 GMT
- Title: SID: Stereo Image Dataset for Autonomous Driving in Adverse Conditions
- Authors: Zaid A. El-Shair, Abdalmalek Abu-raddaha, Aaron Cofield, Hisham Alawneh, Mohamed Aladem, Yazan Hamzeh, Samir A. Rawashdeh
- Abstract summary: We introduce the Stereo Image Dataset (SID), a large-scale stereo-image dataset that captures a wide spectrum of challenging real-world environmental scenarios.
The dataset includes sequence-level annotations for weather conditions, time of day, location, and road conditions, along with instances of camera lens soiling.
Such data supports the development of perception algorithms that operate consistently and reliably across variable weather and lighting conditions, even when handling challenging situations like lens soiling.
- Score: 1.0805335573008565
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robust perception is critical for autonomous driving, especially under adverse weather and lighting conditions that commonly occur in real-world environments. In this paper, we introduce the Stereo Image Dataset (SID), a large-scale stereo-image dataset that captures a wide spectrum of challenging real-world environmental scenarios. Recorded at a rate of 20 Hz using a ZED stereo camera mounted on a vehicle, SID consists of 27 sequences totaling over 178k stereo image pairs that showcase conditions from clear skies to heavy snow, captured during the day, dusk, and night. The dataset includes detailed sequence-level annotations for weather conditions, time of day, location, and road conditions, along with instances of camera lens soiling, offering a realistic representation of the challenges in autonomous navigation. Our work aims to address a notable gap in research for autonomous driving systems by presenting high-fidelity stereo images essential for the development and testing of advanced perception algorithms. These algorithms support consistent and reliable operation across variable weather and lighting conditions, even when handling challenging situations like lens soiling. SID is publicly available at: https://doi.org/10.7302/esz6-nv83.
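As a quick illustration of how a stereo dataset like SID is typically consumed, here is a minimal sketch that loads one stereo pair and estimates a dense disparity map with OpenCV's semi-global block matching. The directory layout, file names, and annotation fields below are hypothetical placeholders for illustration; consult the dataset release for the actual structure.

```python
import cv2
import numpy as np

# Hypothetical sequence-level annotation, mirroring the fields the paper lists
# (weather, time of day, location, road condition, lens soiling).
annotation = {
    "weather": "heavy_snow",
    "time_of_day": "night",
    "location": "urban",
    "road_condition": "snow_covered",
    "lens_soiled": True,
}

# Hypothetical paths; the actual layout is defined by the SID release
# at https://doi.org/10.7302/esz6-nv83.
left = cv2.imread("sid/seq_01/left/000000.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("sid/seq_01/right/000000.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; parameters are generic defaults, not tuned for SID.
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,  # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,        # smoothness penalties scaled by block area
    P2=32 * 5 * 5,
    uniquenessRatio=10,
)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels
print(annotation["weather"], disparity.shape)
```

Sequence-level metadata of this kind is what allows filtering the 27 sequences by condition, for example evaluating a matcher only on night-time snow sequences.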
Related papers
- PLT-D3: A High-fidelity Dynamic Driving Simulation Dataset for Stereo Depth and Scene Flow [0.0]
This paper introduces a dynamic-weather driving dataset: high-fidelity stereo depth and scene flow ground-truth data generated using Unreal Engine 5.
In particular, this dataset includes synchronized high-resolution stereo image sequences that replicate a wide array of dynamic weather scenarios.
Benchmarks have been established for several critical autonomous driving tasks using PLT-D3 to measure and enhance the performance of state-of-the-art models.
arXiv Detail & Related papers (2024-06-11T19:21:46Z)
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- RSRD: A Road Surface Reconstruction Dataset and Benchmark for Safe and Comfortable Autonomous Driving [67.09546127265034]
Road surface reconstruction helps to enhance the analysis and prediction of vehicle responses for motion planning and control systems.
We introduce the Road Surface Reconstruction dataset, a real-world, high-resolution, and high-precision dataset collected with a specialized platform in diverse driving conditions.
It covers common road types containing approximately 16,000 pairs of stereo images, original point clouds, and ground-truth depth/disparity maps.
arXiv Detail & Related papers (2023-10-03T17:59:32Z)
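Since both SID and RSRD revolve around stereo imagery, it may help to recall how ground-truth disparity maps to metric depth: for a rectified stereo pair, Z = f·B/d, with focal length f in pixels and baseline B in meters. A minimal sketch, with placeholder calibration values rather than RSRD's actual parameters:

```python
# Minimal sketch: convert a disparity map to metric depth for a rectified
# stereo rig. The calibration numbers are placeholders, not RSRD's values.
import numpy as np

focal_px = 700.0    # focal length in pixels (placeholder)
baseline_m = 0.12   # camera baseline in meters (placeholder)

disparity = np.random.uniform(1.0, 64.0, size=(480, 640)).astype(np.float32)

valid = disparity > 0                  # zero disparity means "no match"
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]  # Z = f * B / d
```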
- Street-View Image Generation from a Bird's-Eye View Layout [95.36869800896335]
Bird's-Eye View (BEV) Perception has received increasing attention in recent years.
Data-driven simulation for autonomous driving has been a focal point of recent research.
We propose BEVGen, a conditional generative model that synthesizes realistic and spatially consistent surrounding images.
arXiv Detail & Related papers (2023-01-11T18:39:34Z)
- Ithaca365: Dataset and Driving Perception under Repeated and Challenging Weather Conditions [0.0]
We present a new dataset to enable robust autonomous driving via a novel data collection process.
The dataset includes images and point clouds from cameras and LiDAR sensors, along with high-precision GPS/INS.
We demonstrate the uniqueness of this dataset by analyzing the performance of baselines in amodal segmentation of road and objects.
arXiv Detail & Related papers (2022-08-01T22:55:32Z)
- LiDAR-as-Camera for End-to-End Driving [1.0323063834827415]
Ouster LiDARs can output surround-view LiDAR-images with depth, intensity, and ambient radiation channels.
These measurements originate from the same sensor, rendering them perfectly aligned in time and space.
We demonstrate that such LiDAR-images are sufficient for the real-car road-following task and perform at least equally to camera-based models in the tested conditions.
In a second direction of study, we show that the temporal smoothness of off-policy prediction sequences correlates with actual on-policy driving ability as well as the commonly used mean absolute error does.
arXiv Detail & Related papers (2022-06-30T10:06:49Z)
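To make the two off-policy measures mentioned above concrete, the sketch below computes a mean absolute error against recorded steering and a simple temporal-smoothness score over the prediction sequence. The smoothness definition here (mean absolute first difference) is an assumed stand-in, not necessarily the paper's exact formulation.

```python
# Illustrative off-policy metrics for a steering-prediction sequence.
# The smoothness measure (mean absolute first difference) is an assumed
# stand-in; the paper's exact formulation may differ.
import numpy as np

predicted = np.array([0.10, 0.12, 0.09, 0.15, 0.14])  # model steering outputs
recorded = np.array([0.11, 0.10, 0.10, 0.13, 0.15])   # human steering ground truth

mae = np.mean(np.abs(predicted - recorded))
smoothness = np.mean(np.abs(np.diff(predicted)))      # jitter between frames

print(f"MAE: {mae:.4f}, temporal roughness: {smoothness:.4f}")
```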
- SHIFT: A Synthetic Driving Dataset for Continuous Multi-Task Domain Adaptation [152.60469768559878]
SHIFT is the largest multi-task synthetic dataset for autonomous driving.
It presents discrete and continuous shifts in cloudiness, rain and fog intensity, time of day, and vehicle and pedestrian density.
Our dataset and benchmark toolkit are publicly available at www.vis.xyz/shift.
arXiv Detail & Related papers (2022-06-16T17:59:52Z)
- Vision in adverse weather: Augmentation using CycleGANs with various object detectors for robust perception in autonomous racing [70.16043883381677]
In autonomous racing, the weather can change abruptly, causing significant degradation in perception, resulting in ineffective manoeuvres.
In order to improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions.
We introduce an approach that uses synthesised adverse-condition datasets (generated with CycleGAN) to improve the performance of four out of five state-of-the-art detectors in autonomous racing.
arXiv Detail & Related papers (2022-01-10T10:02:40Z)
- DSEC: A Stereo Event Camera Dataset for Driving Scenarios [55.79329250951028]
This work presents the first high-resolution, large-scale stereo dataset with event cameras.
The dataset contains 53 sequences collected by driving in a variety of illumination conditions.
It provides ground truth disparity for the development and evaluation of event-based stereo algorithms.
arXiv Detail & Related papers (2021-03-10T12:10:33Z)
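As an illustration of how such ground-truth disparity is typically used, the sketch below computes two standard stereo-evaluation metrics, mean end-point error and the 3-pixel bad-pixel rate, over the valid ground-truth region. The arrays are synthetic and the threshold is a common default, not DSEC's official protocol.

```python
# Illustrative stereo evaluation against ground-truth disparity.
# EPE and the 3-pixel error rate are common metrics; this is not
# necessarily DSEC's official evaluation protocol.
import numpy as np

pred = np.random.uniform(0.0, 64.0, size=(480, 640)).astype(np.float32)
gt = np.random.uniform(0.0, 64.0, size=(480, 640)).astype(np.float32)
valid = gt > 0                 # pixels with ground truth available

err = np.abs(pred - gt)[valid]
epe = err.mean()               # mean end-point error in pixels
bad3 = (err > 3.0).mean()      # fraction of pixels off by more than 3 px

print(f"EPE: {epe:.2f} px, >3px error rate: {bad3:.2%}")
```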
- DAWN: Vehicle Detection in Adverse Weather Nature Dataset [4.09920839425892]
We present DAWN, a new dataset consisting of real-world images collected under various adverse weather conditions.
The dataset comprises a collection of 1000 images from real-traffic environments, which are divided into four sets of weather conditions: fog, snow, rain and sandstorms.
This data helps in interpreting the effects of adverse weather conditions on the performance of vehicle detection systems.
arXiv Detail & Related papers (2020-08-12T15:48:49Z)