DAWN: Vehicle Detection in Adverse Weather Nature Dataset
- URL: http://arxiv.org/abs/2008.05402v1
- Date: Wed, 12 Aug 2020 15:48:49 GMT
- Title: DAWN: Vehicle Detection in Adverse Weather Nature Dataset
- Authors: Mourad A. Kenk, Mahmoud Hassaballah
- Abstract summary: We present a new dataset consisting of real-world images collected under various adverse weather conditions called DAWN.
The dataset comprises a collection of 1000 images from real-traffic environments, which are divided into four sets of weather conditions: fog, snow, rain and sandstorms.
This data helps in interpreting the effects of adverse weather conditions on the performance of vehicle detection systems.
- Score: 4.09920839425892
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, self-driving vehicles have been introduced with several automated features, including lane-keeping assistance, queuing assistance in traffic jams, parking assistance, and crash avoidance. These self-driving vehicles and intelligent visual traffic surveillance systems depend mainly on cameras and sensor-fusion systems. Adverse weather conditions such as heavy fog, rain, snow, and sandstorms severely restrict the functionality of cameras, seriously degrading the performance of the computer vision algorithms adopted for scene understanding (i.e., vehicle detection, tracking, and recognition in traffic scenes). For example, reflections from rainwater and ice on roads can cause massive detection errors that affect the performance of intelligent visual traffic systems. Additionally, scene understanding and vehicle detection algorithms are mostly evaluated using datasets that contain certain types of synthetic images plus a few real-world images. Thus, it is uncertain how these algorithms would perform on degraded images acquired in the wild and how progress on these algorithms can be standardized in the field. To this end, we present a new dataset (benchmark) of real-world images collected under various adverse weather conditions, called DAWN. This dataset emphasizes a diverse traffic environment (urban, highway, and freeway) as well as a rich variety of traffic flow. The DAWN dataset comprises a collection of 1000 images from real-traffic environments, divided into four sets of weather conditions: fog, snow, rain, and sandstorms. The dataset is annotated with object bounding boxes for autonomous driving and video surveillance scenarios. This data helps in interpreting the effects of adverse weather conditions on the performance of vehicle detection systems.
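Since DAWN groups its 1000 images into four weather subsets with bounding-box annotations, a per-condition loading loop is a natural way to consume it. Below is a minimal sketch, assuming a hypothetical directory layout (one folder per condition, JPEG images next to Pascal VOC-style XML files); the folder names, file extensions, and annotation format here are illustrative assumptions, not confirmed details of the released dataset.

```python
# Minimal sketch for iterating a DAWN-like dataset. ASSUMED layout:
# DAWN/{Fog,Rain,Snow,Sand}/*.jpg with VOC-style XML annotations
# alongside each image. Adjust names/paths to the actual release.
import xml.etree.ElementTree as ET
from pathlib import Path

CONDITIONS = ["Fog", "Rain", "Snow", "Sand"]  # assumed subset names

def load_boxes(xml_path: Path):
    """Parse VOC-style boxes as (label, xmin, ymin, xmax, ymax)."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        boxes.append((
            obj.findtext("name"),
            int(float(bb.findtext("xmin"))), int(float(bb.findtext("ymin"))),
            int(float(bb.findtext("xmax"))), int(float(bb.findtext("ymax"))),
        ))
    return boxes

def iter_dawn(root_dir: str):
    """Yield (condition, image_path, boxes) for every annotated image."""
    for cond in CONDITIONS:
        for xml_path in sorted((Path(root_dir) / cond).glob("*.xml")):
            img_path = xml_path.with_suffix(".jpg")
            if img_path.exists():
                yield cond, img_path, load_boxes(xml_path)

# Example use: count annotated objects per weather condition.
# counts = {}
# for cond, _, boxes in iter_dawn("DAWN"):
#     counts[cond] = counts.get(cond, 0) + len(boxes)
```

Splitting evaluation by condition in this way makes it straightforward to report detector accuracy separately for fog, snow, rain, and sandstorms, which is the comparison the dataset is designed to support.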
Related papers
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degradation of image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- RainSD: Rain Style Diversification Module for Image Synthesis Enhancement using Feature-Level Style Distribution [5.500457283114346]
This paper presents a synthetic road dataset with sensor blockage generated from the real road dataset BDD100K.
Using this dataset, the degradation of diverse multi-task networks for autonomous driving has been thoroughly evaluated.
The tendency of performance degradation in deep neural network-based perception systems for autonomous vehicles has been analyzed in depth.
arXiv Detail & Related papers (2023-12-31T11:30:42Z)
- ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work surveys the current landscape of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Street-View Image Generation from a Bird's-Eye View Layout [95.36869800896335]
Bird's-Eye View (BEV) Perception has received increasing attention in recent years.
Data-driven simulation for autonomous driving has been a focal point of recent research.
We propose BEVGen, a conditional generative model that synthesizes realistic and spatially consistent surrounding images.
arXiv Detail & Related papers (2023-01-11T18:39:34Z)
- Ithaca365: Dataset and Driving Perception under Repeated and Challenging Weather Conditions [0.0]
We present a new dataset to enable robust autonomous driving via a novel data collection process.
The dataset includes images and point clouds from cameras and LiDAR sensors, along with high-precision GPS/INS.
We demonstrate the uniqueness of this dataset by analyzing the performance of baselines in amodal segmentation of road and objects.
arXiv Detail & Related papers (2022-08-01T22:55:32Z)
- METEOR: A Massive Dense & Heterogeneous Behavior Dataset for Autonomous Driving [42.69638782267657]
We present a new and complex traffic dataset, METEOR, which captures traffic patterns in unstructured scenarios in India.
METEOR consists of more than 1000 one-minute video clips, over 2 million annotated frames with ego-vehicle trajectories, and more than 13 million bounding boxes for surrounding vehicles or traffic agents.
We use our novel dataset to evaluate the performance of object detection and behavior prediction algorithms.
arXiv Detail & Related papers (2021-09-16T01:01:55Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR); a minimal attenuation sketch illustrating this effect appears after this list.
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- Deep traffic light detection by overlaying synthetic context on arbitrary natural images [49.592798832978296]
We propose a method to generate artificial traffic-related training data for deep traffic light detectors.
This data is generated using basic non-realistic computer graphics to blend fake traffic scenes on top of arbitrary image backgrounds.
It also tackles the intrinsic data imbalance problem in traffic light datasets, caused mainly by the small number of samples of the yellow state.
arXiv Detail & Related papers (2020-11-07T19:57:22Z)
- Artificial Intelligence Enabled Traffic Monitoring System [3.085453921856008]
This article presents a novel approach to automatically monitoring real-time traffic footage using deep convolutional neural networks.
The proposed system deploys several state-of-the-art deep learning algorithms to automate different traffic monitoring needs.
arXiv Detail & Related papers (2020-10-02T22:28:02Z)
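As referenced in the LISA entry above, scattering media such as fog and rain reduce lidar SNR because return power decays exponentially with range. The following is a minimal sketch of that mechanism under a two-way Beer-Lambert model; it is not the paper's actual simulation, and the extinction coefficient used is an illustrative assumption (real values depend on droplet size and density).

```python
# Minimal sketch of Beer-Lambert attenuation of lidar returns in a
# scattering medium (fog/rain), illustrating the SNR loss noted in the
# LISA entry. alpha is an ASSUMED extinction coefficient in 1/m; real
# values are weather-specific and not taken from the paper.
import numpy as np

def attenuate_returns(intensity: np.ndarray, ranges: np.ndarray,
                      alpha: float = 0.03) -> np.ndarray:
    """Two-way Beer-Lambert attenuation: I = I0 * exp(-2 * alpha * R)."""
    return intensity * np.exp(-2.0 * alpha * ranges)

# Example: a return at 50 m keeps exp(-2 * 0.03 * 50) ~ 5% of its
# clear-air power, while a 100 m return keeps well under 1%.
I0 = np.array([1.0, 1.0, 1.0])
R = np.array([10.0, 50.0, 100.0])
print(attenuate_returns(I0, R))  # power decays sharply with range
```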
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.