Weather and Light Level Classification for Autonomous Driving: Dataset,
Baseline and Active Learning
- URL: http://arxiv.org/abs/2104.14042v1
- Date: Wed, 28 Apr 2021 22:53:10 GMT
- Title: Weather and Light Level Classification for Autonomous Driving: Dataset,
Baseline and Active Learning
- Authors: Mahesh M Dhananjaya, Varun Ravi Kumar and Senthil Yogamani
- Abstract summary: We build a new dataset for weather (fog, rain, and snow) classification and light level (bright, moderate, and low) classification.
Each image has three labels corresponding to weather, light level, and street type.
We implement an active learning framework to reduce the dataset's redundancy and find the optimal set of frames for training a model.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous driving is rapidly advancing, and Level 2 functions are becoming a
standard feature. One of the foremost hurdles is obtaining robust
visual perception in harsh weather and low-light conditions, where accuracy
degradation is severe. A weather classification model is critical for
decreasing visual perception confidence during these scenarios. Thus, we have
built a new dataset for weather (fog, rain, and snow) classification and light
level (bright, moderate, and low) classification. Furthermore, we provide
street type (asphalt, grass, and cobblestone) classification, leading to 9
labels. Each image has three labels corresponding to weather, light level, and
street type. We recorded the data using an industrial RCCC (red/clear)
front camera with a resolution of 1024×1084. We collected 15k
video sequences and sampled 60k images. We implement an active learning
framework to reduce the dataset's redundancy and find the optimal set of frames
for training a model. We distilled the 60k images further to 1.1k images, which
will be shared publicly after privacy anonymization. There is no public dataset
for weather and light level classification focused on autonomous driving to the
best of our knowledge. The baseline ResNet18 network used for weather
classification achieves state-of-the-art results in two non-automotive weather
classification public datasets but significantly lower accuracy on our proposed
dataset, demonstrating it is not saturated and needs further research.
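The abstract only summarizes the active learning framework that distills the 60k sampled images to 1.1k. The actual selection criterion is not stated here; as a minimal sketch, assuming a simple entropy-based uncertainty sampling strategy (the `entropy`, `select_frames`, and `toy_predict` names below are hypothetical, not from the paper), the frame-selection step could look like:

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_frames(frames, predict, budget):
    """Greedy uncertainty sampling: keep the `budget` frames whose
    predicted class distributions have the highest entropy."""
    return sorted(frames, key=lambda f: entropy(predict(f)), reverse=True)[:budget]

# Toy usage with a hypothetical classifier that is confident on
# even-numbered frames and uncertain on odd-numbered ones; the
# selector should therefore pick odd frames first.
def toy_predict(frame_id):
    return [0.98, 0.01, 0.01] if frame_id % 2 == 0 else [0.4, 0.3, 0.3]

picked = select_frames(list(range(10)), toy_predict, budget=3)
```

In practice the predictions would come from the ResNet18 baseline, and a diversity term would typically be added so that near-duplicate frames from the same video sequence are not all selected.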
Related papers
- LoLI-Street: Benchmarking Low-Light Image Enhancement and Beyond [37.47964043913622]
We introduce a new dataset LoLI-Street (Low-Light Images of Streets) with 33k paired low-light and well-exposed images from street scenes in developed cities.
LoLI-Street dataset also features 1,000 real low-light test images for testing LLIE models under real-life conditions.
arXiv Detail & Related papers (2024-10-13T13:11:56Z) - AllWeatherNet: Unified Image Enhancement for Autonomous Driving under Adverse Weather and Low-Light Conditions [24.36482818960804]
We propose a method to improve the visual quality and clarity degraded by adverse conditions.
Our method, AllWeather-Net, utilizes a novel hierarchical architecture to enhance images across all adverse conditions.
We show our model's generalization ability by applying it to unseen domains without re-training, achieving up to 3.9% mIoU improvement.
arXiv Detail & Related papers (2024-09-03T16:47:01Z) - Enhancing Autonomous Vehicle Perception in Adverse Weather through Image Augmentation during Semantic Segmentation Training [0.0]
We trained encoder-decoder UNet models to perform semantic segmentation, applying image augmentations during training.
Models trained on weather data have significantly lower losses than those trained on augmented data in all conditions except for clear days.
arXiv Detail & Related papers (2024-08-14T00:08:28Z) - NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z) - Learning Real-World Image De-Weathering with Imperfect Supervision [57.748585821252824]
Existing real-world de-weathering datasets often exhibit inconsistent illumination, position, and textures between the ground-truth images and the input degraded images.
We develop a Consistent Label Constructor (CLC) to generate a pseudo-label as consistent as possible with the input degraded image.
We combine the original imperfect labels and pseudo-labels to jointly supervise the de-weathering model by the proposed Information Allocation Strategy.
arXiv Detail & Related papers (2023-10-23T14:02:57Z) - Counting Crowds in Bad Weather [68.50690406143173]
We propose a method for robust crowd counting in adverse weather scenarios.
Our model learns effective features and adaptive queries to account for large appearance variations.
Experimental results show that the proposed algorithm is effective in counting crowds under different weather types on benchmark datasets.
arXiv Detail & Related papers (2023-06-02T00:00:09Z) - ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural
Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z) - NoiSER: Noise is All You Need for Enhancing Low-Light Images Without
Task-Related Data [103.04999391668753]
We show that it is possible to enhance a low-light image without any task-related training data.
Technically, we propose a new, magical, effective and efficient method, termed Noise SElf-Regression (NoiSER).
Our NoiSER is highly competitive to current task-related data based LLIE models in terms of quantitative and visual results.
arXiv Detail & Related papers (2022-11-09T06:18:18Z) - SODA10M: Towards Large-Scale Object Detection Benchmark for Autonomous
Driving [94.11868795445798]
We release a Large-Scale Object Detection benchmark for Autonomous driving, named as SODA10M, containing 10 million unlabeled images and 20K images labeled with 6 representative object categories.
To improve diversity, one frame is collected every ten seconds across 32 different cities under different weather conditions, periods, and location scenes.
We provide extensive experiments and deep analyses of existing supervised state-of-the-art detection models, popular self-supervised and semi-supervised approaches, and some insights about how to develop future models.
arXiv Detail & Related papers (2021-06-21T13:55:57Z) - DAWN: Vehicle Detection in Adverse Weather Nature Dataset [4.09920839425892]
We present a new dataset consisting of real-world images collected under various adverse weather conditions called DAWN.
The dataset comprises a collection of 1000 images from real-traffic environments, which are divided into four sets of weather conditions: fog, snow, rain and sandstorms.
This data helps interpreting effects caused by the adverse weather conditions on the performance of vehicle detection systems.
arXiv Detail & Related papers (2020-08-12T15:48:49Z) - CloudCast: A Satellite-Based Dataset and Baseline for Forecasting Clouds [0.0]
In this paper, we present a novel satellite-based dataset called "CloudCast".
It consists of 70,080 images with 10 different cloud types for multiple layers of the atmosphere annotated on a pixel level.
The spatial resolution of the dataset is 928 x 1530 pixels (3x3 km per pixel) with 15-min intervals between frames for the period 2017-01-01 to 2018-12-31.
arXiv Detail & Related papers (2020-07-15T20:20:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.