Design of Efficient Deep Learning models for Determining Road Surface
Condition from Roadside Camera Images and Weather Data
- URL: http://arxiv.org/abs/2009.10282v1
- Date: Tue, 22 Sep 2020 02:30:32 GMT
- Title: Design of Efficient Deep Learning models for Determining Road Surface
Condition from Roadside Camera Images and Weather Data
- Authors: Juan Carrillo, Mark Crowley, Guangyuan Pan, Liping Fu
- Abstract summary: Road maintenance during the Winter season is a safety-critical and resource-demanding operation.
One of its key activities is determining road surface condition (RSC) in order to prioritize roads and allocate cleaning efforts such as plowing or salting.
Two conventional approaches for determining RSC are: visual examination of roadside camera images by trained personnel and patrolling the roads to perform on-site inspections.
- Score: 2.8904578737516764
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Road maintenance during the Winter season is a safety-critical and
resource-demanding operation. One of its key activities is determining road surface
condition (RSC) in order to prioritize roads and allocate cleaning efforts such
as plowing or salting. Two conventional approaches for determining RSC are:
visual examination of roadside camera images by trained personnel and
patrolling the roads to perform on-site inspections. However, with more than
500 cameras collecting images across Ontario, visual examination becomes a
resource-intensive activity, difficult to scale especially during periods of
snowstorms. This paper presents the results of a study focused on improving the
efficiency of road maintenance operations. We use multiple Deep Learning models
to automatically determine RSC from roadside camera images and weather
variables, extending previous research where similar methods have been used to
deal with the problem. The dataset we use was collected during the 2017-2018
Winter season from 40 stations connected to the Ontario Road Weather
Information System (RWIS); it includes 14,000 labeled images and 70,000 weather
measurements. We train and evaluate the performance of seven state-of-the-art
models from the Computer Vision literature, including the recent DenseNet,
NASNet, and MobileNet. Moreover, through systematic ablation experiments we
adapt previously published Deep Learning models and reduce their number of
parameters to about 1.3% of their original parameter count, and by
integrating observations from weather variables the models are able to better
ascertain RSC under poor visibility conditions.
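The abstract does not spell out how the image and weather inputs are combined. The sketch below shows one common way to implement such late fusion, assuming PyTorch/torchvision, a MobileNetV2 backbone, and placeholder counts for weather variables and RSC classes; all names and sizes are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative sketch (not the paper's exact model): a compact image backbone
# whose pooled features are concatenated with weather measurements before a
# small classification head for road surface condition (RSC).
import torch
import torch.nn as nn
from torchvision import models

NUM_WEATHER_VARS = 8   # assumption: e.g., air/surface temperature, wind, precipitation
NUM_RSC_CLASSES = 4    # assumption: e.g., bare, partly covered, fully covered, unclear


class ImageWeatherRSC(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.mobilenet_v2(weights=None)   # compact backbone, in the spirit of the reduced-parameter models
        self.features = backbone.features              # convolutional feature extractor (outputs 1280 channels)
        self.pool = nn.AdaptiveAvgPool2d(1)            # global average pooling
        self.classifier = nn.Sequential(
            nn.Linear(1280 + NUM_WEATHER_VARS, 128),   # fuse image features with weather variables
            nn.ReLU(),
            nn.Linear(128, NUM_RSC_CLASSES),
        )

    def forward(self, image, weather):
        x = self.pool(self.features(image)).flatten(1)  # (batch, 1280)
        x = torch.cat([x, weather], dim=1)              # late fusion of the two modalities
        return self.classifier(x)


# Usage:
# logits = ImageWeatherRSC()(torch.randn(2, 3, 224, 224), torch.randn(2, NUM_WEATHER_VARS))
```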
Related papers
- Improving classification of road surface conditions via road area extraction and contrastive learning [2.9109581496560044]
We introduce a segmentation model so that the downstream classification model focuses only on the road surface in the image.
Our experiments on the public RTK dataset demonstrate a significant improvement with our proposed method.
arXiv Detail & Related papers (2024-07-19T15:43:16Z)
- Road Surface Friction Estimation for Winter Conditions Utilising General Visual Features [0.4972323953932129]
This paper explores computer vision-based evaluation of road surface friction from roadside cameras.
We propose a hybrid deep learning architecture, WCamNet, consisting of a pretrained visual transformer model and convolutional blocks.
arXiv Detail & Related papers (2024-04-25T12:46:23Z)
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- RSRD: A Road Surface Reconstruction Dataset and Benchmark for Safe and Comfortable Autonomous Driving [67.09546127265034]
Road surface reconstruction helps to enhance the analysis and prediction of vehicle responses for motion planning and control systems.
We introduce the Road Surface Reconstruction dataset, a real-world, high-resolution, and high-precision dataset collected with a specialized platform in diverse driving conditions.
It covers common road types containing approximately 16,000 pairs of stereo images, original point clouds, and ground-truth depth/disparity maps.
arXiv Detail & Related papers (2023-10-03T17:59:32Z)
- ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z)
- Street-View Image Generation from a Bird's-Eye View Layout [95.36869800896335]
Bird's-Eye View (BEV) Perception has received increasing attention in recent years.
Data-driven simulation for autonomous driving has been a focal point of recent research.
We propose BEVGen, a conditional generative model that synthesizes realistic and spatially consistent surrounding images.
arXiv Detail & Related papers (2023-01-11T18:39:34Z)
- Deep-Learning-Based Precipitation Nowcasting with Ground Weather Station Data and Radar Data [14.672132394870445]
We propose ASOC, a novel attentive method for effectively exploiting ground-based meteorological observations from multiple weather stations.
ASOC is designed to capture temporal dynamics of the observations and also contextual relationships between them.
We show that such a combination improves the average critical success index (CSI) of predicting heavy (at least 10 mm/hr) and light (at least 1 mm/hr) rainfall events at 1-6 hr lead times by 5.7% (a minimal sketch of the CSI computation follows this list).
arXiv Detail & Related papers (2022-10-20T14:59:58Z)
- Unsupervised Restoration of Weather-affected Images using Deep Gaussian Process-based CycleGAN [92.15895515035795]
We describe an approach for supervising deep networks that are based on CycleGAN.
We introduce new losses for training CycleGAN that lead to more effective training, resulting in high-quality reconstructions.
We demonstrate that the proposed method can be effectively applied to different restoration tasks like de-raining, de-hazing and de-snowing.
arXiv Detail & Related papers (2022-04-23T01:30:47Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- Multimodal End-to-End Learning for Autonomous Steering in Adverse Road and Weather Conditions [0.0]
We extend the previous work on end-to-end learning for autonomous steering to operate in adverse real-life conditions with multimodal data.
We collected 28 hours of driving data in several road and weather conditions and trained convolutional neural networks to predict the car steering wheel angle.
arXiv Detail & Related papers (2020-10-28T12:38:41Z)
- Integration of Roadside Camera Images and Weather Data for Monitoring Winter Road Surface Conditions [2.6955785230358966]
In winter, real-time monitoring of road surface conditions is critical for the safety of drivers and road maintenance operations.
Previous research has evaluated the potential of image classification methods for detecting road snow coverage by processing images from roadside cameras installed in RWIS (Road Weather Information System) stations.
There are a limited number of RWIS stations across Ontario, Canada; therefore, the network has reduced spatial coverage.
We suggest improving performance on this task through the integration of images and weather data collected from the RWIS stations with images from other MTO (Ministry of Transportation of Ontario) roadside cameras and weather data from Environment Canada stations.
arXiv Detail & Related papers (2020-09-22T01:43:27Z)
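The ASOC entry above reports its gains in terms of the critical success index (CSI), a standard forecast-verification score defined as hits / (hits + misses + false alarms) over thresholded events. Below is a minimal NumPy sketch of that computation; the function and argument names are hypothetical and chosen only for illustration.

```python
# Minimal sketch of the critical success index (CSI) for thresholded rainfall events.
# A threshold (e.g., 1 mm/hr for light rain, 10 mm/hr for heavy rain) turns rain
# rates into binary events before counting hits, misses, and false alarms.
import numpy as np

def critical_success_index(pred_mm_per_hr, obs_mm_per_hr, threshold):
    """CSI = hits / (hits + misses + false alarms) at a given rain-rate threshold."""
    pred = np.asarray(pred_mm_per_hr) >= threshold
    obs = np.asarray(obs_mm_per_hr) >= threshold
    hits = np.sum(pred & obs)
    misses = np.sum(~pred & obs)
    false_alarms = np.sum(pred & ~obs)
    denom = hits + misses + false_alarms
    return hits / denom if denom > 0 else np.nan

# Example: CSI for light rain (>= 1 mm/hr)
# critical_success_index([0.2, 1.5, 3.0], [0.0, 2.0, 0.5], threshold=1.0)
```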
This list is automatically generated from the titles and abstracts of the papers in this site.