A Computer Vision-assisted Approach to Automated Real-Time Road
Infrastructure Management
- URL: http://arxiv.org/abs/2202.13285v1
- Date: Sun, 27 Feb 2022 04:08:00 GMT
- Title: A Computer Vision-assisted Approach to Automated Real-Time Road
Infrastructure Management
- Authors: Philippe Heitzmann
- Abstract summary: We propose a supervised object detection approach to detect and classify road distresses in real time via a vehicle dashboard-mounted smartphone camera.
Our results rank in the top 5 of 121 teams that entered the IEEE's 2020 Global Road Damage Detection ("GRDC") Challenge.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate automated detection of road pavement distresses is critical for the
timely identification and repair of potentially accident-inducing road hazards
such as potholes and other surface-level asphalt cracks. Deployment of such a
system would be further advantageous in low-resource environments where lack of
government funding for infrastructure maintenance typically entails heightened
risks of potentially fatal vehicular road accidents as a result of inadequate
and infrequent manual inspection of road systems for road hazards. To remedy
this, a recent research initiative organized by the Institute of Electrical and
Electronics Engineers ("IEEE") as part of their 2020 Global Road Damage
Detection ("GRDC") Challenge published in May 2020 a novel 21,041 annotated
image dataset of various road distresses calling upon academic and other
researchers to submit innovative deep learning-based solutions to these road
hazard detection problems. Making use of this dataset, we propose a supervised
object detection approach leveraging You Only Look Once ("YOLO") and the Faster
R-CNN frameworks to detect and classify road distresses in real time via a
vehicle dashboard-mounted smartphone camera, producing experimental results
with a 0.68 F1-score that rank in the top 5 of the 121 teams that had entered
this challenge as of December 2021.
Related papers
- The RoboDrive Challenge: Drive Anytime Anywhere in Any Condition [136.32656319458158]
The 2024 RoboDrive Challenge was crafted to propel the development of driving perception technologies.
This year's challenge consisted of five distinct tracks and attracted 140 registered teams from 93 institutes across 11 countries.
The competition culminated in 15 top-performing solutions.
arXiv Detail & Related papers (2024-05-14T17:59:57Z) - RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z) - MSight: An Edge-Cloud Infrastructure-based Perception System for
Connected Automated Vehicles [58.461077944514564]
This paper presents MSight, a cutting-edge roadside perception system specifically designed for automated vehicles.
MSight offers real-time vehicle detection, localization, tracking, and short-term trajectory prediction.
Evaluations underscore the system's capability to uphold lane-level accuracy with minimal latency.
arXiv Detail & Related papers (2023-10-08T21:32:30Z) - RSRD: A Road Surface Reconstruction Dataset and Benchmark for Safe and
Comfortable Autonomous Driving [67.09546127265034]
Road surface reconstruction helps to enhance the analysis and prediction of vehicle responses for motion planning and control systems.
We introduce the Road Surface Reconstruction dataset, a real-world, high-resolution, and high-precision dataset collected with a specialized platform in diverse driving conditions.
It covers common road types containing approximately 16,000 pairs of stereo images, original point clouds, and ground-truth depth/disparity maps.
arXiv Detail & Related papers (2023-10-03T17:59:32Z) - HazardNet: Road Debris Detection by Augmentation of Synthetic Models [1.1750701213830141]
We present an algorithm to detect unseen road debris using a small set of synthetic models.
We constrain the problem domain to uncommon objects on the road and allow the deep neural network, HazardNet, to learn the semantic meaning of road debris.
arXiv Detail & Related papers (2023-03-14T00:30:24Z) - Road Damages Detection and Classification with YOLOv7 [0.0]
This work proposes to collect and label road damage data using Google Street View and to use YOLOv7 (You Only Look Once version 7) to detect and classify the damages.
The proposed approaches are applied to the Crowdsensing-based Road Damage Detection Challenge (CRDDC2022) at IEEE BigData 2022.
arXiv Detail & Related papers (2022-10-31T18:55:58Z) - RDD2022: A multi-national image dataset for automatic Road Damage
Detection [0.0]
The dataset comprises 47,420 road images from six countries: Japan, India, the Czech Republic, Norway, the United States, and China.
Four types of road damage, namely longitudinal cracks, transverse cracks, alligator cracks, and potholes, are captured in the dataset.
The dataset has been released as a part of the Crowdsensing-based Road Damage Detection Challenge (CRDDC2022).
arXiv Detail & Related papers (2022-09-18T11:29:49Z) - CODA: A Real-World Road Corner Case Dataset for Object Detection in
Autonomous Driving [117.87070488537334]
We introduce a challenging dataset named CODA that exposes this critical problem of vision-based detectors.
The performance of standard object detectors trained on large-scale autonomous driving datasets significantly drops to no more than 12.8% in mAR.
We experiment with the state-of-the-art open-world object detector and find that it also fails to reliably identify the novel objects in CODA.
arXiv Detail & Related papers (2022-03-15T08:32:56Z) - Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object
Detection [55.12894776039135]
State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to over-fit to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments.
arXiv Detail & Related papers (2021-03-26T01:18:11Z) - Road Damage Detection using Deep Ensemble Learning [36.24563211765782]
We present an ensemble model for efficient detection and classification of road damages.
Our solution utilizes a state-of-the-art object detector known as You Only Look Once (YOLO-v4).
It was able to achieve an F1 score of 0.628 on the test 1 dataset and 0.6358 on the test 2 dataset (see the detection F1 sketch after this list).
arXiv Detail & Related papers (2020-10-30T03:18:14Z) - FasterRCNN Monitoring of Road Damages: Competition and Deployment [19.95568306575998]
The IEEE 2020 Global Road Damage Detection (RDD) Challenge gives deep learning and computer vision researchers an opportunity to get involved.
This paper makes two contributions to that topic: first, we detail our solution to the RDD Challenge.
Second, we present our efforts in deploying our model on a local road network, explaining the proposed methodology and encountered challenges.
arXiv Detail & Related papers (2020-10-22T14:56:00Z)