Learning to Automatically Catch Potholes in Worldwide Road Scene Images
- URL: http://arxiv.org/abs/2105.07986v2
- Date: Tue, 18 May 2021 07:15:56 GMT
- Title: Learning to Automatically Catch Potholes in Worldwide Road Scene Images
- Authors: J. Javier Yebes, David Montero, Ignacio Arriola
- Abstract summary: This research work tackled the challenge of pothole detection from images of real-world road scenes.
We built a large dataset of images with pothole annotations.
Then, we fine-tuned four different object detection models based on Faster R-CNN and SSD deep neural networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Among the several road hazards present on any paved way in the world,
potholes are one of the most annoying and also involve high maintenance costs.
There is increasing interest in the automated detection of these hazards,
enabled by technological and research progress. Our research work tackled the
challenge of pothole detection from images of real-world road scenes. The main
novelty resides in the application of the latest progress in AI to learn the
visual appearance of potholes. We built a large dataset of images with pothole
annotations. The images contain road scenes from different cities in the world,
taken with different cameras, vehicles and viewpoints under varied
environmental conditions. Then, we fine-tuned four different object detection
models based on the Faster R-CNN and SSD deep neural networks. We achieved high
average precision, and the pothole detector was tested on the Nvidia DrivePX2
platform with GPGPU capability, which can be embedded in vehicles. Moreover, it
was deployed on a real vehicle to notify the detected potholes to a given IoT
platform as part of the AUTOPILOT H2020 project.
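As a rough illustration of the kind of fine-tuning described in the abstract, the sketch below adapts a COCO-pretrained Faster R-CNN from torchvision to a single pothole class. It is a minimal sketch under stated assumptions, not the authors' original pipeline: the `PotholeDataset` class, paths and hyperparameters are hypothetical, and the SSD variants mentioned in the abstract would need an analogous replacement of the classification head.

```python
# Minimal fine-tuning sketch for a two-class (background + pothole) detector
# using torchvision. NOT the authors' original setup; dataset class, paths and
# hyperparameters below are illustrative assumptions.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # background + pothole


def build_pothole_detector(num_classes: int = NUM_CLASSES):
    # Start from COCO-pretrained weights and replace the box predictor head
    # so the model only distinguishes background vs. pothole.
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model


def train_one_epoch(model, loader, optimizer, device):
    # `loader` yields (images, targets) in the standard torchvision detection
    # format: a list of image tensors and a list of dicts with "boxes"
    # (FloatTensor[N, 4]) and "labels" (Int64Tensor[N]).
    model.train()
    for images, targets in loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)  # dict of detection losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


if __name__ == "__main__":
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = build_pothole_detector().to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005,
                                momentum=0.9, weight_decay=5e-4)
    # A hypothetical PotholeDataset would be wrapped in a DataLoader here, e.g.:
    # loader = torch.utils.data.DataLoader(PotholeDataset("data/potholes"),
    #                                      batch_size=4, shuffle=True,
    #                                      collate_fn=lambda b: tuple(zip(*b)))
    # for epoch in range(10):
    #     train_one_epoch(model, loader, optimizer, device)
```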
Related papers
- VegaEdge: Edge AI Confluence Anomaly Detection for Real-Time Highway IoT-Applications [2.812395851874055]
Vehicle anomaly detection plays a vital role in highway safety applications such as accident prevention, rapid response, traffic flow optimization, and work zone safety.
We introduce a lightweight approach to vehicle anomaly detection by utilizing the power of trajectory prediction.
We present VegaEdge - a sophisticated AI confluence designed for real-time security and surveillance applications in modern highway settings.
arXiv Detail & Related papers (2023-11-14T03:19:55Z)
- MSight: An Edge-Cloud Infrastructure-based Perception System for Connected Automated Vehicles [58.461077944514564]
This paper presents MSight, a cutting-edge roadside perception system specifically designed for automated vehicles.
MSight offers real-time vehicle detection, localization, tracking, and short-term trajectory prediction.
Evaluations underscore the system's capability to uphold lane-level accuracy with minimal latency.
arXiv Detail & Related papers (2023-10-08T21:32:30Z)
- RoadScan: A Novel and Robust Transfer Learning Framework for Autonomous Pothole Detection in Roads [0.0]
This research paper presents a novel approach to pothole detection using Deep Learning and Image Processing techniques.
The system aims to address the critical issue of potholes on roads, which pose significant risks to road users.
arXiv Detail & Related papers (2023-08-07T10:47:08Z)
- Drone navigation and license place detection for vehicle location in indoor spaces [55.66423065924684]
This work is aimed at creating a solution based on a nano-drone that navigates across rows of parked vehicles and detects their license plates.
All computations are done in real time on the drone, which sends only its position and the detected images, allowing the creation of a 2D map.
arXiv Detail & Related papers (2023-07-19T17:46:55Z)
- Surround-view Fisheye Camera Perception for Automated Driving: Overview, Survey and Challenges [1.4452405977630436]
Four fisheye cameras on four sides of the vehicle are sufficient to cover 360° around the vehicle, capturing the entire near-field region.
Some primary use cases are automated parking, traffic jam assist, and urban driving.
Due to the large radial distortion of fisheye cameras, standard algorithms cannot be extended easily to the surround-view use case.
arXiv Detail & Related papers (2022-05-26T11:38:04Z)
- CODA: A Real-World Road Corner Case Dataset for Object Detection in Autonomous Driving [117.87070488537334]
We introduce a challenging dataset named CODA that exposes this critical problem of vision-based detectors.
The performance of standard object detectors trained on large-scale autonomous driving datasets significantly drops to no more than 12.8% in mAR.
We experiment with the state-of-the-art open-world object detector and find that it also fails to reliably identify the novel objects in CODA.
arXiv Detail & Related papers (2022-03-15T08:32:56Z)
- Vehicle trajectory prediction works, but not everywhere [75.36961426916639]
We present a novel method that automatically generates realistic scenes that cause state-of-the-art models to go off-road.
We promote a simple yet effective generative model based on atomic scene generation functions along with physical constraints.
arXiv Detail & Related papers (2021-12-07T18:59:15Z)
- Real-Time Pothole Detection Using Deep Learning [0.0]
This study deployed and tested different deep learning architectures to detect potholes.
The images used for training were collected with a cellphone mounted on the windshield of a car.
The system was able to detect potholes from a range of 100 meters from the camera.
arXiv Detail & Related papers (2021-07-13T19:36:34Z)
- iCurb: Imitation Learning-based Detection of Road Curbs using Aerial Images for Autonomous Driving [11.576868193291997]
Detection of road curbs is an essential capability for autonomous driving.
Usually, road curbs are detected on-line using vehicle-mounted sensors, such as video cameras and 3-D Lidars.
We propose a novel solution to detect road curbs off-line using aerial images.
arXiv Detail & Related papers (2021-03-31T14:40:31Z)
- Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object Detection [55.12894776039135]
State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to over-fit to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments.
arXiv Detail & Related papers (2021-03-26T01:18:11Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.