An Improved Deep Convolutional Neural Network-Based Autonomous Road
Inspection Scheme Using Unmanned Aerial Vehicles
- URL: http://arxiv.org/abs/2008.06189v1
- Date: Fri, 14 Aug 2020 04:35:10 GMT
- Title: An Improved Deep Convolutional Neural Network-Based Autonomous Road
Inspection Scheme Using Unmanned Aerial Vehicles
- Authors: Syed Ali Hassan, Tariq Rahim, Soo Young Shin
- Abstract summary: This work presents an improved convolutional neural network (CNN) model and its implementation for the detection of road cracks, potholes, and yellow lanes on the road.
The purpose of yellow-lane detection and tracking is to realize autonomous navigation of an unmanned aerial vehicle (UAV): the UAV follows the yellow lane while detecting road cracks and potholes and reporting them to a server over a Wi-Fi or 5G link.
- Score: 12.618653234201089
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advancements in artificial intelligence (AI) provide a great opportunity
to develop autonomous devices. The contribution of this work is an improved
convolutional neural network (CNN) model and its implementation for the
detection of road cracks, potholes, and yellow lanes on the road. The purpose
of yellow-lane detection and tracking is to realize autonomous navigation of an
unmanned aerial vehicle (UAV): the UAV follows the yellow lane while detecting
road cracks and potholes and reporting them to a server over a Wi-Fi or 5G link.
Building one's own dataset is a laborious and time-consuming task. The dataset
is created and labeled, and then used to train both the default model and an
improved model. The performance of the two models is benchmarked with respect
to accuracy, mean average precision (mAP), and detection time. In the testing
phase, the improved model performed better in terms of accuracy and mAP. The
improved model is deployed on a UAV using the Robot Operating System (ROS) for
real-time autonomous detection of potholes and cracks in roads via the UAV's
front camera.
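The abstract's navigation-and-reporting loop (follow the yellow lane, report cracks and potholes to a server) can be sketched as below. This is a minimal illustration, not the authors' implementation: the detector interface, the proportional steering gain, and the report format are all assumptions, and the CNN, ROS, and network transport are abstracted away.

```python
# Sketch of the UAV lane-following and defect-reporting loop described in the
# abstract. All names, gains, and message formats are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Optional

FRAME_WIDTH = 640  # assumed front-camera resolution (pixels)


@dataclass
class Detection:
    label: str        # "crack", "pothole", or "yellow_lane"
    x_center: float   # horizontal center of the bounding box (pixels)
    confidence: float


def yaw_correction(detections: List[Detection],
                   gain: float = 0.005) -> Optional[float]:
    """Proportional steering: turn toward the yellow lane's center.

    Returns a yaw-rate command (positive = turn right), or None when
    no lane is detected and the UAV should hold its heading.
    """
    lanes = [d for d in detections if d.label == "yellow_lane"]
    if not lanes:
        return None
    best = max(lanes, key=lambda d: d.confidence)
    error = best.x_center - FRAME_WIDTH / 2  # pixels off-center
    return gain * error


def reports(detections: List[Detection],
            threshold: float = 0.5) -> List[dict]:
    """Collect crack/pothole detections worth sending to the server,
    e.g. serialized as JSON over a Wi-Fi or 5G link."""
    return [{"type": d.label, "confidence": d.confidence}
            for d in detections
            if d.label in ("crack", "pothole") and d.confidence >= threshold]


if __name__ == "__main__":
    frame = [
        Detection("yellow_lane", x_center=400.0, confidence=0.9),
        Detection("pothole", x_center=250.0, confidence=0.8),
        Detection("crack", x_center=100.0, confidence=0.3),  # below threshold
    ]
    print(yaw_correction(frame))  # lane is right of center, so yaw > 0
    print(reports(frame))         # only the confident pothole is reported
```

In a real system the per-frame `detections` would come from the CNN running on the UAV's front-camera stream, and the yaw command would be published as a ROS velocity message; both are stubbed out here.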
Related papers
- Leveraging GNSS and Onboard Visual Data from Consumer Vehicles for Robust Road Network Estimation [18.236615392921273]
This paper addresses the challenge of road graph construction for autonomous vehicles.
We propose using global navigation satellite system (GNSS) traces and basic image data acquired from standard sensors in consumer vehicles.
We exploit the spatial information in the data by framing the problem as a road centerline semantic segmentation task using a convolutional neural network.
arXiv Detail & Related papers (2024-08-03T02:57:37Z)
- Guiding Attention in End-to-End Driving Models [49.762868784033785]
Vision-based end-to-end driving models trained by imitation learning can lead to affordable solutions for autonomous driving.
We study how to guide the attention of these models to improve their driving quality by adding a loss term during training.
In contrast to previous work, our method does not require these salient semantic maps to be available during testing time.
arXiv Detail & Related papers (2024-04-30T23:18:51Z)
- MSight: An Edge-Cloud Infrastructure-based Perception System for Connected Automated Vehicles [58.461077944514564]
This paper presents MSight, a cutting-edge roadside perception system specifically designed for automated vehicles.
MSight offers real-time vehicle detection, localization, tracking, and short-term trajectory prediction.
Evaluations underscore the system's capability to uphold lane-level accuracy with minimal latency.
arXiv Detail & Related papers (2023-10-08T21:32:30Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- NVRadarNet: Real-Time Radar Obstacle and Free Space Detection for Autonomous Driving [57.03126447713602]
We present a deep neural network (DNN) that detects dynamic obstacles and drivable free space using automotive RADAR sensors.
The network runs faster than real time on an embedded GPU and shows good generalization across geographic regions.
arXiv Detail & Related papers (2022-09-29T01:30:34Z)
- Monocular Vision-based Prediction of Cut-in Maneuvers with LSTM Networks [0.0]
This study proposes a method to predict potentially dangerous cut-in maneuvers happening in the ego lane.
We follow a computer vision-based approach that only employs a single in-vehicle RGB camera.
Our algorithm consists of a CNN-based vehicle detection and tracking step and an LSTM-based maneuver classification step.
arXiv Detail & Related papers (2022-03-21T02:30:36Z)
- Automatic Extraction of Road Networks from Satellite Images by using Adaptive Structural Deep Belief Network [0.0]
Our model is applied to RoadTracer, an automatic road-network recognition method.
RoadTracer generates a road map of the ground surface from aerial photograph data.
To improve accuracy and computation time, our Adaptive DBN was implemented in RoadTracer in place of the CNN.
arXiv Detail & Related papers (2021-10-25T07:06:10Z)
- How to Build a Curb Dataset with LiDAR Data for Autonomous Driving [11.632427050596728]
Video cameras and 3D LiDARs are mounted on autonomous vehicles for curb detection.
Camera-based curb detection methods suffer from challenging illumination conditions.
A dataset with curb annotations, or an efficient curb-labeling approach, is therefore in high demand.
arXiv Detail & Related papers (2021-10-08T08:32:37Z)
- DAE: Discriminatory Auto-Encoder for multivariate time-series anomaly detection in air transportation [68.8204255655161]
We propose a novel anomaly detection model called Discriminatory Auto-Encoder (DAE).
It uses the baseline of a regular LSTM-based auto-encoder but with several decoders, each getting data of a specific flight phase.
Results show that the DAE achieves better results in both accuracy and speed of detection.
arXiv Detail & Related papers (2021-09-08T14:07:55Z)
- SODA10M: Towards Large-Scale Object Detection Benchmark for Autonomous Driving [94.11868795445798]
We release a Large-Scale Object Detection benchmark for Autonomous driving, named as SODA10M, containing 10 million unlabeled images and 20K images labeled with 6 representative object categories.
To improve diversity, the images are collected at one frame every ten seconds across 32 different cities under different weather conditions, periods, and location scenes.
We provide extensive experiments and deep analyses of existing supervised state-of-the-art detection models, popular self-supervised and semi-supervised approaches, and some insights about how to develop future models.
arXiv Detail & Related papers (2021-06-21T13:55:57Z)
- Traffic Lane Detection using FCN [0.0]
Lane detection is a crucial technology that enables self-driving cars to position themselves properly in multi-lane urban driving environments.
In this project, we designed an Encoder-Decoder Fully Convolutional Network for lane detection.
This model was applied to a real-world, large-scale dataset and achieved a level of accuracy that outperformed our baseline model.
arXiv Detail & Related papers (2020-04-19T22:25:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.