YOLOv8-Based Visual Detection of Road Hazards: Potholes, Sewer Covers, and Manholes
- URL: http://arxiv.org/abs/2311.00073v1
- Date: Tue, 31 Oct 2023 18:33:26 GMT
- Title: YOLOv8-Based Visual Detection of Road Hazards: Potholes, Sewer Covers, and Manholes
- Authors: Om M. Khare, Shubham Gandhi, Aditya M. Rahalkar, Sunil Mane
- Abstract summary: This research paper provides a comprehensive evaluation of YOLOv8, an object detection model, in the context of detecting road hazards.
A comparative analysis with previous iterations, YOLOv5 and YOLOv7, is conducted, emphasizing the importance of computational efficiency in various applications.
The research assesses the robustness and generalization capabilities of the models through mAP scores calculated across the diverse test scenarios.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Effective detection of road hazards plays a pivotal role in road
infrastructure maintenance and ensuring road safety. This research paper
provides a comprehensive evaluation of YOLOv8, an object detection model, in
the context of detecting road hazards such as potholes, sewer covers, and
manholes. A comparative analysis with previous iterations, YOLOv5 and YOLOv7, is
conducted, emphasizing the importance of computational efficiency in various
applications. The paper delves into the architecture of YOLOv8 and explores
image preprocessing techniques aimed at enhancing detection accuracy across
diverse conditions, including variations in lighting, road types, hazard sizes,
and types. Furthermore, hyperparameter tuning experiments are performed to
optimize model performance through adjustments in learning rates, batch sizes,
anchor box sizes, and augmentation strategies. Model evaluation is based on
Mean Average Precision (mAP), a widely accepted metric for object detection
performance. The research assesses the robustness and generalization
capabilities of the models through mAP scores calculated across the diverse
test scenarios, underlining the significance of YOLOv8 in road hazard detection
and infrastructure maintenance.
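As a concrete illustration of the workflow the abstract describes, the sketch below fine-tunes a pretrained YOLOv8 checkpoint with the Ultralytics API, adjusts the learning rate, batch size, and augmentation strength, and reads back the resulting mAP. The dataset file road_hazards.yaml, its class list, and all hyperparameter values are illustrative assumptions, not settings reported in the paper.

```python
# Minimal sketch (not the authors' code) of fine-tuning YOLOv8 on a
# road-hazard dataset and evaluating it with mAP.
from ultralytics import YOLO

model = YOLO("yolov8s.pt")  # pretrained YOLOv8-small checkpoint

# Hyperparameters the paper reports tuning: learning rate, batch size,
# and augmentation strategy (values below are placeholders).
model.train(
    data="road_hazards.yaml",  # hypothetical config: pothole, sewer cover, manhole
    epochs=100,
    imgsz=640,
    batch=16,     # batch size
    lr0=0.01,     # initial learning rate
    mosaic=1.0,   # mosaic augmentation strength
    hsv_v=0.4,    # brightness jitter for lighting variation
)

# Evaluate on the held-out split; Ultralytics reports COCO-style mAP.
metrics = model.val()
print("mAP@0.5      :", metrics.box.map50)
print("mAP@0.5:0.95 :", metrics.box.map)
```

Note that YOLOv8 uses an anchor-free detection head, so the anchor-box tuning mentioned in the abstract applies to the YOLOv5 and YOLOv7 baselines rather than to YOLOv8 itself.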
Related papers
- P-YOLOv8: Efficient and Accurate Real-Time Detection of Distracted Driving [0.0]
Distracted driving is a critical safety issue that leads to numerous fatalities and injuries worldwide.
This study addresses the need for efficient and real-time machine learning models to detect distracted driving behaviors.
A real-time object detection system is introduced, optimized for both speed and accuracy.
arXiv Detail & Related papers (2024-10-21T02:56:44Z)
- Cutting-Edge Detection of Fatigue in Drivers: A Comparative Study of Object Detection Models [0.0]
This research delves into the development of a fatigue detection system based on modern object detection algorithms, including YOLOv5, YOLOv6, YOLOv7, and YOLOv8.
By comparing the performance of these models, we evaluate their effectiveness in real-time detection of fatigue-related behavior in drivers.
The study addresses challenges like environmental variability and detection accuracy and suggests a roadmap for enhancing real-time detection.
arXiv Detail & Related papers (2024-10-19T08:06:43Z)
- Optimizing YOLO Architectures for Optimal Road Damage Detection and Classification: A Comparative Study from YOLOv7 to YOLOv10 [0.0]
This paper presents a comprehensive workflow for road damage detection using deep learning models.
To accommodate hardware limitations, large images are cropped, and lightweight models are utilized.
The proposed approach employs multiple model architectures, including a custom YOLOv7 model with Coordinate Attention layers and a Tiny YOLOv7 model.
arXiv Detail & Related papers (2024-10-10T22:55:12Z)
- YOLO9tr: A Lightweight Model for Pavement Damage Detection Utilizing a Generalized Efficient Layer Aggregation Network and Attention Mechanism [0.0]
This paper proposes YOLO9tr, a novel lightweight object detection model for pavement damage detection.
YOLO9tr is based on the YOLOv9 architecture, incorporating a partial attention block that enhances feature extraction and attention mechanisms.
The model achieves a high frame rate of up to 136 FPS, making it suitable for real-time applications such as video surveillance and automated inspection systems.
arXiv Detail & Related papers (2024-06-17T06:31:43Z)
- Diffusion-Based Particle-DETR for BEV Perception [94.88305708174796]
Bird's-Eye-View (BEV) is one of the most widely used scene representations for visual perception in Autonomous Vehicles (AVs).
Recent diffusion-based methods offer a promising approach to uncertainty modeling for visual perception but fail to effectively detect small objects in the large coverage of the BEV.
Here, we address this problem by combining the diffusion paradigm with current state-of-the-art 3D object detectors in BEV.
arXiv Detail & Related papers (2023-12-18T09:52:14Z)
- Innovative Horizons in Aerial Imagery: LSKNet Meets DiffusionDet for Advanced Object Detection [55.2480439325792]
We present an in-depth evaluation of an object detection model that integrates the LSKNet backbone with the DiffusionDet head.
The proposed model achieves a mean average precision (mAP) of approximately 45.7%, which is a significant improvement.
This advancement underscores the effectiveness of the proposed modifications and sets a new benchmark in aerial image analysis.
arXiv Detail & Related papers (2023-11-21T19:49:13Z)
- DRUformer: Enhancing the driving scene Important object detection with driving relationship self-understanding [50.81809690183755]
Traffic accidents frequently lead to fatal injuries, contributing to more than 50 million deaths as of 2023.
Previous research primarily assessed the importance of individual participants, treating them as independent entities.
We introduce Driving scene Relationship self-Understanding transformer (DRUformer) to enhance the important object detection task.
arXiv Detail & Related papers (2023-11-11T07:26:47Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Road Rutting Detection using Deep Learning on Images [0.0]
Road rutting is a severe form of road distress that can cause premature failure of the road surface, incurring early and costly maintenance.
This paper introduces a novel road rutting dataset comprising 949 images and provides both object-level and pixel-level annotations.
Object detection models and semantic segmentation models were deployed to detect road rutting on the proposed dataset.
arXiv Detail & Related papers (2022-09-28T16:53:05Z)
- Divide-and-Conquer for Lane-Aware Diverse Trajectory Prediction [71.97877759413272]
Trajectory prediction is a safety-critical tool for autonomous vehicles to plan and execute actions.
Recent methods have achieved strong performance using Multi-Choice Learning objectives such as winner-takes-all (WTA) or best-of-many (a generic sketch of the WTA objective appears after this list).
The work addresses two key challenges in trajectory prediction: learning diverse outputs, and improving predictions by imposing constraints derived from driving knowledge.
arXiv Detail & Related papers (2021-04-16T17:58:56Z)
- Automotive Radar Interference Mitigation with Unfolded Robust PCA based on Residual Overcomplete Auto-Encoder Blocks [88.46770122522697]
In autonomous driving, radar systems play an important role in detecting targets such as other vehicles on the road.
Deep learning methods for automotive radar interference mitigation can successfully estimate the amplitude of targets, but fail to recover the phase of the respective targets.
We propose an efficient and effective technique that is able to estimate both amplitude and phase in the presence of interference.
arXiv Detail & Related papers (2020-10-14T09:41:06Z)
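For the winner-takes-all (WTA) objective mentioned in the Divide-and-Conquer trajectory-prediction entry above, the following is a minimal, generic sketch of how such a loss is commonly formulated; it is an illustrative assumption, not the exact objective used in that paper.

```python
# Generic winner-takes-all (WTA) loss sketch for multi-hypothesis trajectory
# prediction: only the closest of K predicted trajectories receives gradient.
import torch

def wta_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred: (B, K, T, 2) -- K candidate trajectories of T (x, y) points.
    target: (B, T, 2)     -- ground-truth trajectory."""
    # Per-hypothesis mean L2 displacement error, shape (B, K).
    err = torch.linalg.norm(pred - target.unsqueeze(1), dim=-1).mean(dim=-1)
    # Winner takes all: back-propagate only through the best hypothesis.
    return err.min(dim=1).values.mean()

# Example: 8 agents, 6 hypotheses, 12 future time steps.
loss = wta_loss(torch.randn(8, 6, 12, 2), torch.randn(8, 12, 2))
```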