Advancing Roadway Sign Detection with YOLO Models and Transfer Learning
- URL: http://arxiv.org/abs/2406.09437v1
- Date: Tue, 11 Jun 2024 20:00:52 GMT
- Title: Advancing Roadway Sign Detection with YOLO Models and Transfer Learning
- Authors: Selvia Nafaa, Hafsa Essam, Karim Ashour, Doaa Emad, Rana Mohamed, Mohammed Elhenawy, Huthaifa I. Ashqar, Abdallah A. Hassan, Taqwa I. Alhadidi
- Abstract summary: We modified YOLOv5 and YOLOv8 to detect and classify roadway signs under varying illumination conditions.
For the YOLOv8 model, varying the number of epochs and batch size yields consistent mAP50 scores, ranging from 94.6% to 97.1% on the test set.
The YOLOv5 model demonstrates competitive performance, with mAP50 scores ranging from 92.4% to 96.9%.
- Score: 3.7078234026046877
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Roadway sign detection and recognition is an essential element of Advanced Driver Assistance Systems (ADAS). Several artificial intelligence methods have been widely used for this task, among them YOLOv5 and YOLOv8. In this paper, we used modified YOLOv5 and YOLOv8 models to detect and classify roadway signs under varying illumination conditions. Experimental results indicated that for the YOLOv8 model, varying the number of epochs and batch size yields consistent mAP50 scores, ranging from 94.6% to 97.1% on the test set. The YOLOv5 model demonstrates competitive performance, with mAP50 scores ranging from 92.4% to 96.9%. These results suggest that both models perform well across different training setups, with YOLOv8 generally achieving slightly higher mAP50 scores, offering valuable insights for practitioners seeking reliable and adaptable solutions in object detection applications.
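The training protocol described in the abstract (transfer learning from pretrained checkpoints while varying the number of epochs and the batch size, then scoring mAP50, i.e. mean average precision at an IoU threshold of 0.5, on a held-out split) can be outlined with the Ultralytics Python API. The snippet below is an illustrative sketch, not the authors' code: the epoch/batch grid is assumed, and roadway_signs.yaml stands in for a dataset config that is not given here.

```python
# Hedged sketch: fine-tune COCO-pretrained YOLOv8 weights on a traffic-sign
# dataset over a small epochs x batch-size grid and record mAP50 per run.
# "roadway_signs.yaml" is a hypothetical dataset config (image paths + class
# names); the grid values are assumptions, not the paper's actual settings.
from ultralytics import YOLO

EPOCH_OPTIONS = (50, 100)
BATCH_OPTIONS = (16, 32)

scores = {}
for epochs in EPOCH_OPTIONS:
    for batch in BATCH_OPTIONS:
        model = YOLO("yolov8n.pt")                   # start from pretrained weights (transfer learning)
        model.train(data="roadway_signs.yaml",       # placeholder dataset config
                    epochs=epochs, batch=batch, imgsz=640)
        metrics = model.val(split="test")            # evaluate on the held-out test split
        scores[(epochs, batch)] = metrics.box.map50  # mean AP at IoU 0.5

for (epochs, batch), map50 in sorted(scores.items()):
    print(f"epochs={epochs:3d}  batch={batch:2d}  mAP50={map50:.3f}")
```

The same loop can be pointed at a YOLOv5-family checkpoint supported by the installed package, which is one way to compare the two model families under identical settings.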
Related papers
- Deep Learning Models for UAV-Assisted Bridge Inspection: A YOLO Benchmark Analysis [0.41942958779358674]
We benchmark 23 models belonging to the four newest YOLO variants (YOLOv5, YOLOv6, YOLOv7, YOLOv8).
We identify YOLOv8n, YOLOv7tiny, and YOLOv6m as the models offering an optimal balance between accuracy and processing speed.
Our findings accelerate the model selection process for UAVs, enabling more efficient and reliable bridge inspections.
arXiv Detail & Related papers (2024-11-07T07:03:40Z)
- Cutting-Edge Detection of Fatigue in Drivers: A Comparative Study of Object Detection Models [0.0]
This research delves into the development of a fatigue detection system based on modern object detection algorithms, including YOLOv5, YOLOv6, YOLOv7, and YOLOv8.
By comparing the performance of these models, we evaluate their effectiveness in real-time detection of fatigue-related behavior in drivers.
The study addresses challenges like environmental variability and detection accuracy and suggests a roadmap for enhancing real-time detection.
arXiv Detail & Related papers (2024-10-19T08:06:43Z)
- Comparing YOLOv5 Variants for Vehicle Detection: A Performance Analysis [0.0]
This study provides a comparative analysis of five YOLOv5 variants, YOLOv5n6s, YOLOv5s6s, YOLOv5m6s, YOLOv5l6s, and YOLOv5x6s, for vehicle detection.
YOLOv5n6s demonstrated a strong balance between precision and recall, particularly in detecting Cars.
arXiv Detail & Related papers (2024-08-22T17:06:29Z)
- YOLOv10: Real-Time End-to-End Object Detection [68.28699631793967]
YOLOs have emerged as the predominant paradigm in the field of real-time object detection.
The reliance on non-maximum suppression (NMS) for post-processing hampers the end-to-end deployment of YOLOs (a minimal NMS sketch is given after this list).
We introduce the holistic efficiency-accuracy driven model design strategy for YOLOs.
arXiv Detail & Related papers (2024-05-23T11:44:29Z)
- YOLO-World: Real-Time Open-Vocabulary Object Detection [87.08732047660058]
We introduce YOLO-World, an innovative approach that enhances YOLO with open-vocabulary detection capabilities.
Our method excels in detecting a wide range of objects in a zero-shot manner with high efficiency.
YOLO-World achieves 35.4 AP with 52.0 FPS on V100, which outperforms many state-of-the-art methods in terms of both accuracy and speed.
arXiv Detail & Related papers (2024-01-30T18:59:38Z)
- Investigating YOLO Models Towards Outdoor Obstacle Detection For Visually Impaired People [3.4628430044380973]
Seven different YOLO object detection models were implemented.
YOLOv8 was found to be the best model, which reached a precision of 80% and a recall of 68.2% on a well-known Obstacle dataset.
YOLO-NAS was found to be suboptimal for the obstacle detection task.
arXiv Detail & Related papers (2023-12-10T13:16:22Z)
- YOLO-MS: Rethinking Multi-Scale Representation Learning for Real-time Object Detection [80.11152626362109]
We provide an efficient and performant object detector, termed YOLO-MS.
We train our YOLO-MS on the MS COCO dataset from scratch without relying on any other large-scale datasets.
Our work can also be used as a plug-and-play module for other YOLO models.
arXiv Detail & Related papers (2023-08-10T10:12:27Z)
- Performance Analysis of YOLO-based Architectures for Vehicle Detection from Traffic Images in Bangladesh [0.0]
We find the best-suited YOLO architecture for fast and accurate vehicle detection from traffic images in Bangladesh.
Models were trained on a dataset containing 7390 images belonging to 21 types of vehicles.
We found the YOLOv5x variant to be the best-suited model, outperforming the YOLOv3 and YOLOv5s models by 7% and 4% in mAP, and by 12% and 8.5% in accuracy, respectively.
arXiv Detail & Related papers (2022-12-18T18:53:35Z)
- A lightweight and accurate YOLO-like network for small target detection in Aerial Imagery [94.78943497436492]
We present YOLO-S, a simple, fast and efficient network for small target detection.
YOLO-S exploits a small feature extractor based on Darknet20, as well as skip connection, via both bypass and concatenation.
YOLO-S has 87% fewer parameters and roughly half the FLOPs of YOLOv3, making it practical to deploy for low-power industrial applications.
arXiv Detail & Related papers (2022-04-05T16:29:49Z)
- To be Critical: Self-Calibrated Weakly Supervised Learning for Salient Object Detection [95.21700830273221]
Weakly-supervised salient object detection (WSOD) aims to develop saliency models using image-level annotations.
We propose a self-calibrated training strategy by explicitly establishing a mutual calibration loop between pseudo labels and network predictions.
We prove that even a much smaller dataset with well-matched annotations can help models achieve better performance and generalizability.
arXiv Detail & Related papers (2021-09-04T02:45:22Z)
- Workshop on Autonomous Driving at CVPR 2021: Technical Report for Streaming Perception Challenge [57.647371468876116]
We introduce our real-time 2D object detection system for the realistic autonomous driving scenario.
Our detector is built on a newly designed YOLO model, called YOLOX.
On the Argoverse-HD dataset, our system achieves 41.0 streaming AP, surpassing the second-place entry by 7.8 and 6.1 points on the detection-only and full-stack tracks, respectively.
arXiv Detail & Related papers (2021-07-27T06:36:06Z)
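A recurring point in the list above, made explicit in the YOLOv10 entry, is that classic YOLO pipelines rely on non-maximum suppression (NMS) as a separate post-processing step, which is what complicates end-to-end deployment. For reference, a minimal single-class greedy NMS is sketched below; it is a generic NumPy illustration, not code from any of the listed papers, and production detectors use optimized, batched implementations.

```python
# Minimal greedy NMS over axis-aligned boxes for a single class.
# boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
# Illustrative only; real detectors run vectorized, multi-class versions.
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5) -> list[int]:
    order = scores.argsort()[::-1]          # highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of box i with every remaining box
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]     # drop boxes overlapping box i too much
    return keep
```

Greedy NMS keeps the highest-scoring box, discards every remaining box whose IoU with it exceeds the threshold, and repeats until no candidates remain; removing this step entirely is what NMS-free designs such as YOLOv10 aim for.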