Performance Evaluation of Real-Time Object Detection for Electric Scooters
- URL: http://arxiv.org/abs/2405.03039v1
- Date: Sun, 5 May 2024 20:00:22 GMT
- Title: Performance Evaluation of Real-Time Object Detection for Electric Scooters
- Authors: Dong Chen, Arman Hosseini, Arik Smith, Amir Farzin Nikkhah, Arsalan Heydarian, Omid Shoghli, Bradford Campbell
- Abstract summary: Electric scooters (e-scooters) have rapidly emerged as a popular mode of transportation in urban areas, yet they pose significant safety challenges.
This paper assesses the effectiveness and efficiency of cutting-edge object detectors designed for e-scooters.
The detection accuracy, measured in terms of mAP@0.5, ranges from 27.4% (YOLOv7-E6E) to 86.8% (YOLOv5s).
- Score: 9.218359701264797
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Electric scooters (e-scooters) have rapidly emerged as a popular mode of transportation in urban areas, yet they pose significant safety challenges. In the United States, the rise of e-scooters has been marked by a concerning increase in related injuries and fatalities. While deep-learning object detection is central to collision avoidance in autonomous vehicles, its application in the context of e-scooters remains relatively unexplored. This paper addresses this gap by assessing the effectiveness and efficiency of cutting-edge object detectors designed for e-scooters. To achieve this, the first comprehensive benchmark involving 22 state-of-the-art YOLO object detectors, spanning five versions (YOLOv3, YOLOv5, YOLOv6, YOLOv7, and YOLOv8), has been established for real-time traffic object detection using a self-collected dataset featuring e-scooters. The detection accuracy, measured in terms of mAP@0.5, ranges from 27.4% (YOLOv7-E6E) to 86.8% (YOLOv5s). All YOLO models, particularly YOLOv3-tiny, have displayed promising potential for real-time object detection in the context of e-scooters. Both the traffic scene dataset (https://zenodo.org/records/10578641) and the software code (https://github.com/DongChen06/ScooterDet) used for model benchmarking in this study are publicly available; they not only support improved e-scooter safety through advanced object detection but also lay the groundwork for tailored solutions toward a safer and more sustainable urban micromobility landscape.
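The headline metric, mAP@0.5, counts a prediction as a true positive when it overlaps a ground-truth box of the same class with an intersection-over-union (IoU) of at least 0.5, then averages the area under the resulting precision-recall curve over classes. Below is a minimal Python sketch of that matching step for a single class on a single image; the [x1, y1, x2, y2] box format, the toy boxes, and the unsmoothed AP integration are illustrative assumptions, not the authors' benchmarking code (which lives at the GitHub link above).

```python
# Minimal sketch of the IoU matching behind mAP@0.5 for one class on one image.
# Boxes are assumed to be axis-aligned [x1, y1, x2, y2]; this is not the paper's code.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision_at_50(preds, gts, thresh=0.5):
    """preds: list of (confidence, box); gts: list of ground-truth boxes."""
    preds = sorted(preds, key=lambda p: p[0], reverse=True)  # highest confidence first
    matched = [False] * len(gts)
    tps_fps = []
    for conf, box in preds:
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(gts):
            o = iou(box, gt)
            if o > best_iou and not matched[j]:
                best_iou, best_j = o, j
        if best_iou >= thresh:
            matched[best_j] = True
            tps_fps.append((1, 0))  # true positive
        else:
            tps_fps.append((0, 1))  # false positive (miss or duplicate)
    # Accumulate precision-recall points and integrate (no interpolation, for brevity).
    ap, tp_cum, fp_cum, prev_recall = 0.0, 0, 0, 0.0
    for tp, fp in tps_fps:
        tp_cum += tp
        fp_cum += fp
        recall = tp_cum / max(len(gts), 1)
        precision = tp_cum / (tp_cum + fp_cum)
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap

# Toy example with hypothetical boxes: one good detection plus one false positive.
gts = [[10, 10, 50, 50]]
preds = [(0.9, [12, 11, 49, 52]), (0.4, [100, 100, 140, 140])]
print(f"AP@0.5 = {average_precision_at_50(preds, gts):.2f}")
```

Full mAP@0.5 additionally accumulates matches across the whole validation set, applies precision-envelope interpolation, and averages the per-class APs; the benchmarking code linked above handles those details.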
Related papers
- YOLO-Vehicle-Pro: A Cloud-Edge Collaborative Framework for Object Detection in Autonomous Driving under Adverse Weather Conditions [8.820126303110545]
This paper proposes two innovative deep learning models: YOLO-Vehicle and YOLO-Vehicle-Pro.
YOLO-Vehicle is an object detection model tailored specifically for autonomous driving scenarios.
YOLO-Vehicle-Pro builds upon this foundation by introducing an improved image dehazing algorithm.
arXiv Detail & Related papers (2024-10-23T10:07:13Z)
- YOLO-PPA based Efficient Traffic Sign Detection for Cruise Control in Autonomous Driving [10.103731437332693]
Detecting traffic signs efficiently and accurately is critical for autonomous driving systems.
Existing object detection algorithms can hardly detect such small-scale signs.
This paper proposes a YOLO-PPA-based traffic sign detection algorithm.
arXiv Detail & Related papers (2024-09-05T07:49:21Z)
- Comparing YOLOv5 Variants for Vehicle Detection: A Performance Analysis [0.0]
This study provides a comparative analysis of five YOLOv5 variants, YOLOv5n6s, YOLOv5s6s, YOLOv5m6s, YOLOv5l6s, and YOLOv5x6s, for vehicle detection.
YOLOv5n6s demonstrated a strong balance between precision and recall, particularly in detecting Cars.
arXiv Detail & Related papers (2024-08-22T17:06:29Z)
- Target Detection of Safety Protective Gear Using the Improved YOLOv5 [6.517811916515857]
We propose YOLO-EA, an innovative model that enhances safety measure detection by integrating ECA into its backbone's convolutional layers.
YOLO-EA's effectiveness was empirically substantiated using a dataset derived from real-world railway construction site surveillance footage.
arXiv Detail & Related papers (2024-08-12T07:33:11Z)
- YOLOv10: Real-Time End-to-End Object Detection [68.28699631793967]
YOLOs have emerged as the predominant paradigm in the field of real-time object detection.
The reliance on non-maximum suppression (NMS) for post-processing hampers the end-to-end deployment of YOLOs (a minimal NMS sketch is given after this list).
We introduce the holistic efficiency-accuracy driven model design strategy for YOLOs.
arXiv Detail & Related papers (2024-05-23T11:44:29Z)
- YOLO-World: Real-Time Open-Vocabulary Object Detection [87.08732047660058]
We introduce YOLO-World, an innovative approach that enhances YOLO with open-vocabulary detection capabilities.
Our method excels in detecting a wide range of objects in a zero-shot manner with high efficiency.
YOLO-World achieves 35.4 AP with 52.0 FPS on V100, which outperforms many state-of-the-art methods in terms of both accuracy and speed.
arXiv Detail & Related papers (2024-01-30T18:59:38Z)
- Investigating YOLO Models Towards Outdoor Obstacle Detection For Visually Impaired People [3.4628430044380973]
Seven different YOLO object detection models were implemented.
YOLOv8 was found to be the best model, which reached a precision of 80% and a recall of 68.2% on a well-known Obstacle dataset.
YOLO-NAS was found to be suboptimal for the obstacle detection task.
arXiv Detail & Related papers (2023-12-10T13:16:22Z)
- YOLO-MS: Rethinking Multi-Scale Representation Learning for Real-time Object Detection [80.11152626362109]
We provide an efficient and performant object detector, termed YOLO-MS.
We train our YOLO-MS on the MS COCO dataset from scratch without relying on any other large-scale datasets.
Our work can also be used as a plug-and-play module for other YOLO models.
arXiv Detail & Related papers (2023-08-10T10:12:27Z)
- A lightweight and accurate YOLO-like network for small target detection in Aerial Imagery [94.78943497436492]
We present YOLO-S, a simple, fast and efficient network for small target detection.
YOLO-S exploits a small feature extractor based on Darknet20, as well as skip connection, via both bypass and concatenation.
YOLO-S has 87% fewer parameters and roughly half the FLOPs of YOLOv3, making deployment practical for low-power industrial applications.
arXiv Detail & Related papers (2022-04-05T16:29:49Z)
- Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object Detection [55.12894776039135]
State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to over-fit to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments.
arXiv Detail & Related papers (2021-03-26T01:18:11Z)
- Detecting Invisible People [58.49425715635312]
We re-purpose tracking benchmarks and propose new metrics for the task of detecting invisible objects.
We demonstrate that current detection and tracking systems perform dramatically worse on this task.
We also build dynamic models that explicitly reason in 3D, making use of observations produced by state-of-the-art monocular depth estimation networks.
arXiv Detail & Related papers (2020-12-15T16:54:45Z)
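As referenced in the YOLOv10 entry above, non-maximum suppression (NMS) is the post-processing step that collapses overlapping candidate boxes into a single detection, and removing it is what makes that model end-to-end. Below is a minimal single-class NMS sketch in Python; the box format, IoU threshold, and toy inputs are illustrative assumptions rather than any particular YOLO implementation.

```python
# Minimal greedy single-class NMS sketch: keep the highest-scoring box,
# drop remaining boxes that overlap it above an IoU threshold, repeat.

def iou(a, b):
    """Intersection-over-union of two axis-aligned [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression; returns indices of the boxes kept."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)  # highest-scoring remaining box
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep

# Toy example: two near-duplicate detections of one object plus a distinct one.
boxes = [[10, 10, 50, 50], [12, 12, 52, 52], [200, 200, 240, 240]]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # expected output: [0, 2]
```

Because this step runs after the network and depends on a hand-tuned threshold, NMS-free designs such as the one described in the YOLOv10 entry fold duplicate suppression into training instead.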
This list is automatically generated from the titles and abstracts of the papers on this site.