Comparison Of Deep Object Detectors On A New Vulnerable Pedestrian
Dataset
- URL: http://arxiv.org/abs/2212.06218v2
- Date: Mon, 12 Feb 2024 22:03:09 GMT
- Title: Comparison Of Deep Object Detectors On A New Vulnerable Pedestrian
Dataset
- Authors: Devansh Sharma, Tihitina Hade, Qing Tian
- Abstract summary: We introduce a new dataset for vulnerable pedestrian detection: the BG Vulnerable Pedestrian dataset.
This dataset consists of images collected from the public domain and manually-annotated bounding boxes.
On the proposed dataset, we have trained and tested five classic or state-of-the-art object detection models, i.e., YOLOv4, YOLOv5, YOLOX, Faster R-CNN, and EfficientDet.
- Score: 2.7624021966289605
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pedestrian safety is one primary concern in autonomous driving. The
under-representation of vulnerable groups in today's pedestrian datasets points
to an urgent need for a dataset of vulnerable road users. In order to help
train comprehensive models and subsequently drive research to improve the
accuracy of vulnerable pedestrian identification, we first introduce a new
dataset for vulnerable pedestrian detection in this paper: the BG Vulnerable
Pedestrian (BGVP) dataset. The dataset includes four classes, i.e., Children
Without Disability, Elderly Without Disability, With Disability, and
Non-Vulnerable. This dataset consists of images collected from the public
domain and manually-annotated bounding boxes. In addition, on the proposed
dataset, we have trained and tested five classic or state-of-the-art object
detection models, i.e., YOLOv4, YOLOv5, YOLOX, Faster R-CNN, and EfficientDet.
Our results indicate that YOLOX and YOLOv4 perform the best on our dataset,
YOLOv4 scoring 0.7999 and YOLOX scoring 0.7779 on the mAP 0.5 metric, while
YOLOX outperforms YOLOv4 by 3.8 percent on the mAP 0.5:0.95 metric. Generally
speaking, all five detectors do well predicting the With Disability class and
perform poorly in the Elderly Without Disability class. YOLOX consistently
outperforms all other detectors on the mAP (0.5:0.95) per class metric,
obtaining 0.5644, 0.5242, 0.4781, and 0.6796 for Children Without Disability,
Elderly Without Disability, Non-vulnerable, and With Disability, respectively.
Our dataset and codes are available at https://github.com/devvansh1997/BGVP.
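The reported results rest on the two standard detection metrics, mAP@0.5 and mAP@0.5:0.95. As a minimal sketch (not the authors' evaluation code, which follows the usual COCO protocol), the computation for a single class can be written as an IoU match followed by precision-recall integration; box format (x1, y1, x2, y2) is assumed here:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def average_precision(preds, gts, iou_thr):
    """AP at one IoU threshold. preds = [(score, box)], gts = [box]."""
    preds = sorted(preds, key=lambda p: p[0], reverse=True)
    matched = set()
    tp, fp = [], []
    for _score, box in preds:
        # Greedily match each prediction to the best unmatched ground truth.
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(gts):
            if j in matched:
                continue
            v = iou(box, gt)
            if v > best_iou:
                best_iou, best_j = v, j
        if best_iou >= iou_thr:
            matched.add(best_j)
            tp.append(1); fp.append(0)
        else:
            tp.append(0); fp.append(1)
    # Integrate the precision-recall curve with a simple running sum.
    ap, tp_cum, fp_cum, prev_recall = 0.0, 0, 0, 0.0
    for t, f in zip(tp, fp):
        tp_cum += t; fp_cum += f
        recall = tp_cum / len(gts)
        precision = tp_cum / (tp_cum + fp_cum)
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap

def map_50_95(preds, gts):
    """COCO-style mean over IoU thresholds 0.50, 0.55, ..., 0.95."""
    thrs = [0.5 + 0.05 * i for i in range(10)]
    return sum(average_precision(preds, gts, t) for t in thrs) / len(thrs)
```

Averaging AP over all classes gives mAP; the 0.5:0.95 variant is stricter because it also rewards tight box localization, which is why the per-class figures above are lower than the mAP@0.5 scores.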
Related papers
- YOLOv10 for Automated Fracture Detection in Pediatric Wrist Trauma X-rays [2.4554686192257424]
This study is the first to evaluate various YOLOv10 variants to assess their performance in detecting pediatric wrist fractures.
It investigates how changes in model complexity, scaling the architecture, and implementing a dual-label assignment strategy can enhance detection performance.
arXiv Detail & Related papers (2024-07-22T14:54:51Z)
- Global Context Modeling in YOLOv8 for Pediatric Wrist Fracture Detection [0.0]
Children often suffer wrist injuries in daily life, and radiologists need to analyze and interpret the resulting fracture X-ray images before surgical treatment.
The development of deep learning has enabled neural network models to work as computer-assisted diagnosis (CAD) tools.
This paper proposes an improved YOLOv8 model for fracture detection that incorporates a global context (GC) block.
arXiv Detail & Related papers (2024-07-03T14:36:07Z)
- YOLOv10: Real-Time End-to-End Object Detection [68.28699631793967]
YOLOs have emerged as the predominant paradigm in the field of real-time object detection.
The reliance on the non-maximum suppression (NMS) for post-processing hampers the end-to-end deployment of YOLOs.
We introduce the holistic efficiency-accuracy driven model design strategy for YOLOs.
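The NMS post-processing that YOLOv10 seeks to eliminate is the classic greedy procedure: keep the highest-scoring box, suppress overlapping boxes, repeat. A minimal sketch (illustrative, not the paper's code; boxes assumed in (x1, y1, x2, y2) format):

```python
def _iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)           # highest-scoring remaining box
        keep.append(i)
        # Suppress all remaining boxes that overlap it too much.
        order = [j for j in order if _iou(boxes[i], boxes[j]) < iou_thr]
    return keep
```

The sequential, data-dependent loop is exactly what makes this step awkward to fuse into an end-to-end exported model, which motivates the NMS-free designs the entry describes.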
arXiv Detail & Related papers (2024-05-23T11:44:29Z)
- YOLO-World: Real-Time Open-Vocabulary Object Detection [87.08732047660058]
We introduce YOLO-World, an innovative approach that enhances YOLO with open-vocabulary detection capabilities.
Our method excels in detecting a wide range of objects in a zero-shot manner with high efficiency.
YOLO-World achieves 35.4 AP with 52.0 FPS on V100, which outperforms many state-of-the-art methods in terms of both accuracy and speed.
arXiv Detail & Related papers (2024-01-30T18:59:38Z)
- Investigating YOLO Models Towards Outdoor Obstacle Detection For Visually Impaired People [3.4628430044380973]
Seven different YOLO object detection models were implemented.
YOLOv8 was found to be the best model, reaching a precision of 80% and a recall of 68.2% on a well-known obstacle dataset.
YOLO-NAS was found to be suboptimal for the obstacle detection task.
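Precision and recall figures like those in this entry come directly from true-positive, false-positive, and false-negative counts after IoU matching. A minimal sketch (the counts in the usage line are hypothetical, not taken from the paper):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```

For example, 80 correct detections with 20 spurious boxes and 40 missed objects would give `precision_recall(80, 20, 40)`, i.e. precision 0.8 and recall about 0.667.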
arXiv Detail & Related papers (2023-12-10T13:16:22Z)
- Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls and New Benchmarking [66.83273589348758]
Link prediction attempts to predict whether an unseen edge exists based on only a portion of edges of a graph.
A flurry of methods have been introduced in recent years that attempt to make use of graph neural networks (GNNs) for this task.
New and diverse datasets have also been created to better evaluate the effectiveness of these new models.
arXiv Detail & Related papers (2023-06-18T01:58:59Z)
- EdgeYOLO: An Edge-Real-Time Object Detector [69.41688769991482]
This paper proposes an efficient, low-complexity and anchor-free object detector based on the state-of-the-art YOLO framework.
We develop an enhanced data augmentation method to effectively suppress overfitting during training, and design a hybrid random loss function to improve the detection accuracy of small objects.
Our baseline model reaches 50.6% AP50:95 and 69.8% AP50 on the MS COCO 2017 dataset, and 26.4% AP50:95 and 44.8% AP50 on the VisDrone 2019-DET dataset, and it meets real-time requirements (FPS >= 30) on an Nvidia edge-computing device.
arXiv Detail & Related papers (2023-02-15T06:05:14Z)
- Automatic Cattle Identification using YOLOv5 and Mosaic Augmentation: A Comparative Analysis [2.161241370008739]
This paper investigates the YOLOv5 model to identify cattle in the yards.
Muzzle patterns in cattle are unique biometric solutions like a fingerprint in humans.
arXiv Detail & Related papers (2022-10-21T13:13:40Z)
- A lightweight and accurate YOLO-like network for small target detection in Aerial Imagery [94.78943497436492]
We present YOLO-S, a simple, fast and efficient network for small target detection.
YOLO-S exploits a small feature extractor based on Darknet20, as well as skip connection, via both bypass and concatenation.
YOLO-S has an 87% decrease of parameter size and almost one half FLOPs of YOLOv3, making practical the deployment for low-power industrial applications.
arXiv Detail & Related papers (2022-04-05T16:29:49Z)
- COVID-19 Detection Using CT Image Based On YOLOv5 Network [31.848436570442704]
The dataset is provided by the Kaggle platform, and we choose YOLOv5 as our model.
We introduce some methods for object detection in the related work section.
Object detection can be divided into two streams: one-stage and two-stage.
arXiv Detail & Related papers (2022-01-24T21:50:58Z)
- Bootstrapping Your Own Positive Sample: Contrastive Learning With Electronic Health Record Data [62.29031007761901]
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
arXiv Detail & Related papers (2021-04-07T06:02:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.