Bangladeshi Native Vehicle Detection in Wild
- URL: http://arxiv.org/abs/2405.12150v1
- Date: Mon, 20 May 2024 16:23:40 GMT
- Title: Bangladeshi Native Vehicle Detection in Wild
- Authors: Bipin Saha, Md. Johirul Islam, Shaikh Khaled Mostaque, Aditya Bhowmik, Tapodhir Karmakar Taton, Md. Nakib Hayat Chowdhury, Mamun Bin Ibne Reaz
- Abstract summary: This paper proposes a native vehicle detection dataset for the vehicle classes most commonly seen in Bangladesh.
17 distinct vehicle classes have been taken into account, with 81,542 fully annotated instances across 17,326 images.
The experiments show that the BNVD dataset serves as a reliable representation of vehicle distribution.
- Score: 1.444899524297657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of autonomous navigation relies on robust and precise vehicle recognition, but the scarcity of region-specific vehicle detection datasets impedes the development of context-aware systems. To advance terrestrial object detection research, this paper proposes a native vehicle detection dataset for the vehicle classes most commonly seen in Bangladesh. 17 distinct vehicle classes have been taken into account, with 81,542 fully annotated instances across 17,326 images. Each image is at least 1280 px wide. The dataset's average vehicle bounding box-to-image ratio is 4.7036. This Bangladesh Native Vehicle Dataset (BNVD) accounts for variations in geography, illumination, vehicle size, and orientation, making it more robust to unexpected scenarios. To examine the BNVD dataset, this work provides a thorough assessment with four successive You Only Look Once (YOLO) models, namely YOLOv5, v6, v7, and v8. The dataset's effectiveness is methodically evaluated and contrasted with other vehicle datasets already in use. The BNVD dataset exhibits a mean average precision (mAP) of 0.848 at 50% intersection over union (IoU), with corresponding precision and recall values of 0.841 and 0.774. The research findings indicate a mAP of 0.643 over an IoU range of 0.5 to 0.95. The experiments show that the BNVD dataset serves as a reliable representation of vehicle distribution and presents considerable complexities.
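The mAP figures above hinge on the IoU criterion: a predicted box counts as a true positive only if its overlap with a ground-truth box meets the threshold (0.5 for mAP@50). A minimal sketch of the IoU computation, using hypothetical boxes not drawn from the paper:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = sum of areas minus the double-counted intersection.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical prediction vs. ground truth:
pred, gt = (10, 10, 50, 50), (20, 20, 60, 60)
print(round(iou(pred, gt), 3))  # 0.391 -> below the 0.5 threshold, so not a hit
```

The reported mAP@[0.5:0.95] of 0.643 averages AP over ten such thresholds (0.5, 0.55, ..., 0.95), which is why it is lower than the single-threshold mAP@50 of 0.848.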
Related papers
- Evaluating YOLO Architectures: Implications for Real-Time Vehicle Detection in Urban Environments of Bangladesh [0.0]
Vehicle detection systems trained on non-Bangladeshi datasets struggle to accurately identify local vehicle types in Bangladesh's unique road environments.
This study evaluates six YOLO model variants on a custom dataset featuring 29 distinct vehicle classes.
arXiv Detail & Related papers (2025-09-06T09:11:44Z)
- VME: A Satellite Imagery Dataset and Benchmark for Detecting Vehicles in the Middle East and Beyond [9.576056095537563]
Vehicle detection in satellite images is crucial for traffic management, urban planning, and disaster response.
Current models struggle with real-world diversity, particularly across different regions.
We present the Vehicles in the Middle East (VME) dataset, designed explicitly for vehicle detection in high-resolution satellite images from Middle Eastern countries.
arXiv Detail & Related papers (2025-05-28T13:34:05Z)
- A Real-Time DETR Approach to Bangladesh Road Object Detection for Autonomous Vehicles [0.0]
Detection Transformers have become a state-of-the-art solution to object detection.
Real-time DETR models are shown to achieve significantly better inference times.
Our results gave a mAP50 score of 0.41518 on the public 60% test split and 0.28194 on the private 40% test split.
arXiv Detail & Related papers (2024-11-22T18:21:20Z)
- DEEGITS: Deep Learning based Framework for Measuring Heterogenous Traffic State in Challenging Traffic Scenarios [0.0]
This paper presents DEEGITS (Deep Heterogeneous Traffic State Measurement), a comprehensive framework that leverages state-of-the-art convolutional neural network (CNN) techniques to accurately and rapidly detect vehicles and pedestrians.
In this study, we enhance the training dataset through data fusion, enabling simultaneous detection of vehicles and pedestrians.
The framework is tested to measure heterogeneous traffic states in mixed traffic conditions.
arXiv Detail & Related papers (2024-11-13T04:49:32Z)
- ANNA: A Deep Learning Based Dataset in Heterogeneous Traffic for Autonomous Vehicles [2.932123507260722]
This study discusses a custom-built dataset that includes some vehicle types, common in Bangladesh, that go unidentified by existing datasets.
A dataset validity check was performed by evaluating models using the Intersection Over Union (IOU) metric.
The results demonstrated that the model trained on our custom dataset was more precise and efficient than the models trained on the KITTI or COCO datasets with respect to Bangladeshi traffic.
arXiv Detail & Related papers (2024-01-21T01:14:04Z)
- BadODD: Bangladeshi Autonomous Driving Object Detection Dataset [0.0]
We propose a comprehensive dataset for object detection in diverse driving environments across 9 districts in Bangladesh.
The dataset, collected exclusively from smartphone cameras, provides a realistic representation of real-world scenarios.
arXiv Detail & Related papers (2024-01-19T12:26:51Z)
- Whole-body Detection, Recognition and Identification at Altitude and Range [57.445372305202405]
We propose an end-to-end system evaluated on diverse datasets.
Our approach involves pre-training the detector on common image datasets and fine-tuning it on BRIAR's complex videos and images.
We conduct thorough evaluations under various conditions, such as different ranges and angles in indoor, outdoor, and aerial scenarios.
arXiv Detail & Related papers (2023-11-09T20:20:23Z)
- DeepAccident: A Motion and Accident Prediction Benchmark for V2X Autonomous Driving [76.29141888408265]
We propose a large-scale dataset containing diverse accident scenarios that frequently occur in real-world driving.
The proposed DeepAccident dataset includes 57K annotated frames and 285K annotated samples, approximately 7 times more than the large-scale nuScenes dataset.
arXiv Detail & Related papers (2023-04-03T17:37:00Z)
- Performance Analysis of YOLO-based Architectures for Vehicle Detection from Traffic Images in Bangladesh [0.0]
We find the best-suited YOLO architecture for fast and accurate vehicle detection from traffic images in Bangladesh.
Models were trained on a dataset containing 7390 images belonging to 21 types of vehicles.
We found the YOLOv5x variant to be the best-suited model, outperforming the YOLOv3 and YOLOv5s models by 7 and 4 percent in mAP, and by 12 and 8.5 percent in accuracy, respectively.
arXiv Detail & Related papers (2022-12-18T18:53:35Z)
- CODA: A Real-World Road Corner Case Dataset for Object Detection in Autonomous Driving [117.87070488537334]
We introduce a challenging dataset named CODA that exposes this critical problem of vision-based detectors.
The performance of standard object detectors trained on large-scale autonomous driving datasets drops significantly, to no more than 12.8% in mAR.
We experiment with the state-of-the-art open-world object detector and find that it also fails to reliably identify the novel objects in CODA.
arXiv Detail & Related papers (2022-03-15T08:32:56Z)
- SODA10M: Towards Large-Scale Object Detection Benchmark for Autonomous Driving [94.11868795445798]
We release a large-scale object detection benchmark for autonomous driving, named SODA10M, containing 10 million unlabeled images and 20K images labeled with 6 representative object categories.
To improve diversity, the images are collected at one frame every ten seconds within 32 different cities, under different weather conditions, periods, and location scenes.
We provide extensive experiments and deep analyses of existing supervised state-of-the-art detection models, popular self-supervised and semi-supervised approaches, and some insights about how to develop future models.
arXiv Detail & Related papers (2021-06-21T13:55:57Z)
- One Million Scenes for Autonomous Driving: ONCE Dataset [91.94189514073354]
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)
- VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.