BadODD: Bangladeshi Autonomous Driving Object Detection Dataset
- URL: http://arxiv.org/abs/2401.10659v1
- Date: Fri, 19 Jan 2024 12:26:51 GMT
- Title: BadODD: Bangladeshi Autonomous Driving Object Detection Dataset
- Authors: Mirza Nihal Baig, Rony Hajong, Mahdi Murshed Patwary, Mohammad
Shahidur Rahman, Husne Ara Chowdhury
- Abstract summary: We propose a comprehensive dataset for object detection in diverse driving environments across 9 districts in Bangladesh.
The dataset, collected exclusively from smartphone cameras, provides a realistic representation of real-world scenarios.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a comprehensive dataset for object detection in diverse driving
environments across 9 districts in Bangladesh. The dataset, collected
exclusively from smartphone cameras, provides a realistic representation of
real-world scenarios, including day and night conditions. Most existing
datasets lack suitable classes for autonomous navigation on Bangladeshi roads,
making it challenging for researchers to develop models that can handle the
intricacies of road scenarios. To address this issue, we propose a
new set of classes based on characteristics rather than local vehicle names.
The dataset aims to encourage the development of models that can handle the
unique challenges of Bangladeshi road scenarios for the effective deployment of
autonomous vehicles. The dataset contains no images sourced from the internet,
so it reflects the real-world conditions faced by autonomous vehicles. The classification
of vehicles is challenging because of the diverse range of vehicles on
Bangladeshi roads, including those not found elsewhere in the world. The
proposed classification system is scalable and can accommodate future vehicles,
making it a valuable resource for researchers in the autonomous vehicle sector.
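The characteristic-based scheme described above can be sketched as a simple lookup keyed on vehicle attributes rather than local names. The attributes and class labels below are invented for illustration and are not the dataset's actual taxonomy:

```python
# Hypothetical sketch of a characteristic-based class scheme: vehicles are
# grouped by attributes (wheel count, power source) rather than local names,
# so unseen vehicle types still map to a sensible class. The tuples and
# class names here are illustrative only, not BadODD's real labels.
CLASS_BY_CHARACTERISTICS = {
    ("two-wheeler", "motorized"): "motorized-two-wheeler",
    ("two-wheeler", "human-powered"): "human-powered-two-wheeler",
    ("three-wheeler", "motorized"): "motorized-three-wheeler",
    ("three-wheeler", "human-powered"): "human-powered-three-wheeler",
    ("four-wheeler", "motorized"): "light-four-wheeler",
}

def classify(wheels: str, power: str) -> str:
    # Unknown combinations fall back to a generic class, which is what
    # makes an attribute-based scheme scalable to future vehicle types.
    return CLASS_BY_CHARACTERISTICS.get((wheels, power), "other-vehicle")

print(classify("three-wheeler", "human-powered"))  # human-powered-three-wheeler
print(classify("six-wheeler", "motorized"))        # other-vehicle
```

Because classes are derived from observable characteristics, a new vehicle type needs at most one new dictionary entry rather than a redesign of the label set.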
Related papers
- DrivingGen: A Comprehensive Benchmark for Generative Video World Models in Autonomous Driving [49.11389494068169]
We present DrivingGen, the first comprehensive benchmark for generative driving world models.
DrivingGen combines a diverse evaluation dataset curated from both driving datasets and internet-scale video sources.
General models look better but break physics, while driving-specific ones capture motion realistically but lag in visual quality.
arXiv Detail & Related papers (2026-01-04T13:36:21Z)
- Spatial Retrieval Augmented Autonomous Driving [81.39665750557526]
Existing autonomous driving systems rely on onboard sensors for environmental perception.
We propose the spatial retrieval paradigm, introducing offline retrieved geographic images as an additional input.
We will open-source dataset curation code, data, and benchmarks for further study of this new autonomous driving paradigm.
arXiv Detail & Related papers (2025-12-07T14:40:49Z)
- Evaluating YOLO Architectures: Implications for Real-Time Vehicle Detection in Urban Environments of Bangladesh [0.0]
Vehicle detection systems trained on Non-Bangladeshi datasets struggle to accurately identify local vehicle types in Bangladesh's unique road environments.
This study evaluates six YOLO model variants on a custom dataset featuring 29 distinct vehicle classes.
arXiv Detail & Related papers (2025-09-06T09:11:44Z)
- FedRAV: Hierarchically Federated Region-Learning for Traffic Object Classification of Autonomous Vehicles [7.8896851741869085]
We propose a novel hierarchically Federated Region-learning framework of Autonomous Vehicles (FedRAV)
FedRAV adaptively divides a large area containing vehicles into sub-regions based on the defined region-wise distance, and achieves personalized vehicular models and regional models.
Experiment results demonstrate that our framework outperforms the known algorithms, improving accuracy by at least 3.69%.
arXiv Detail & Related papers (2024-11-21T09:45:55Z)
- DiffRoad: Realistic and Diverse Road Scenario Generation for Autonomous Vehicle Testing [12.964224581549281]
DiffRoad is a novel diffusion model designed to produce controllable and high-fidelity 3D road scenarios.
The Road-UNet architecture optimizes the balance between backbone and skip connections for high-realism scenario generation.
Generated scenarios can be automatically converted into the OpenDRIVE format.
arXiv Detail & Related papers (2024-11-14T13:56:02Z)
- Pedestrian motion prediction evaluation for urban autonomous driving [0.0]
We analyze selected publications with open-source implementations to assess the value of traditional motion prediction metrics.
This perspective should be valuable to any autonomous driving or robotics engineer seeking the real-world performance of existing state-of-the-art pedestrian motion prediction methods.
arXiv Detail & Related papers (2024-10-22T10:06:50Z)
- Finetuning YOLOv9 for Vehicle Detection: Deep Learning for Intelligent Transportation Systems in Dhaka, Bangladesh [0.0]
The government of Bangladesh recognizes the integration of ITS to ensure smart mobility as a vital step towards the development plan "Smart Bangladesh Vision 2041".
This paper proposes a fine-tuned object detector, a YOLOv9 model trained on a Bangladesh-based dataset, to detect native vehicles.
Results show that the model achieved a mean Average Precision (mAP) of 0.934 at the Intersection over Union (IoU) threshold of 0.5.
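The mAP figure above counts a prediction as a true positive when its Intersection over Union (IoU) with a ground-truth box reaches the 0.5 threshold. A minimal sketch of that IoU computation (illustrative, not code from the paper):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Overlap rectangle: clamp to zero when the boxes do not intersect.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# At the 0.5 threshold, a detection matching a ground-truth box with
# iou(pred, gt) >= 0.5 counts as a true positive; mAP averages the
# resulting per-class average precision over all classes.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333... (below threshold)
```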
arXiv Detail & Related papers (2024-09-29T02:33:34Z)
- TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction [149.5716746789134]
We show data-driven traffic simulation can be formulated as a world model.
We present TrafficBots, a multi-agent policy built upon motion prediction and end-to-end driving.
Experiments on the open motion dataset show TrafficBots can simulate realistic multi-agent behaviors.
arXiv Detail & Related papers (2023-03-07T18:28:41Z)
- IDD-3D: Indian Driving Dataset for 3D Unstructured Road Scenes [79.18349050238413]
Preparation and training of deployable deep learning architectures require the models to be suited to different traffic scenarios.
An unstructured and complex driving layout found in several developing countries such as India poses a challenge to these models.
We build a new dataset, IDD-3D, which consists of multi-modal data from multiple cameras and LiDAR sensors with 12k annotated driving LiDAR frames.
arXiv Detail & Related papers (2022-10-23T23:03:17Z)
- Data generation using simulation technology to improve perception mechanism of autonomous vehicles [0.0]
We will demonstrate the effectiveness of combining data gathered from the real world with data generated in the simulated world to train perception systems.
We will also propose a multi-level deep learning perception framework that aims to emulate a human learning experience.
arXiv Detail & Related papers (2022-07-01T03:42:33Z)
- CODA: A Real-World Road Corner Case Dataset for Object Detection in Autonomous Driving [117.87070488537334]
We introduce a challenging dataset named CODA that exposes this critical problem of vision-based detectors.
The performance of standard object detectors trained on large-scale autonomous driving datasets significantly drops to no more than 12.8% in mAR.
We experiment with the state-of-the-art open-world object detector and find that it also fails to reliably identify the novel objects in CODA.
arXiv Detail & Related papers (2022-03-15T08:32:56Z)
- VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles [131.2240621036954]
We present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles.
Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras.
We demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full scale autonomous vehicle.
arXiv Detail & Related papers (2021-11-23T18:58:10Z)
- One Million Scenes for Autonomous Driving: ONCE Dataset [91.94189514073354]
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)
- VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.