Object Detection and Tracking Algorithms for Vehicle Counting: A
Comparative Analysis
- URL: http://arxiv.org/abs/2007.16198v1
- Date: Fri, 31 Jul 2020 17:49:27 GMT
- Title: Object Detection and Tracking Algorithms for Vehicle Counting: A
Comparative Analysis
- Authors: Vishal Mandal and Yaw Adu-Gyamfi
- Abstract summary: The authors deploy several state-of-the-art object detection and tracking algorithms to detect and track different classes of vehicles.
Model combinations are validated and compared against the manually counted ground truths of over 9 hours' traffic video data.
Results demonstrate that the combination of CenterNet and Deep SORT, Detectron2 and Deep SORT, and YOLOv4 and Deep SORT produced the best overall counting percentage for all vehicles.
- Score: 3.093890460224435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid advancement in the fields of deep learning and high performance
computing has greatly expanded the scope of video-based vehicle counting
systems. In this paper, the authors deploy several state-of-the-art object
detection and tracking algorithms to detect and track different classes of
vehicles in their regions of interest (ROI). The goal of correctly detecting
and tracking vehicles in their ROI is to obtain an accurate vehicle count.
Multiple combinations of object detection models coupled with different
tracking systems are applied to assess the best vehicle counting framework. The
models address challenges associated with different weather conditions,
occlusion and low-light settings, and efficiently extract vehicle information
and trajectories through computationally rich training and feedback cycles.
The automatic vehicle counts resulting from all the model combinations are
validated and compared against the manually counted ground truths of over 9
hours' traffic video data obtained from the Louisiana Department of
Transportation and Development. Experimental results demonstrate that the
combination of CenterNet and Deep SORT, Detectron2 and Deep SORT, and YOLOv4
and Deep SORT produced the best overall counting percentage for all vehicles.
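To make the counting step concrete, the sketch below shows one common way to turn detector-plus-tracker output into per-class vehicle counts: each confirmed track is counted once when its centre crosses a virtual count line placed inside the ROI. This is a generic, minimal illustration rather than the authors' implementation; the per-frame track format, the bookkeeping dictionaries and the COUNT_LINE_Y coordinate are all assumptions.

```python
# Minimal sketch: count each tracked vehicle once when its centre crosses a
# virtual count line inside the ROI. The per-frame tracker output format is
# hypothetical; any detector + tracker combination (e.g. YOLOv4 + Deep SORT)
# that yields (track_id, bounding box, class) tuples per frame would plug in.
from collections import defaultdict

COUNT_LINE_Y = 400          # assumed y-coordinate of the virtual count line (pixels)

def update_counts(frame_tracks, last_y, counted, counts):
    """frame_tracks: iterable of (track_id, (x1, y1, x2, y2), cls) for one frame."""
    for track_id, (x1, y1, x2, y2), cls in frame_tracks:
        cy = (y1 + y2) / 2.0                      # vertical centre of the box
        prev = last_y.get(track_id)
        # Count the first time the centre moves from above the line to below it.
        if prev is not None and prev < COUNT_LINE_Y <= cy and track_id not in counted:
            counts[cls] += 1
            counted.add(track_id)
        last_y[track_id] = cy

# Example usage with two fabricated frames of tracker output.
last_y, counted, counts = {}, set(), defaultdict(int)
update_counts([(7, (100, 380, 160, 395), "car")], last_y, counted, counts)
update_counts([(7, (102, 395, 162, 410), "car")], last_y, counted, counts)
print(dict(counts))  # {'car': 1}
```

Counts accumulated this way can then be compared against manually counted ground truths, which mirrors how the automatic counts in the paper are validated.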
Related papers
- Track Anything Rapter(TAR) [0.0]
Track Anything Rapter (TAR) is designed to detect, segment, and track objects of interest based on user-provided multimodal queries.
TAR utilizes cutting-edge pre-trained models like DINO, CLIP, and SAM to estimate the relative pose of the queried object.
We showcase how the integration of these foundational models with a custom high-level control algorithm results in a highly stable and precise tracking system.
arXiv Detail & Related papers (2024-05-19T19:51:41Z)
- Scrutinizing Data from Sky: An Examination of Its Veracity in Area Based Traffic Contexts [4.099117128714005]
The tool is widely used in developed countries where traffic is homogeneous and follows lane-based movement.
The validation is carried out using several measures, including Classified Volume Counts (CVC) and Space Mean Speeds (SMS) of individual vehicle classes; a minimal SMS computation is sketched after this entry.
The results are fairly accurate for data captured from a bird's-eye view, which yields the lowest errors.
arXiv Detail & Related papers (2024-04-26T07:40:37Z)
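Because the entry above validates against Space Mean Speeds (SMS), a short reminder of the usual computation may help: over a fixed road segment, SMS is the harmonic mean of the individual vehicle speeds (equivalently, segment length divided by the mean travel time). The helper below is a generic illustration, not code from that paper.

```python
# Space Mean Speed (SMS) as the harmonic mean of individual vehicle speeds.
# Equivalent to segment length divided by the mean travel time over the segment.
def space_mean_speed(speeds_kmh):
    """speeds_kmh: per-vehicle speeds over the same segment, in km/h."""
    if not speeds_kmh:
        raise ValueError("need at least one observation")
    return len(speeds_kmh) / sum(1.0 / v for v in speeds_kmh)

print(space_mean_speed([60.0, 40.0]))  # 48.0 km/h (harmonic mean), vs. 50.0 for the time mean speed
```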
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
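The "spatially quantized historical features" in the entry above suggest aggregating information from past traversals onto a fixed spatial grid. The sketch below is one generic reading of that idea: past LiDAR point positions are binned into a bird's-eye-view histogram that could accompany a detector's input features. The grid extent, cell size and use of raw point counts are assumptions, not the paper's formulation.

```python
import numpy as np

# Generic sketch: quantize (x, y) positions of points from past traversals into
# a fixed bird's-eye-view grid, producing one coarse historical feature per cell.
def bev_histogram(points_xy, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0), cell=0.5):
    """points_xy: (N, 2) array of x, y coordinates in metres (assumed ego frame)."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    ix = ((points_xy[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points_xy[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid = np.zeros((nx, ny), dtype=np.float32)
    np.add.at(grid, (ix[keep], iy[keep]), 1.0)     # accumulate point counts per cell
    return grid / max(grid.max(), 1.0)             # normalise to [0, 1]

past_points = np.random.default_rng(0).uniform(-60, 60, size=(10000, 2))
features = bev_histogram(past_points)
print(features.shape)  # (200, 200)
```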
- Robust and Fast Vehicle Detection using Augmented Confidence Map [10.261351772602543]
We introduce the concept of augmentation, which highlights the region of interest containing the vehicles.
The output of MR-MSER is supplied to a fast CNN to generate a confidence map.
Unlike existing models that implement complicated models for vehicle detection, we explore the combination of a rough set and fuzzy-based models.
arXiv Detail & Related papers (2023-04-27T18:41:16Z)
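As a rough stand-in for the MR-MSER stage mentioned above, the sketch below extracts plain single-resolution MSER regions with OpenCV and accumulates them into a normalised confidence map over the image; the multi-resolution variant, the fast CNN, and the rough-set/fuzzy components are not reproduced here, and the input filename is hypothetical.

```python
import cv2
import numpy as np

# Rough stand-in for the MR-MSER stage: single-resolution MSER regions are
# accumulated into a [0, 1] confidence map highlighting candidate regions.
def mser_confidence_map(gray):
    """gray: single-channel uint8 image."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    conf = np.zeros(gray.shape, dtype=np.float32)
    for pts in regions:                      # pts: (M, 2) array of (x, y) pixel coords
        conf[pts[:, 1], pts[:, 0]] += 1.0    # vote for every pixel in the region
    return conf / max(conf.max(), 1.0)

frame = cv2.imread("traffic_frame.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame
if frame is not None:
    heat = mser_confidence_map(frame)
    print(heat.shape, float(heat.max()))
```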
- Unsupervised Driving Event Discovery Based on Vehicle CAN-data [62.997667081978825]
This work presents a simultaneous clustering and segmentation approach for vehicle CAN-data that identifies common driving events in an unsupervised manner.
We evaluate our approach with a dataset of real Tesla Model 3 vehicle CAN-data and a two-hour driving session that we annotated with different driving events.
arXiv Detail & Related papers (2023-01-12T13:10:47Z)
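The paper above clusters and segments jointly; as a much simplified, sequential stand-in, the sketch below windows a single CAN speed signal, clusters per-window statistics with k-means, and merges runs of identical labels into candidate driving events. The signal, window length and cluster count are illustrative choices only.

```python
import numpy as np
from sklearn.cluster import KMeans

# Simplified, sequential stand-in for joint clustering + segmentation of CAN data:
# window one signal, cluster per-window features, and merge consecutive windows
# sharing a cluster label into candidate "driving events".
def discover_events(signal, win=50, n_clusters=3, seed=0):
    n = len(signal) // win
    windows = signal[: n * win].reshape(n, win)
    feats = np.stack([windows.mean(axis=1), windows.std(axis=1)], axis=1)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(feats)
    events, start = [], 0
    for i in range(1, n + 1):
        if i == n or labels[i] != labels[start]:
            events.append((start * win, i * win, int(labels[start])))  # (begin, end, cluster)
            start = i
    return events

speed = np.concatenate([np.zeros(200), np.linspace(0, 30, 200), np.full(200, 30.0)])
print(discover_events(speed))
```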
- Real-Time Accident Detection in Traffic Surveillance Using Deep Learning [0.8808993671472349]
This paper presents a new efficient framework for accident detection at intersections for traffic surveillance applications.
The proposed framework consists of three hierarchical steps, including efficient and accurate object detection based on the state-of-the-art YOLOv4 method.
The robustness of the proposed framework is evaluated using video sequences collected from YouTube with diverse illumination conditions.
arXiv Detail & Related papers (2022-08-12T19:07:20Z)
- Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
arXiv Detail & Related papers (2021-09-16T13:10:27Z)
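For the two-step approach summarised above (off-the-shelf tracker in, velocity out), the sketch below shows the kind of small regression network the second step could use: it maps a short history of normalised bounding boxes for one tracked vehicle to a scalar velocity. The architecture, input encoding and sizes are assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

# Small regressor: a short history of normalised bounding boxes for one tracked
# vehicle (T boxes x 4 coordinates, flattened) -> scalar velocity.
class BoxVelocityRegressor(nn.Module):
    def __init__(self, history_len=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(history_len * 4, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, boxes):                 # boxes: (batch, history_len, 4)
        return self.net(boxes.flatten(1)).squeeze(-1)

model = BoxVelocityRegressor()
boxes = torch.rand(2, 8, 4)                   # two fabricated track snippets
print(model(boxes).shape)                     # torch.Size([2])
```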
- CFTrack: Center-based Radar and Camera Fusion for 3D Multi-Object Tracking [9.62721286522053]
We propose an end-to-end network for joint object detection and tracking based on radar and camera sensor fusion.
Our proposed method uses a center-based radar-camera fusion algorithm for object detection and utilizes a greedy algorithm for object association.
We evaluate our method on the challenging nuScenes dataset, where it achieves 20.0 AMOTA and outperforms all vision-based 3D tracking methods in the benchmark.
arXiv Detail & Related papers (2021-07-11T23:56:53Z)
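The CFTrack summary above mentions a greedy algorithm for object association; a generic version of that idea is sketched below, repeatedly matching the closest remaining (track, detection) pair by centre distance until a gating threshold is exceeded. This is the textbook greedy matcher, not CFTrack's exact procedure, and the distance threshold is arbitrary.

```python
import numpy as np

# Generic greedy association by object-centre distance: repeatedly take the
# closest remaining (track, detection) pair until the gating threshold is hit.
def greedy_associate(track_centers, det_centers, max_dist=2.0):
    """Inputs: (N, 2) and (M, 2) arrays of x, y centres. Returns (track_idx, det_idx) pairs."""
    if len(track_centers) == 0 or len(det_centers) == 0:
        return []
    dists = np.linalg.norm(track_centers[:, None, :] - det_centers[None, :, :], axis=-1)
    matches = []
    while True:
        t, d = np.unravel_index(np.argmin(dists), dists.shape)
        if dists[t, d] > max_dist:
            break
        matches.append((int(t), int(d)))
        dists[t, :] = np.inf                  # remove this track from further matching
        dists[:, d] = np.inf                  # remove this detection too
        if np.isinf(dists).all():
            break
    return matches

tracks = np.array([[0.0, 0.0], [10.0, 10.0]])
dets = np.array([[0.5, 0.2], [30.0, 30.0]])
print(greedy_associate(tracks, dets))         # [(0, 0)]
```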
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- SoDA: Multi-Object Tracking with Soft Data Association [75.39833486073597]
Multi-object tracking (MOT) is a prerequisite for a safe deployment of self-driving cars.
We propose a novel approach to MOT that uses attention to compute track embeddings that encode dependencies between observed objects.
arXiv Detail & Related papers (2020-08-18T03:40:25Z)
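The SoDA summary above computes track embeddings with attention to encode dependencies between observed objects. The sketch below is a bare-bones, single-head self-attention over per-object feature vectors, shown only to illustrate the mechanism; the weights are random and all shapes are arbitrary.

```python
import numpy as np

# Bare-bones single-head self-attention over per-object features: each object's
# output embedding is an attention-weighted mixture of every object's values,
# which is one way to encode dependencies between observed objects.
def self_attention(features, wq, wk, wv):
    """features: (num_objects, d); wq/wk/wv: (d, d) projection matrices."""
    q, k, v = features @ wq, features @ wk, features @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v                                        # (num_objects, d)

rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 16))                           # 5 detected objects, 16-dim features
wq, wk, wv = (rng.standard_normal((16, 16)) for _ in range(3))
print(self_attention(feats, wq, wk, wv).shape)                 # (5, 16)
```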
- VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z)
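The two-stage progressive approach in the VehicleNet entry reads as: first train an identity classifier on the large merged dataset, then swap the classification head and fine-tune the same backbone on the target dataset. The sketch below captures only that skeleton with a standard torchvision backbone; the backbone choice, identity counts and omitted training loops are placeholders, not the paper's setup.

```python
import torch.nn as nn
from torchvision import models

# Skeleton of a two-stage progressive training scheme: stage 1 trains an identity
# classifier on a large merged dataset, stage 2 swaps the head and fine-tunes on
# the (smaller) target dataset. Dataset loaders and the train loop are omitted.
def build_model(num_ids):
    model = models.resnet50(weights=None)                # generic backbone, not the paper's exact one
    model.fc = nn.Linear(model.fc.in_features, num_ids)  # identity classification head
    return model

# Stage 1: pretend the merged dataset has 30,000 vehicle identities (placeholder number).
model = build_model(num_ids=30_000)
# ... train on the merged dataset here ...

# Stage 2: keep the backbone weights, replace the head for the target dataset's identities.
model.fc = nn.Linear(model.fc.in_features, 333)          # placeholder identity count
# ... fine-tune on the target dataset here, typically with a lower learning rate ...
print(model.fc)
```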