Scalable and Real-time Multi-Camera Vehicle Detection,
Re-Identification, and Tracking
- URL: http://arxiv.org/abs/2204.07442v1
- Date: Fri, 15 Apr 2022 12:47:01 GMT
- Title: Scalable and Real-time Multi-Camera Vehicle Detection,
Re-Identification, and Tracking
- Authors: Pirazh Khorramshahi, Vineet Shenoy, Michael Pack, Rama Chellappa
- Abstract summary: We propose a real-time city-scale multi-camera vehicle tracking system that handles real-world, low-resolution CCTV instead of idealized and curated video streams.
Our method ranked among the top five performers on the public leaderboard.
- Score: 58.95210121654722
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-camera vehicle tracking is one of the most complicated tasks in
Computer Vision as it involves distinct tasks including Vehicle Detection,
Tracking, and Re-identification. Despite the challenges, multi-camera vehicle
tracking has immense potential in transportation applications including speed,
volume, origin-destination (O-D), and routing data generation. Several recent
works have addressed the multi-camera tracking problem. However, most of the
effort has gone towards improving accuracy on high-quality benchmark datasets
while disregarding lower camera resolutions, compression artifacts, and the
overwhelming amount of computational power and time needed to carry out this
task at the edge, which makes it prohibitive for large-scale and real-time
deployment. Therefore, in this work we shed light on practical issues that
should be addressed for the design of a multi-camera tracking system to provide
actionable and timely insights. Moreover, we propose a real-time city-scale
multi-camera vehicle tracking system that compares favorably to computationally
intensive alternatives and handles real-world, low-resolution CCTV instead of
idealized and curated video streams. To show its effectiveness, in addition to
integration into the Regional Integrated Transportation Information System
(RITIS), we participated in the 2021 NVIDIA AI City multi-camera tracking
challenge, where our method ranked among the top five performers on the public
leaderboard.
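The abstract describes multi-camera vehicle tracking as per-camera detection and tracking followed by re-identification across cameras. As a rough, hypothetical sketch of that cross-camera step only (not the authors' system), the Python snippet below associates tracklets from two cameras by the cosine similarity of their mean re-ID embeddings and a linear-assignment step; the Tracklet class, cross_camera_match function, and similarity threshold are illustrative assumptions.

```python
# Minimal sketch (illustrative only): associate per-camera tracklets across two
# cameras using cosine similarity of their mean re-ID embeddings plus a linear
# assignment. This is an assumption-laden toy, not the paper's implementation.
from dataclasses import dataclass, field

import numpy as np
from scipy.optimize import linear_sum_assignment


@dataclass
class Tracklet:
    camera_id: int
    track_id: int
    embeddings: list = field(default_factory=list)  # per-frame re-ID feature vectors

    def mean_embedding(self) -> np.ndarray:
        e = np.mean(self.embeddings, axis=0)
        return e / (np.linalg.norm(e) + 1e-12)       # unit-normalize for cosine similarity


def cross_camera_match(tracklets_a, tracklets_b, sim_threshold=0.6):
    """Match tracklets between two cameras via cosine similarity, solved as a
    linear assignment problem; pairs below the threshold are discarded."""
    a = np.stack([t.mean_embedding() for t in tracklets_a])
    b = np.stack([t.mean_embedding() for t in tracklets_b])
    sim = a @ b.T                             # cosine similarity matrix (rows are unit norm)
    rows, cols = linear_sum_assignment(-sim)  # negate: the solver minimizes cost
    return [(tracklets_a[r], tracklets_b[c])
            for r, c in zip(rows, cols) if sim[r, c] >= sim_threshold]


# Toy usage: two cameras observe the same two vehicles; embeddings cluster per vehicle.
rng = np.random.default_rng(0)
base = [np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0, 0.0])]
cam0 = [Tracklet(0, i, [base[i] + 0.05 * rng.normal(size=4) for _ in range(3)]) for i in range(2)]
cam1 = [Tracklet(1, i, [base[i] + 0.05 * rng.normal(size=4) for _ in range(3)]) for i in range(2)]
for ta, tb in cross_camera_match(cam0, cam1):
    print(f"camera {ta.camera_id} track {ta.track_id} <-> camera {tb.camera_id} track {tb.track_id}")
```

In a real deployment the embeddings would come from a vehicle re-ID network, and matching would typically also be constrained by camera topology and plausible travel times; those details are beyond this sketch.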
Related papers
- DELTA: Dense Efficient Long-range 3D Tracking for any video [82.26753323263009]
We introduce DELTA, a novel method that efficiently tracks every pixel in 3D space, enabling accurate motion estimation across entire videos.
Our approach leverages a joint global-local attention mechanism for reduced-resolution tracking, followed by a transformer-based upsampler to achieve high-resolution predictions.
Our method provides a robust solution for applications requiring fine-grained, long-term motion tracking in 3D space.
arXiv Detail & Related papers (2024-10-31T17:59:01Z)
- MTMMC: A Large-Scale Real-World Multi-Modal Camera Tracking Benchmark [63.878793340338035]
Multi-target multi-camera tracking is a crucial task that involves identifying and tracking individuals over time using video streams from multiple cameras.
Existing datasets for this task are either synthetically generated or artificially constructed within a controlled camera network setting.
We present MTMMC, a real-world, large-scale dataset that includes long video sequences captured by 16 multi-modal cameras in two different environments.
arXiv Detail & Related papers (2024-03-29T15:08:37Z)
- PNAS-MOT: Multi-Modal Object Tracking with Pareto Neural Architecture Search [64.28335667655129]
Multiple object tracking is a critical task in autonomous driving.
As tracking accuracy improves, neural networks become increasingly complex, posing challenges for their practical application in real driving scenarios due to their high latency.
In this paper, we explore the use of neural architecture search (NAS) methods to find efficient tracking architectures, aiming for low real-time latency while maintaining relatively high accuracy. A toy illustration of this latency-accuracy trade-off appears in the sketch after this list.
arXiv Detail & Related papers (2024-03-23T04:18:49Z)
- Towards Effective Multi-Moving-Camera Tracking: A New Dataset and Lightweight Link Model [4.581852145863394]
Multi-target multi-camera (MTMC) tracking systems are composed of two modules: single-camera tracking (SCT) and inter-camera tracking (ICT).
MTMC tracking is already a very complicated task, and tracking across multiple moving cameras makes it even more challenging.
Linker is proposed to mitigate identity switches by associating two disjoint tracklets of the same target into a complete trajectory within the same camera.
arXiv Detail & Related papers (2023-12-18T09:11:28Z)
- SpikeMOT: Event-based Multi-Object Tracking with Sparse Motion Features [52.213656737672935]
SpikeMOT is an event-based multi-object tracker.
SpikeMOT uses spiking neural networks to extract sparse spatio-temporal features from event streams associated with objects.
arXiv Detail & Related papers (2023-09-29T05:13:43Z)
- The Interstate-24 3D Dataset: a new benchmark for 3D multi-camera vehicle tracking [4.799822253865053]
This work presents a novel video dataset recorded from overlapping highway traffic cameras along an urban interstate, enabling multi-camera 3D object tracking in a traffic monitoring context.
Data is released from 3 scenes containing video from at least 16 cameras each, totaling 57 minutes in length.
877,000 3D bounding boxes and corresponding object tracklets are fully and accurately annotated for each camera field of view and are combined into a spatially and temporally continuous set of vehicle trajectories for each scene.
arXiv Detail & Related papers (2023-08-28T18:43:33Z)
- CXTrack: Improving 3D Point Cloud Tracking with Contextual Information [59.55870742072618]
3D single object tracking plays an essential role in many applications, such as autonomous driving.
We propose CXTrack, a novel transformer-based network for 3D object tracking.
We show that CXTrack achieves state-of-the-art tracking performance while running at 29 FPS.
arXiv Detail & Related papers (2022-11-12T11:29:01Z)
- Synthehicle: Multi-Vehicle Multi-Camera Tracking in Virtual Cities [4.4855664250147465]
We present a massive synthetic dataset for multiple vehicle tracking and segmentation in multiple overlapping and non-overlapping camera views.
The dataset consists of 17 hours of labeled video material, recorded from 340 cameras in 64 diverse day, rain, dawn, and night scenes.
arXiv Detail & Related papers (2022-08-30T11:36:07Z)
- LMGP: Lifted Multicut Meets Geometry Projections for Multi-Camera Multi-Object Tracking [42.87953709286856]
Multi-Camera Multi-Object Tracking is currently drawing attention in the computer vision field due to its superior performance in real-world applications.
We propose a mathematically elegant multi-camera multiple object tracking approach based on a spatial-temporal lifted multicut formulation.
arXiv Detail & Related papers (2021-11-23T14:09:47Z)
- Real-time 3D Deep Multi-Camera Tracking [13.494550690138775]
We propose a novel end-to-end tracking pipeline, Deep Multi-Camera Tracking (DMCT), which achieves reliable real-time multi-camera people tracking.
Our system achieves the state-of-the-art tracking results while maintaining real-time performance.
arXiv Detail & Related papers (2020-03-26T06:08:19Z)
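The PNAS-MOT entry above targets a latency-accuracy trade-off. As a hypothetical illustration only (not that paper's search algorithm), the sketch below filters made-up tracker candidates down to their Pareto front, keeping only those for which no alternative is both at least as fast and at least as accurate with a strict improvement in either.

```python
# Toy Pareto-front filter over hypothetical tracker candidates (names and
# numbers are invented for illustration; this is not the PNAS-MOT method).
from typing import List, NamedTuple


class Candidate(NamedTuple):
    name: str
    latency_ms: float  # lower is better
    accuracy: float    # higher is better (e.g., a MOTA-style score)


def pareto_front(candidates: List[Candidate]) -> List[Candidate]:
    """Return candidates not dominated by any other, i.e. no other candidate is
    at least as fast and at least as accurate with at least one strict gain."""
    front = []
    for c in candidates:
        dominated = any(
            (o.latency_ms <= c.latency_ms and o.accuracy >= c.accuracy)
            and (o.latency_ms < c.latency_ms or o.accuracy > c.accuracy)
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return sorted(front, key=lambda c: c.latency_ms)


models = [
    Candidate("tiny", 12.0, 0.61),
    Candidate("small", 21.0, 0.68),
    Candidate("medium", 35.0, 0.67),  # dominated by "small": slower and less accurate
    Candidate("large", 58.0, 0.74),
]
for c in pareto_front(models):
    print(f"{c.name}: {c.latency_ms} ms, accuracy {c.accuracy}")
```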