Traffic-Aware Multi-Camera Tracking of Vehicles Based on ReID and Camera Link Model
- URL: http://arxiv.org/abs/2008.09785v2
- Date: Sun, 30 Aug 2020 04:47:55 GMT
- Title: Traffic-Aware Multi-Camera Tracking of Vehicles Based on ReID and Camera Link Model
- Authors: Hung-Min Hsu, Yizhou Wang, Jenq-Neng Hwang
- Abstract summary: Multi-target multi-camera tracking (MTMCT) is a crucial technique for smart city applications.
We propose an effective and reliable MTMCT framework for vehicles.
Our proposed MTMCT is evaluated on the CityFlow dataset and achieves a new state-of-the-art performance with IDF1 of 74.93%.
- Score: 43.850588717944916
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-target multi-camera tracking (MTMCT), i.e., tracking multiple targets
across multiple cameras, is a crucial technique for smart city applications. In
this paper, we propose an effective and reliable MTMCT framework for vehicles,
which consists of a traffic-aware single camera tracking (TSCT) algorithm, a
trajectory-based camera link model (CLM) for vehicle re-identification (ReID),
and a hierarchical clustering algorithm to obtain the cross camera vehicle
trajectories. First, the TSCT, which jointly considers vehicle appearance,
geometric features, and some common traffic scenarios, is proposed to track the
vehicles in each camera separately. Second, the trajectory-based CLM is adopted
to model the relationship between each pair of adjacently connected cameras and
to add spatio-temporal constraints for the subsequent vehicle ReID
with temporal attention. Third, the hierarchical clustering algorithm is used
to merge the vehicle trajectories among all the cameras to obtain the final
MTMCT results. Our proposed MTMCT is evaluated on the CityFlow dataset and
achieves a new state-of-the-art performance with IDF1 of 74.93%.
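To make the three-stage design concrete, the sketch below shows one way the pieces could fit together: pairwise ReID distances between single-camera tracklets are gated by a camera-link travel-time window, and the surviving candidates are merged by hierarchical (agglomerative) clustering. The tracklet layout, the toy CLM table, and all thresholds are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of gating ReID distances with a camera link model and merging
# tracklets by hierarchical clustering. All data and thresholds are toy values.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
base = rng.standard_normal(256)  # stand-in ReID feature of one vehicle

# Each single-camera tracklet from the TSCT stage is summarized by an
# appearance embedding plus its camera id and entry/exit times (seconds).
tracklets = [
    {"cam": 1, "feat": base + 0.05 * rng.standard_normal(256), "t_in": 10.0, "t_out": 14.0},
    {"cam": 2, "feat": base + 0.05 * rng.standard_normal(256), "t_in": 20.0, "t_out": 25.0},
    {"cam": 2, "feat": rng.standard_normal(256), "t_in": 90.0, "t_out": 96.0},
]

# Toy camera link model: admissible travel-time window between adjacently
# connected cameras; camera pairs not listed are treated as unreachable.
CLM = {(1, 2): (3.0, 30.0), (2, 1): (3.0, 30.0)}

def pair_distance(a, b, big=1e6):
    """Appearance distance, gated by the CLM spatio-temporal constraint."""
    if a["cam"] == b["cam"]:
        return big                       # already resolved by single-camera tracking
    first, second = (a, b) if a["t_out"] <= b["t_in"] else (b, a)
    window = CLM.get((first["cam"], second["cam"]))
    if window is None:
        return big                       # cameras are not adjacently connected
    gap = second["t_in"] - first["t_out"]
    if not (window[0] <= gap <= window[1]):
        return big                       # violates the expected travel time
    fa = a["feat"] / np.linalg.norm(a["feat"])
    fb = b["feat"] / np.linalg.norm(b["feat"])
    return 1.0 - float(fa @ fb)          # cosine distance of ReID features

n = len(tracklets)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = pair_distance(tracklets[i], tracklets[j])

# Hierarchical (agglomerative) clustering merges tracklets into cross-camera IDs.
labels = fcluster(linkage(squareform(dist), method="average"),
                  t=0.5, criterion="distance")
print(labels)  # tracklets sharing a label form one multi-camera trajectory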
Related papers
- City-Scale Multi-Camera Vehicle Tracking System with Improved Self-Supervised Camera Link Model [0.0]
This article introduces an innovative multi-camera vehicle tracking system that utilizes a self-supervised camera link model.
The proposed method achieves a new state-of-the-art among automatic camera-link based methods in CityFlow V2 benchmarks with 61.07% IDF1 Score.
arXiv Detail & Related papers (2024-05-18T17:28:35Z)
- Multi-Object Tracking with Camera-LiDAR Fusion for Autonomous Driving [0.764971671709743]
The proposed MOT algorithm comprises a three-step association process, an Extended Kalman filter for estimating the motion of each detected dynamic obstacle, and a track management phase.
Unlike most state-of-the-art multi-modal MOT approaches, the proposed algorithm does not rely on maps or knowledge of the ego global pose.
The algorithm is validated both in simulation and with real-world data, with satisfactory results.
arXiv Detail & Related papers (2024-03-06T23:49:16Z)
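For the Camera-LiDAR entry above, the motion-estimation building block can be shown in its simplest form: a linear constant-velocity Kalman filter rather than the paper's Extended Kalman filter. All matrices, noise values, and the toy measurement sequence below are illustrative assumptions, not the paper's configuration.

```python
# Simplified constant-velocity Kalman filter for a tracked 2D position.
import numpy as np

class ConstantVelocityKF:
    """State is [x, y, vx, vy]; only the position is observed."""

    def __init__(self, x0, y0, dt=0.1):
        self.x = np.array([x0, y0, 0.0, 0.0])           # state estimate
        self.P = np.eye(4) * 10.0                        # state covariance
        self.F = np.array([[1, 0, dt, 0],                # constant-velocity motion
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],                 # position-only measurement
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01                        # process noise (assumed)
        self.R = np.eye(2) * 0.5                         # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                                # predicted position

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x           # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)                   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

kf = ConstantVelocityKF(0.0, 0.0)
for z in [(0.1, 0.0), (0.2, 0.1), (0.31, 0.18)]:         # noisy detections
    kf.predict()
    kf.update(z)
print(kf.x)  # estimated position and velocity after three detections
```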
- Multi-target multi-camera vehicle tracking using transformer-based camera link model and spatial-temporal information [29.34298951501007]
Multi-target multi-camera tracking of vehicles, i.e., tracking vehicles across multiple cameras, is a crucial application for the development of smart cities and intelligent traffic systems.
The main challenges of vehicle MTMCT include the intra-class variability of the same vehicle and the inter-class similarity between different vehicles.
We propose a transformer-based camera link model with spatial and temporal filtering to conduct cross camera tracking.
arXiv Detail & Related papers (2023-01-18T22:27:08Z)
- Scalable and Real-time Multi-Camera Vehicle Detection, Re-Identification, and Tracking [58.95210121654722]
We propose a real-time city-scale multi-camera vehicle tracking system that handles real-world, low-resolution CCTV instead of idealized and curated video streams.
Our method is ranked among the top five performers on the public leaderboard.
arXiv Detail & Related papers (2022-04-15T12:47:01Z)
- Know Your Surroundings: Panoramic Multi-Object Tracking by Multimodality Collaboration [56.01625477187448]
We propose a MultiModality PAnoramic multi-object Tracking framework (MMPAT).
It takes both 2D panorama images and 3D point clouds as input and then infers target trajectories using the multimodality data.
We evaluate the proposed method on the JRDB dataset, where the MMPAT achieves the top performance in both the detection and tracking tasks.
arXiv Detail & Related papers (2021-05-31T03:16:38Z)
- Multi-Target Multi-Camera Tracking of Vehicles using Metadata-Aided Re-ID and Trajectory-Based Camera Link Model [32.01329933787149]
We propose a novel framework for multi-target multi-camera tracking of vehicles based on metadata-aided re-identification (MA-ReID) and the trajectory-based camera link model (TCLM).
The proposed method is evaluated on the CityFlow dataset, achieving IDF1 76.77%, which outperforms the state-of-the-art MTMCT methods.
arXiv Detail & Related papers (2021-05-03T23:20:37Z)
- Monocular Quasi-Dense 3D Object Tracking [99.51683944057191]
A reliable and accurate 3D tracking framework is essential for predicting future locations of surrounding objects and planning the observer's actions in numerous applications such as autonomous driving.
We propose a framework that can effectively associate moving objects over time and estimate their full 3D bounding box information from a sequence of 2D images captured on a moving platform.
arXiv Detail & Related papers (2021-03-12T15:30:02Z)
- Online Clustering-based Multi-Camera Vehicle Tracking in Scenarios with overlapping FOVs [2.6365690297272617]
Multi-Target Multi-Camera (MTMC) vehicle tracking is an essential task of visual traffic monitoring.
We present a new low-latency online approach for MTMC tracking in scenarios with partially overlapping fields of view.
arXiv Detail & Related papers (2021-02-08T09:55:55Z)
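For the online-clustering entry above, one low-latency association step can be sketched as a small assignment problem between newly closed tracklets and the currently active global identities. The feature shapes, cost cut-off, and data layout are illustrative assumptions, not the method described in the paper.

```python
# Minimal sketch of one online association step via optimal assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_online(global_feats, new_feats, max_cost=0.5):
    """Return {new_index: global_index} for matches cheaper than max_cost."""
    g = global_feats / np.linalg.norm(global_feats, axis=1, keepdims=True)
    m = new_feats / np.linalg.norm(new_feats, axis=1, keepdims=True)
    cost = 1.0 - m @ g.T                       # cosine distance, new x global
    rows, cols = linear_sum_assignment(cost)   # one-to-one optimal assignment
    return {r: c for r, c in zip(rows, cols) if cost[r, c] <= max_cost}

# Two active global identities and three tracklets arriving from another camera;
# the third tracklet has no counterpart and is left unmatched.
rng = np.random.default_rng(0)
global_feats = rng.standard_normal((2, 128))
new_feats = np.vstack([global_feats + 0.01 * rng.standard_normal((2, 128)),
                       rng.standard_normal((1, 128))])
matches = associate_online(global_feats, new_feats)
print(matches)  # unmatched new tracklets would spawn new global identities
```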
- Towards Autonomous Driving: a Multi-Modal 360$^{\circ}$ Perception Proposal [87.11988786121447]
This paper presents a framework for 3D object detection and tracking for autonomous vehicles.
The solution, based on a novel sensor fusion configuration, provides accurate and reliable road environment detection.
A variety of tests of the system, deployed in an autonomous vehicle, have successfully assessed the suitability of the proposed perception stack.
arXiv Detail & Related papers (2020-08-21T20:36:21Z)
- Dense Scene Multiple Object Tracking with Box-Plane Matching [73.54369833671772]
Multiple Object Tracking (MOT) is an important task in computer vision.
We propose the Box-Plane Matching (BPM) method to improve the MOT performance in dense scenes.
With the effectiveness of the three modules, our team achieves the 1st place on the Track-1 leaderboard in the ACM MM Grand Challenge HiEve 2020.
arXiv Detail & Related papers (2020-07-30T16:39:22Z)