Online Clustering-based Multi-Camera Vehicle Tracking in Scenarios with
overlapping FOVs
- URL: http://arxiv.org/abs/2102.04091v1
- Date: Mon, 8 Feb 2021 09:55:55 GMT
- Title: Online Clustering-based Multi-Camera Vehicle Tracking in Scenarios with
overlapping FOVs
- Authors: Elena Luna, Juan C. SanMiguel, Jose M. Martínez, and Marcos Escudero-Viñolo
- Abstract summary: Multi-Target Multi-Camera (MTMC) vehicle tracking is an essential task of visual traffic monitoring.
We present a new low-latency online approach for MTMC tracking in scenarios with partially overlapping fields of view.
- Score: 2.6365690297272617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-Target Multi-Camera (MTMC) vehicle tracking is an essential task of
visual traffic monitoring, one of the main research fields of Intelligent
Transportation Systems. Several offline approaches have been proposed to
address this task; however, they are not compatible with real-world
applications due to their high latency and post-processing requirements. In
this paper, we present a new low-latency online approach for MTMC tracking in
scenarios with partially overlapping fields of view (FOVs), such as road
intersections. Firstly, the proposed approach detects vehicles at each camera.
Then, the detections are merged between cameras by applying cross-camera
clustering based on appearance and location. Lastly, the clusters containing
different detections of the same vehicle are temporally associated to compute
the tracks on a frame-by-frame basis. The experiments show promising
low-latency results while addressing real-world challenges such as the a priori
unknown and time-varying number of targets and the continuous estimation of
their states, without performing any post-processing of the trajectories.
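As a rough illustration of the three steps outlined in the abstract (per-camera detection, cross-camera clustering on appearance and location, and frame-by-frame temporal association), a minimal sketch in Python follows. It is not the authors' implementation; the data structures, distance weights, and thresholds (Detection, cluster_detections, associate, app_w, loc_w, thr, gate) are assumptions made for illustration only.

# Hypothetical sketch of online cross-camera clustering + temporal association.
import numpy as np
from dataclasses import dataclass
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.optimize import linear_sum_assignment

@dataclass
class Detection:
    camera_id: int
    embedding: np.ndarray   # L2-normalised appearance descriptor
    location: np.ndarray    # (x, y) on a common ground plane (e.g. via homography)

def cluster_detections(dets, app_w=0.7, loc_w=0.3, loc_scale=5.0, thr=0.5):
    """Group detections of the same vehicle seen by different cameras."""
    if len(dets) < 2:
        return [list(range(len(dets)))]
    n = len(dets)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d_app = 1.0 - float(dets[i].embedding @ dets[j].embedding)  # cosine distance
            d_loc = np.linalg.norm(dets[i].location - dets[j].location) / loc_scale
            # Detections from the same camera cannot belong to the same cluster.
            d = np.inf if dets[i].camera_id == dets[j].camera_id else app_w * d_app + loc_w * d_loc
            dist[i, j] = dist[j, i] = min(d, 1e6)  # keep the matrix finite for linkage
    cond = dist[np.triu_indices(n, k=1)]          # condensed upper-triangle distances
    labels = fcluster(linkage(cond, method="average"), t=thr, criterion="distance")
    return [np.where(labels == c)[0].tolist() for c in np.unique(labels)]

def associate(track_locs, cluster_locs, gate=3.0):
    """Hungarian matching of existing tracks to the current frame's clusters."""
    if not track_locs or not cluster_locs:
        return []
    cost = np.linalg.norm(
        np.array(track_locs)[:, None, :] - np.array(cluster_locs)[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]

In an online loop, cluster_detections would run once per frame on the pooled detections from all cameras, and unmatched clusters from associate would start new tracks while unmatched tracks are eventually terminated, consistent with the a priori unknown and time-varying number of targets mentioned above.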
Related papers
- MCTR: Multi Camera Tracking Transformer [45.66952089591361]
Multi-Camera Tracking tRansformer (MCTR) is a novel end-to-end approach tailored for multi-object detection and tracking across multiple cameras.
MCTR leverages end-to-end detectors like DEtector TRansformer (DETR) to produce detections and detection embeddings independently for each camera view.
The framework maintains a set of track embeddings that encapsulate global information about the tracked objects, and updates them at every frame by integrating local information from the view-specific detection embeddings.
arXiv Detail & Related papers (2024-08-23T17:37:03Z)
- XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis [84.23233209017192]
This paper presents a novel driving view synthesis dataset and benchmark specifically designed for autonomous driving simulations.
The dataset is unique as it includes testing images captured by deviating from the training trajectory by 1-4 meters.
We establish the first realistic benchmark for evaluating existing NVS approaches under front-only and multi-camera settings.
arXiv Detail & Related papers (2024-06-26T14:00:21Z)
- DIVOTrack: A Novel Dataset and Baseline Method for Cross-View
Multi-Object Tracking in DIVerse Open Scenes [74.64897845999677]
We introduce a new cross-view multi-object tracking dataset for DIVerse Open scenes with densely tracked pedestrians.
Our DIVOTrack has fifteen distinct scenarios and 953 cross-view tracks, surpassing all cross-view multi-object tracking datasets currently available.
Furthermore, we provide a novel baseline cross-view tracking method with a unified joint detection and cross-view tracking framework named CrossMOT.
arXiv Detail & Related papers (2023-02-15T14:10:42Z)
- Real-Time Accident Detection in Traffic Surveillance Using Deep Learning [0.8808993671472349]
This paper presents a new efficient framework for accident detection at intersections for traffic surveillance applications.
The proposed framework consists of three hierarchical steps, including efficient and accurate object detection based on the state-of-the-art YOLOv4 method.
The robustness of the proposed framework is evaluated using video sequences collected from YouTube with diverse illumination conditions.
arXiv Detail & Related papers (2022-08-12T19:07:20Z)
- Federated Deep Learning Meets Autonomous Vehicle Perception: Design and
Verification [168.67190934250868]
Federated learning-empowered connected autonomous vehicle (FLCAV) has been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z)
- Scalable and Real-time Multi-Camera Vehicle Detection,
Re-Identification, and Tracking [58.95210121654722]
We propose a real-time city-scale multi-camera vehicle tracking system that handles real-world, low-resolution CCTV instead of idealized and curated video streams.
Our method is ranked among the top five performers on the public leaderboard.
arXiv Detail & Related papers (2022-04-15T12:47:01Z)
- Multi-Target Multi-Camera Tracking of Vehicles using Metadata-Aided
Re-ID and Trajectory-Based Camera Link Model [32.01329933787149]
We propose a novel framework for multi-target multi-camera tracking of vehicles based on metadata-aided re-identification (MA-ReID) and the trajectory-based camera link model (TCLM).
The proposed method is evaluated on the CityFlow dataset, achieving IDF1 76.77%, which outperforms the state-of-the-art MTMCT methods.
arXiv Detail & Related papers (2021-05-03T23:20:37Z)
- Traffic-Aware Multi-Camera Tracking of Vehicles Based on ReID and Camera
Link Model [43.850588717944916]
Multi-target multi-camera tracking (MTMCT) is a crucial technique for smart city applications.
We propose an effective and reliable MTMCT framework for vehicles.
Our proposed MTMCT is evaluated on the CityFlow dataset and achieves a new state-of-the-art performance with IDF1 of 74.93%.
arXiv Detail & Related papers (2020-08-22T08:54:47Z)
- Towards Autonomous Driving: a Multi-Modal 360$^{\circ}$ Perception
Proposal [87.11988786121447]
This paper presents a framework for 3D object detection and tracking for autonomous vehicles.
The solution, based on a novel sensor fusion configuration, provides accurate and reliable road environment detection.
A variety of tests of the system, deployed in an autonomous vehicle, have successfully assessed the suitability of the proposed perception stack.
arXiv Detail & Related papers (2020-08-21T20:36:21Z)
- Tracking Passengers and Baggage Items using Multi-camera Systems at
Security Checkpoints [0.7424262881242935]
We introduce a novel tracking-by-detection framework to track multiple objects in overhead camera videos for airport checkpoint security scenarios.
Our approach improves object detection by employing a test-time data augmentation procedure.
An evaluation of detection, tracking, and association performances on videos obtained from multiple overhead cameras in a realistic airport checkpoint environment demonstrates the effectiveness of the proposed approach.
arXiv Detail & Related papers (2020-07-15T18:09:31Z)
- Tracking Road Users using Constraint Programming [79.32806233778511]
We present a constraint programming (CP) approach for the data association phase found in the tracking-by-detection paradigm of the multiple object tracking (MOT) problem.
Our proposed method was tested on a motorized vehicle tracking dataset and produces results that outperform the top methods of the UA-DETRAC benchmark.
arXiv Detail & Related papers (2020-03-10T00:04:32Z)