VariabilityTrack: Multi-Object Tracking with Variable Speed Object Movement
- URL: http://arxiv.org/abs/2203.06424v3
- Date: Mon, 1 Jan 2024 08:50:45 GMT
- Title: VariabilityTrack: Multi-Object Tracking with Variable Speed Object Movement
- Authors: Run Luo, JinLin Wei, and Qiao Lin
- Abstract summary: Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects in videos.
We propose a variable speed Kalman filter algorithm based on environmental feedback and improve the matching process.
- Score: 1.6385815610837167
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects in videos. Most methods can be roughly classified into the tracking-by-detection and joint-detection-association paradigms. Although the latter has elicited more attention and demonstrates performance comparable to the former, we claim that the tracking-by-detection paradigm is still the optimal solution in terms of tracking accuracy; for example, ByteTrack achieves 80.3 MOTA, 77.3 IDF1, and 63.1 HOTA on the MOT17 test set at 30 FPS on a single V100 GPU. However, under complex perspectives such as vehicle and UAV acceleration, the performance of such a tracker built on a uniform-motion Kalman filter degrades greatly, resulting in tracking loss. In this paper, we propose a variable speed Kalman filter algorithm based on environmental feedback and improve the matching process, which greatly improves tracking in complex variable speed scenes while maintaining high accuracy in relatively static scenes. Eventually, higher MOTA and IDF1 results than ByteTrack's can be achieved on the MOT17 test set.
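The abstract gives the idea but not the update equations. As a rough illustration, one can run a standard constant-velocity Kalman filter and let feedback from the matching residual inflate the process noise whenever the motion model is being violated, e.g. during acceleration. The single-coordinate Python sketch below does that; the feedback rule and all constants are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an "adaptive speed" Kalman filter for one coordinate.
# The feedback rule (scaling process noise by the normalized innovation)
# is a hypothetical stand-in for VariabilityTrack's environmental feedback;
# the actual update is not specified in the abstract.
import numpy as np

class AdaptiveKalman1D:
    def __init__(self, pos, dt=1.0):
        self.x = np.array([pos, 0.0])                # state: [position, velocity]
        self.P = np.eye(2)                           # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
        self.H = np.array([[1.0, 0.0]])              # we observe position only
        self.R = np.array([[1.0]])                   # measurement noise
        self.q = 0.01                                # base process-noise scale

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.q * np.eye(2)
        return self.x[0]

    def update(self, z):
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        # Feedback: a large normalized innovation suggests the object is
        # accelerating, so inflate process noise to let velocity adapt.
        nis = float(y.T @ np.linalg.inv(S) @ y)
        self.q = 0.01 * (1.0 + min(nis, 10.0))
```

A full tracker would run one such filter per box coordinate (or one joint state) inside the association loop.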
Related papers
- Temporal Correlation Meets Embedding: Towards a 2nd Generation of JDE-based Real-Time Multi-Object Tracking [52.04679257903805]
Joint Detection and Embedding (JDE) trackers have demonstrated excellent performance in Multi-Object Tracking (MOT) tasks.
Our tracker, named TCBTrack, achieves state-of-the-art performance on multiple public benchmarks.
arXiv Detail & Related papers (2024-07-19T07:48:45Z)
- Exploring Dynamic Transformer for Efficient Object Tracking [58.120191254379854]
We propose DyTrack, a dynamic transformer framework for efficient tracking.
DyTrack automatically learns to configure proper reasoning routes for various inputs, gaining better utilization of the available computational budget.
Experiments on multiple benchmarks demonstrate that DyTrack achieves promising speed-precision trade-offs with only a single model.
arXiv Detail & Related papers (2024-03-26T12:31:58Z)
- Collaborative Tracking Learning for Frame-Rate-Insensitive Multi-Object Tracking [3.781471919731034]
Multi-object tracking (MOT) at low frame rates can reduce computational, storage and power overhead to better meet the constraints of edge devices.
We propose to explore collaborative tracking learning (ColTrack) for frame-rate-insensitive MOT in a query-based end-to-end manner.
arXiv Detail & Related papers (2023-08-11T02:25:58Z)
- Iterative Scale-Up ExpansionIoU and Deep Features Association for Multi-Object Tracking in Sports [26.33239898091364]
We propose a novel online and robust multi-object tracking approach named deep ExpansionIoU (Deep-EIoU) for sports scenarios.
Unlike conventional methods, we abandon the use of the Kalman filter and leverage the iterative scale-up ExpansionIoU and deep features for robust tracking in sports scenarios.
Our proposed method demonstrates remarkable effectiveness in tracking objects with irregular motion, achieving a score of 77.2% on the SportsMOT dataset and 85.4% on the SoccerNet-Tracking dataset.
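The core trick is simple enough to sketch: dilate both the track box and the detection box by a fixed ratio before computing IoU, so fast-moving athletes whose boxes no longer overlap can still be associated without a motion model. The expansion ratio below is an illustrative value, not the paper's tuned setting.

```python
# Sketch of ExpansionIoU: both boxes are dilated before the overlap is
# computed. Boxes are (x1, y1, x2, y2); ratio=0.7 is illustrative only.
def expand(box, ratio):
    x1, y1, x2, y2 = box
    w, h = (x2 - x1) * ratio, (y2 - y1) * ratio
    return (x1 - w, y1 - h, x2 + w, y2 + h)

def iou(a, b):
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2]-a[0]) * (a[3]-a[1]) + (b[2]-b[0]) * (b[3]-b[1]) - inter
    return inter / (union + 1e-9)

def expansion_iou(track_box, det_box, ratio=0.7):
    # Dilated boxes overlap even after large inter-frame displacements.
    return iou(expand(track_box, ratio), expand(det_box, ratio))
```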
arXiv Detail & Related papers (2023-06-22T17:47:08Z)
- SCTracker: Multi-object tracking with shape and confidence constraints [11.210661553388615]
This paper proposes a multi-object tracker based on shape constraints and detection confidence, named SCTracker.
An Intersection over Union (IoU) distance with shape constraints is applied to calculate the cost matrix between tracks and detections.
The Kalman Filter based on the detection confidence is used to update the motion state to improve the tracking performance when the detection has low confidence.
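The summary names the two ingredients but not their formulas, so the sketch below makes two illustrative choices: an exponential shape penalty applied to the IoU term of the cost matrix, and measurement noise inflated for low-confidence detections so that weak boxes correct the Kalman state less.

```python
# Illustrative SCTracker-style costs; the exact penalty and weighting
# used in the paper are not given in the summary above.
import math
import numpy as np

def shape_constrained_cost(t, d):
    # Boxes as (x1, y1, x2, y2). Plain IoU first.
    iw = max(0.0, min(t[2], d[2]) - max(t[0], d[0]))
    ih = max(0.0, min(t[3], d[3]) - max(t[1], d[1]))
    inter = iw * ih
    union = (t[2]-t[0]) * (t[3]-t[1]) + (d[2]-d[0]) * (d[3]-d[1]) - inter
    iou = inter / (union + 1e-9)
    # Shape penalty: down-weight pairs whose widths/heights disagree,
    # even when their IoU is high (hypothetical exponential form).
    tw, th = t[2] - t[0], t[3] - t[1]
    dw, dh = d[2] - d[0], d[3] - d[1]
    penalty = math.exp(-(abs(tw - dw) / max(tw, dw) + abs(th - dh) / max(th, dh)))
    return 1.0 - iou * penalty

def confidence_weighted_R(base_R, conf):
    # Low-confidence detections get inflated measurement noise, so the
    # Kalman update trusts them less (an NSA-Kalman-style choice).
    return np.asarray(base_R) * (1.0 - conf + 1e-2)
```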
arXiv Detail & Related papers (2023-05-16T15:18:42Z)
- ByteTrackV2: 2D and 3D Multi-Object Tracking by Associating Every Detection Box [81.45219802386444]
Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects across video frames.
We propose a hierarchical data association strategy to mine the true objects in low-score detection boxes.
In 3D scenarios, it is much easier for the tracker to predict object velocities in world coordinates.
arXiv Detail & Related papers (2023-03-27T15:35:21Z)
- ByteTrack: Multi-Object Tracking by Associating Every Detection Box [51.93588012109943]
Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects in videos.
Most methods obtain identities by associating detection boxes whose scores are higher than a threshold.
We present a simple, effective and generic association method, called BYTE, tracking BY associaTing every detection box instead of only the high score ones.
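The association logic itself fits in a few lines. A minimal sketch, assuming axis-aligned boxes and SciPy's Hungarian solver; the 0.6 score threshold and 0.3 IoU gate are illustrative defaults:

```python
# Minimal sketch of BYTE-style two-round association. The real tracker
# also handles track births, deaths, and Kalman-predicted track boxes.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2]-a[0]) * (a[3]-a[1]) + (b[2]-b[0]) * (b[3]-b[1]) - inter
    return inter / (union + 1e-9)

def hungarian_match(tracks, dets, iou_min=0.3):
    # Returns matched (track_idx, det_idx) pairs and unmatched track indices.
    if not tracks or not dets:
        return [], list(range(len(tracks)))
    cost = np.array([[1.0 - iou(t, d) for d in dets] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    ok = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= 1.0 - iou_min]
    matched = {r for r, _ in ok}
    return ok, [i for i in range(len(tracks)) if i not in matched]

def byte_associate(tracks, dets, scores, high=0.6):
    # Round 1: high-score detections. Round 2: the still-unmatched tracks
    # try the low-score detections instead of discarding them outright.
    hi = [d for d, s in zip(dets, scores) if s >= high]
    lo = [d for d, s in zip(dets, scores) if s < high]
    first, left = hungarian_match(tracks, hi)
    second, _ = hungarian_match([tracks[i] for i in left], lo)
    return first, second  # second's track indices refer to `left`
```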
arXiv Detail & Related papers (2021-10-13T17:01:26Z)
- Quasi-Dense Similarity Learning for Multiple Object Tracking [82.93471035675299]
We present Quasi-Dense Similarity Learning, which densely samples hundreds of region proposals on a pair of images for contrastive learning.
We can directly combine this similarity learning with existing detection methods to build Quasi-Dense Tracking (QDTrack).
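At inference time QDTrack associates tracks and detections by embedding similarity. Below is a sketch of a bi-directional softmax matching score over cosine similarities, assuming L2-normalized embeddings; the quasi-dense contrastive training itself is omitted.

```python
# Sketch of association by learned appearance similarity: tracks and
# detections are scored by a symmetric, bi-directional softmax over
# cosine similarities of their embeddings.
import numpy as np

def bisoftmax_scores(track_emb, det_emb):
    # track_emb: (T, D), det_emb: (N, D); rows assumed L2-normalized.
    sim = track_emb @ det_emb.T                              # cosine similarities
    t2d = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
    d2t = np.exp(sim) / np.exp(sim).sum(axis=0, keepdims=True)
    return (t2d + d2t) / 2.0                                 # symmetric match score
```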
arXiv Detail & Related papers (2020-06-11T17:57:12Z)
- Tracking Objects as Points [83.9217787335878]
We present a simultaneous detection and tracking algorithm that is simpler, faster, and more accurate than the state of the art.
Our tracker, CenterTrack, applies a detection model to a pair of images and detections from the prior frame.
CenterTrack is simple, online (no peeking into the future), and real-time.
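Its matching step is notably lightweight and can be sketched directly: each detection's predicted offset displaces its center back toward the previous frame, and detections claim the nearest unclaimed prior detection in decreasing confidence order, gated by object size. The data layout and gating radius below are illustrative, not CenterTrack's exact code.

```python
# Greedy center matching in the spirit of CenterTrack. Each detection is
# (center, offset, radius, score); radius approximates object size.
import numpy as np

def greedy_center_match(prev_centers, dets):
    pairs, used = [], set()
    prev = [np.asarray(p, dtype=float) for p in prev_centers]
    # Higher-confidence detections claim prior centers first.
    for i, (c, off, r, _) in sorted(enumerate(dets), key=lambda t: -t[1][3]):
        if len(used) == len(prev):
            break                                    # nothing left to claim
        guess = np.asarray(c, dtype=float) - np.asarray(off, dtype=float)
        dists = [np.inf if j in used else np.linalg.norm(guess - p)
                 for j, p in enumerate(prev)]
        j = int(np.argmin(dists))
        if dists[j] < r:                             # size-based gate
            pairs.append((i, j))                     # (detection, prior) ids
            used.add(j)
    return pairs
```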
arXiv Detail & Related papers (2020-04-02T17:58:40Z)