Hybrid-SORT: Weak Cues Matter for Online Multi-Object Tracking
- URL: http://arxiv.org/abs/2308.00783v2
- Date: Sat, 20 Jan 2024 08:06:05 GMT
- Title: Hybrid-SORT: Weak Cues Matter for Online Multi-Object Tracking
- Authors: Mingzhan Yang, Guangxin Han, Bin Yan, Wenhua Zhang, Jinqing Qi,
Huchuan Lu, Dong Wang
- Abstract summary: Multi-Object Tracking (MOT) aims to detect and associate all desired objects across frames.
In this paper, we demonstrate that this long-standing challenge in MOT can be efficiently and effectively resolved by incorporating weak cues.
Our method Hybrid-SORT achieves superior performance on diverse benchmarks, including MOT17, MOT20, and especially DanceTrack.
- Score: 51.16677396148247
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-Object Tracking (MOT) aims to detect and associate all desired objects
across frames. Most methods accomplish the task by explicitly or implicitly
leveraging strong cues (i.e., spatial and appearance information), which
exhibit powerful instance-level discrimination. However, when object occlusion
and clustering occur, spatial and appearance information will become ambiguous
simultaneously due to the high overlap among objects. In this paper, we
demonstrate that this long-standing challenge in MOT can be efficiently and
effectively resolved by incorporating weak cues to compensate for strong cues.
Along with velocity direction, we introduce the confidence and height state as
potential weak cues. With superior performance, our method still maintains
Simple, Online and Real-Time (SORT) characteristics. Also, our method shows
strong generalization for diverse trackers and scenarios in a plug-and-play and
training-free manner. Significant and consistent improvements are observed when
applying our method to 5 different representative trackers. Further, with both
strong and weak cues, our method Hybrid-SORT achieves superior performance on
diverse benchmarks, including MOT17, MOT20, and especially DanceTrack where
interaction and severe occlusion frequently happen with complex motions. The
code and models are available at https://github.com/ymzis69/HybridSORT.
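
The core idea maps naturally onto a SORT-style association step: the usual IoU (strong-cue) cost is augmented with cheap terms derived from detection confidence and box height. The sketch below is a minimal illustration of that fusion under stated assumptions, not the official implementation; the weights W_CONF and W_HEIGHT, the dict-based track/detection layout, and the simple linear combination are assumptions made for clarity, and the released code at the repository above is the authoritative reference.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    """IoU between two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

# Illustrative weights for the weak-cue terms; not tuned values from the paper.
W_CONF, W_HEIGHT = 0.3, 0.3

def associate(tracks, detections):
    """Match tracks to detections with a cost that fuses a strong cue (IoU)
    with two weak cues (confidence difference and height difference).

    Each track/detection is a dict with 'box' = (x1, y1, x2, y2) and
    'score' = detection confidence; for tracks these come from the predicted
    state. Velocity-direction consistency (already used by OC-SORT-style
    trackers) is omitted here for brevity.
    """
    cost = np.zeros((len(tracks), len(detections)))
    for i, trk in enumerate(tracks):
        for j, det in enumerate(detections):
            iou_cost = 1.0 - iou(trk["box"], det["box"])
            conf_cost = abs(trk["score"] - det["score"])        # weak cue: confidence state
            h_trk = trk["box"][3] - trk["box"][1]
            h_det = det["box"][3] - det["box"][1]
            height_cost = abs(h_trk - h_det) / max(h_trk, h_det, 1e-9)  # weak cue: height state
            cost[i, j] = iou_cost + W_CONF * conf_cost + W_HEIGHT * height_cost
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols)), cost
```

Because the extra terms only modify the association cost matrix, they can be bolted onto any IoU-based online tracker without retraining, which is consistent with the plug-and-play, training-free usage described in the abstract.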
Related papers
- Temporal Correlation Meets Embedding: Towards a 2nd Generation of JDE-based Real-Time Multi-Object Tracking [52.04679257903805]
Joint Detection and Embedding (JDE) trackers have demonstrated excellent performance in Multi-Object Tracking (MOT) tasks.
Our tracker, named TCBTrack, achieves state-of-the-art performance on multiple public benchmarks.
arXiv Detail & Related papers (2024-07-19T07:48:45Z)
- Hierarchical IoU Tracking based on Interval [21.555469501789577]
Multi-Object Tracking (MOT) aims to detect and associate all targets of given classes across frames.
We propose the Hierarchical IoU Tracking framework, dubbed HIT, which achieves unified hierarchical tracking by utilizing tracklet intervals as priors.
Our method achieves promising performance on four datasets, i.e., MOT17, KITTI, DanceTrack and VisDrone.
arXiv Detail & Related papers (2024-06-19T07:03:18Z)
- Deciphering Movement: Unified Trajectory Generation Model for Multi-Agent [53.637837706712794]
We propose a Unified Trajectory Generation model, UniTraj, that processes arbitrary trajectories as masked inputs.
Specifically, we introduce a Ghost Spatial Masking (GSM) module embedded within a Transformer encoder for spatial feature extraction.
We benchmark three practical sports game datasets, Basketball-U, Football-U, and Soccer-U, for evaluation.
arXiv Detail & Related papers (2024-05-27T22:15:23Z)
- MotionTrack: Learning Robust Short-term and Long-term Motions for Multi-Object Tracking [56.92165669843006]
We propose MotionTrack, which learns robust short-term and long-term motions in a unified framework to associate trajectories from a short to long range.
For dense crowds, we design a novel Interaction Module to learn interaction-aware motions from short-term trajectories, which can estimate the complex movement of each target.
For extreme occlusions, we build a novel Refind Module to learn reliable long-term motions from the target's history trajectory, which can link the interrupted trajectory with its corresponding detection.
arXiv Detail & Related papers (2023-03-18T12:38:33Z)
- Unifying Short and Long-Term Tracking with Graph Hierarchies [0.0]
We introduce SUSHI, a unified and scalable multi-object tracker.
Our approach processes long clips by splitting them into a hierarchy of subclips, which enables high scalability.
We leverage graph neural networks to process all levels of the hierarchy, which makes our model unified across temporal scales and highly general.
arXiv Detail & Related papers (2022-12-06T15:12:53Z)
- Online Multiple Object Tracking with Cross-Task Synergy [120.70085565030628]
We propose a novel unified model with synergy between position prediction and embedding association.
The two tasks are linked by temporal-aware target attention and distractor attention, as well as an identity-aware memory aggregation model.
arXiv Detail & Related papers (2021-04-01T10:19:40Z)
- DEFT: Detection Embeddings for Tracking [3.326320568999945]
We propose an efficient joint detection and tracking model named DEFT.
Our approach relies on an appearance-based object matching network jointly-learned with an underlying object detection network.
DEFT has comparable accuracy and speed to the top methods on 2D online tracking leaderboards.
arXiv Detail & Related papers (2021-02-03T20:00:44Z)
- MAT: Motion-Aware Multi-Object Tracking [9.098793914779161]
In this paper, we propose Motion-Aware Tracker (MAT), focusing more on various motion patterns of different objects.
Experiments on the challenging MOT16 and MOT17 benchmarks demonstrate that our MAT approach achieves superior performance by a large margin.
arXiv Detail & Related papers (2020-09-10T11:51:33Z)
- SoDA: Multi-Object Tracking with Soft Data Association [75.39833486073597]
Multi-object tracking (MOT) is a prerequisite for a safe deployment of self-driving cars.
We propose a novel approach to MOT that uses attention to compute track embeddings that encode dependencies between observed objects.
arXiv Detail & Related papers (2020-08-18T03:40:25Z)