Deep OC-SORT: Multi-Pedestrian Tracking by Adaptive Re-Identification
- URL: http://arxiv.org/abs/2302.11813v1
- Date: Thu, 23 Feb 2023 06:51:07 GMT
- Title: Deep OC-SORT: Multi-Pedestrian Tracking by Adaptive Re-Identification
- Authors: Gerard Maggiolino, Adnan Ahmad, Jinkun Cao, Kris Kitani
- Abstract summary: We propose a novel way to leverage objects' appearances to integrate appearance matching into motion-based methods.
Building upon the pure motion-based method OC-SORT, we achieve 1st place on MOT20 and 2nd place on MOT17 with 63.9 and 64.9 HOTA, respectively.
- Score: 22.017074242428205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motion-based association for Multi-Object Tracking (MOT) has recently
re-achieved prominence with the rise of powerful object detectors. Despite
this, little work has been done to incorporate appearance cues beyond simple
heuristic models that lack robustness to feature degradation. In this paper, we
propose a novel way to leverage objects' appearances to adaptively integrate
appearance matching into existing high-performance motion-based methods.
Building upon the pure motion-based method OC-SORT, we achieve 1st place on
MOT20 and 2nd place on MOT17 with 63.9 and 64.9 HOTA, respectively. We also
achieve 61.3 HOTA on the challenging DanceTrack benchmark as a new
state-of-the-art even compared to more heavily-designed methods. The code and
models are available at https://github.com/GerardMaggiolino/Deep-OC-SORT.
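The adaptive blending described in the abstract can be pictured as weighting an appearance term into a motion cost before linear assignment. The sketch below only illustrates that idea under assumed conventions; the `iou_matrix`/`associate` names, the confidence-based weight `alpha`, and the gating threshold are invented here and are not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou_matrix(tracks, dets):
    """Pairwise IoU between track boxes and detection boxes, both (N, 4) as (x1, y1, x2, y2)."""
    t = tracks[:, None, :]  # (T, 1, 4)
    d = dets[None, :, :]    # (1, D, 4)
    x1 = np.maximum(t[..., 0], d[..., 0])
    y1 = np.maximum(t[..., 1], d[..., 1])
    x2 = np.minimum(t[..., 2], d[..., 2])
    y2 = np.minimum(t[..., 3], d[..., 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_t = (t[..., 2] - t[..., 0]) * (t[..., 3] - t[..., 1])
    area_d = (d[..., 2] - d[..., 0]) * (d[..., 3] - d[..., 1])
    return inter / (area_t + area_d - inter + 1e-9)


def associate(track_boxes, det_boxes, track_embs, det_embs, det_scores,
              alpha_max=0.5, conf_thresh=0.6):
    """Match tracks to detections with an adaptively weighted cost.

    Appearance is trusted more for confident detections and ignored for
    low-confidence ones; this weighting rule is a stand-in for the adaptive
    scheme the paper describes, not the actual one.
    """
    iou = iou_matrix(track_boxes, det_boxes)   # (T, D) motion/overlap term
    app = track_embs @ det_embs.T              # cosine similarity, embeddings assumed L2-normalized
    # Per-detection adaptive weight: 0 below conf_thresh, rising to alpha_max at score 1.
    alpha = alpha_max * np.clip((det_scores - conf_thresh) / (1 - conf_thresh), 0, 1)
    cost = -(iou + alpha[None, :] * app)
    rows, cols = linear_sum_assignment(cost)
    # Gate out assignments with negligible overlap (threshold chosen arbitrarily here).
    return [(r, c) for r, c in zip(rows, cols) if iou[r, c] > 0.1]
```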
Related papers
- Temporal Correlation Meets Embedding: Towards a 2nd Generation of JDE-based Real-Time Multi-Object Tracking [52.04679257903805]
Joint Detection and Embedding (JDE) trackers have demonstrated excellent performance in Multi-Object Tracking (MOT) tasks.
Our tracker, named TCBTrack, achieves state-of-the-art performance on multiple public benchmarks.
arXiv Detail & Related papers (2024-07-19T07:48:45Z)
- ReIDTrack: Multi-Object Track and Segmentation Without Motion [18.892491706535793]
We consider whether SOTA performance can be achieved using only a high-performance detector and appearance model.
Our method wins 1st place on the MOTS track and 2nd place on the MOT track in the CVPR2023 WAD workshop.
arXiv Detail & Related papers (2023-08-03T08:53:23Z)
- Hybrid-SORT: Weak Cues Matter for Online Multi-Object Tracking [51.16677396148247]
Multi-Object Tracking (MOT) aims to detect and associate all desired objects across frames.
In this paper, we demonstrate that this long-standing challenge in MOT can be efficiently and effectively resolved by incorporating weak cues.
Our method Hybrid-SORT achieves superior performance on diverse benchmarks, including MOT17, MOT20, and especially DanceTrack.
arXiv Detail & Related papers (2023-08-01T18:53:24Z)
- SparseTrack: Multi-Object Tracking by Performing Scene Decomposition based on Pseudo-Depth [84.64121608109087]
We propose a pseudo-depth estimation method for obtaining the relative depth of targets from 2D images.
We then design a depth cascading matching (DCM) algorithm, which uses the obtained depth information to convert a dense target set into multiple sparse target subsets.
By integrating the pseudo-depth method and the DCM strategy into the data association process, we propose a new tracker, called SparseTrack.
arXiv Detail & Related papers (2023-06-08T14:36:10Z)
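The SparseTrack entry above turns a dense crowd into several sparser subsets ordered by pseudo-depth and matches them in a cascade. The toy sketch below assumes the distance from a box's bottom edge to the image bottom as the depth proxy and an arbitrary pairwise matcher `match_fn`; the helper names and the equal-size split are illustrative, not the paper's DCM algorithm.

```python
import numpy as np


def pseudo_depth(boxes, img_h):
    """Pseudo-depth proxy: distance from the image bottom to a box's bottom edge
    (objects lower in the frame are treated as closer). Boxes are (x1, y1, x2, y2)."""
    return img_h - boxes[:, 3]


def depth_levels(boxes, img_h, n_levels=3):
    """Split box indices into n_levels groups, nearest first, by sorting on pseudo-depth."""
    order = np.argsort(pseudo_depth(boxes, img_h))
    return np.array_split(order, n_levels)


def cascaded_match(track_boxes, det_boxes, img_h, match_fn, n_levels=3):
    """Match tracks to detections level by level; anything left unmatched at one
    depth level is carried over and retried at the next (sparser) level."""
    matches = []
    carry_t = np.array([], dtype=int)
    carry_d = np.array([], dtype=int)
    for t_idx, d_idx in zip(depth_levels(track_boxes, img_h, n_levels),
                            depth_levels(det_boxes, img_h, n_levels)):
        t_idx = np.concatenate([carry_t, t_idx])
        d_idx = np.concatenate([carry_d, d_idx])
        if len(t_idx) == 0 or len(d_idx) == 0:
            carry_t, carry_d = t_idx, d_idx
            continue
        local = match_fn(track_boxes[t_idx], det_boxes[d_idx])   # [(row, col), ...]
        matches += [(int(t_idx[r]), int(d_idx[c])) for r, c in local]
        used_t = {r for r, _ in local}
        used_d = {c for _, c in local}
        carry_t = np.array([t_idx[j] for j in range(len(t_idx)) if j not in used_t], dtype=int)
        carry_d = np.array([d_idx[j] for j in range(len(d_idx)) if j not in used_d], dtype=int)
    return matches
```

Any pairwise matcher can be plugged in as `match_fn`, for example an IoU cost followed by Hungarian assignment.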
- Rt-Track: Robust Tricks for Multi-Pedestrian Tracking [4.271127739716044]
We propose a novel direction consistency method for smooth trajectory prediction (STP-DC) to increase the modeling of motion information.
We also propose a hyper-grain feature embedding network (HG-FEN) to enhance the modeling of appearance models.
To achieve state-of-the-art performance in MOT, we propose a robust tracker named Rt-track, incorporating various tricks and techniques.
arXiv Detail & Related papers (2023-03-16T22:08:29Z)
- SMILEtrack: SiMIlarity LEarning for Occlusion-Aware Multiple Object Tracking [20.286114226299237]
This paper introduces SMILEtrack, an innovative object tracker with a Siamese network-based Similarity Learning Module (SLM).
The SLM calculates the appearance similarity between two objects, overcoming the limitations of feature descriptors in Separate Detection and Embedding models.
We also develop a Similarity Matching Cascade (SMC) module with a novel GATE function for robust object matching across consecutive video frames.
arXiv Detail & Related papers (2022-11-16T10:49:48Z)
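The SMILEtrack entry describes a Siamese similarity learning module that scores how alike two object crops look. Below is a generic PyTorch sketch of that pattern, with a placeholder encoder that is not the paper's SLM architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SiameseSimilarity(nn.Module):
    """Toy Siamese similarity module: one shared encoder embeds both crops,
    and similarity is the cosine of the two embeddings."""

    def __init__(self, emb_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(            # shared weights for both branches
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, crop_a, crop_b):
        za = F.normalize(self.encoder(crop_a), dim=1)
        zb = F.normalize(self.encoder(crop_b), dim=1)
        return (za * zb).sum(dim=1)              # cosine similarity in [-1, 1]


# Usage: score pairs of candidate person crops of shape (N, 3, 128, 64).
model = SiameseSimilarity()
a, b = torch.randn(4, 3, 128, 64), torch.randn(4, 3, 128, 64)
print(model(a, b))
```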
- Joint Spatial-Temporal and Appearance Modeling with Transformer for Multiple Object Tracking [59.79252390626194]
We propose a novel solution named TransSTAM, which leverages Transformer to model both the appearance features of each object and the spatial-temporal relationships among objects.
The proposed method is evaluated on multiple public benchmarks including MOT16, MOT17, and MOT20, and it achieves a clear performance improvement in both IDF1 and HOTA.
arXiv Detail & Related papers (2022-05-31T01:19:18Z)
- Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking [32.32109475782992]
We show that a simple motion model can obtain state-of-the-art tracking performance without other cues like appearance.
We thus name the proposed method Observation-Centric SORT, or OC-SORT for short.
arXiv Detail & Related papers (2022-03-27T17:57:08Z)
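The OC-SORT entry argues that a simple motion model already carries most of the tracking signal. Below is a minimal constant-velocity Kalman filter of the kind such trackers build on; the state layout and noise values are arbitrary here, and OC-SORT's observation-centric corrections are deliberately omitted.

```python
import numpy as np


class ConstantVelocityBox:
    """Minimal constant-velocity Kalman filter over (cx, cy, w, h) plus their velocities."""

    def __init__(self, box, dt=1.0, q=1e-2, r=1e-1):
        cx, cy, w, h = box
        self.x = np.array([cx, cy, w, h, 0, 0, 0, 0], dtype=float)  # state: position + velocity
        self.P = np.eye(8)
        self.F = np.eye(8)
        self.F[:4, 4:] = dt * np.eye(4)     # position += velocity * dt
        self.H = np.eye(4, 8)               # we observe only the box, not the velocity
        self.Q = q * np.eye(8)
        self.R = r * np.eye(4)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:4]                   # predicted (cx, cy, w, h)

    def update(self, box):
        z = np.asarray(box, dtype=float)
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(8) - K @ self.H) @ self.P
```

A tracker built on this would call predict() for every live track each frame, associate the predictions with detections (for example by IoU), and then call update() with each matched box.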
- Probabilistic Tracklet Scoring and Inpainting for Multiple Object Tracking [83.75789829291475]
We introduce a probabilistic autoregressive motion model to score tracklet proposals.
This is achieved by training our model to learn the underlying distribution of natural tracklets.
Our experiments demonstrate the superiority of our approach at tracking objects in challenging sequences.
arXiv Detail & Related papers (2020-12-03T23:59:27Z)
- ArTIST: Autoregressive Trajectory Inpainting and Scoring for Tracking [80.02322563402758]
One of the core components in online multiple object tracking (MOT) frameworks is associating new detections with existing tracklets.
We introduce a probabilistic autoregressive generative model to score tracklet proposals by directly measuring the likelihood that a tracklet represents natural motion.
arXiv Detail & Related papers (2020-04-16T06:43:11Z)
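Both tracklet-scoring entries above rate a proposal by the likelihood an autoregressive motion model assigns to it. The toy sketch below illustrates the scoring step only, assuming a hand-set Gaussian AR(1) model over frame-to-frame displacements; ArTIST itself learns a much richer distribution from natural tracklets.

```python
import numpy as np


def tracklet_log_likelihood(boxes, ar_coef=0.9, sigma=2.0):
    """Score a tracklet of shape (T, 4) with rows (cx, cy, w, h): sum of log-densities of
    each frame-to-frame displacement under delta_t ~ N(ar_coef * delta_{t-1}, sigma^2).
    Higher means more 'natural' motion; ar_coef and sigma are assumed, not learned."""
    deltas = np.diff(boxes, axis=0)                                 # (T-1, 4) displacements
    pred = np.vstack([np.zeros((1, 4)), ar_coef * deltas[:-1]])     # predicted displacement per step
    resid = deltas - pred
    log_pdf = -0.5 * (resid / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    return log_pdf.sum()


# Smooth motion should score higher than erratic motion.
smooth = np.cumsum(np.tile([2.0, 0.0, 0.0, 0.0], (20, 1)), axis=0)
erratic = np.cumsum(np.random.randn(20, 4) * 10, axis=0)
print(tracklet_log_likelihood(smooth), tracklet_log_likelihood(erratic))
```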
This list is automatically generated from the titles and abstracts of the papers on this site.