Comparative study of multi-person tracking methods
- URL: http://arxiv.org/abs/2310.04825v2
- Date: Thu, 12 Oct 2023 12:05:15 GMT
- Title: Comparative study of multi-person tracking methods
- Authors: Denis Mbey Akola
- Abstract summary: The purpose of this study is to discover the techniques used and to provide useful insights about these algorithms in the tracking pipeline.
We trained our own Pedestrian Detection model using the MOT17Det dataset.
We then present experimental results showing that Tracktor++ is a better multi-person tracking algorithm than SORT.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a study of two tracking algorithms, SORT and
Tracktor++, that were ranked in the top positions on the MOT Challenge
leaderboard (The MOTChallenge web page: https://motchallenge.net ).
The purpose of this study is to discover the techniques used and to provide
useful insights about these algorithms in the tracking pipeline that could
improve the performance of MOT algorithms. To this end, we adopted the
popular tracking-by-detection approach. We trained our own Pedestrian Detection
model using the MOT17Det dataset (MOT17Det :
https://motchallenge.net/data/MOT17Det/ ). We also used a re-identification
model trained on MOT17 dataset (MOT17 : https://motchallenge.net/data/MOT17/ )
for Tracktor++ to reduce false re-identification alarms. We then present
experimental results showing that Tracktor++ is a better multi-person
tracking algorithm than SORT. We also performed ablation studies to measure
the contribution of the re-identification (RE-ID) network and the motion model to the results
of Tracktor++. We finally conclude by providing some recommendations for future
research.
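The tracking-by-detection pipeline described in the abstract links per-frame detections to existing tracks. As a rough illustration, the SORT-style association step can be sketched as follows; this is a minimal sketch, and the box format, the IoU cost, and the 0.3 threshold are illustrative assumptions, not the paper's exact settings:

```python
# Sketch of IoU-based track/detection association (SORT-style).
# Boxes are (x1, y1, x2, y2); threshold 0.3 is an illustrative assumption.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, iou_threshold=0.3):
    """Match predicted track boxes to new detections by minimizing 1 - IoU."""
    if not tracks or not detections:
        return [], list(range(len(tracks))), list(range(len(detections)))
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    matches = [(r, c) for r, c in zip(rows, cols)
               if 1.0 - cost[r, c] >= iou_threshold]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    unmatched_tracks = [i for i in range(len(tracks)) if i not in matched_t]
    unmatched_dets = [j for j in range(len(detections)) if j not in matched_d]
    return matches, unmatched_tracks, unmatched_dets
```

Unmatched detections typically spawn new tracks, while tracks unmatched for several frames are terminated; Tracktor++ additionally consults a RE-ID embedding before re-activating a lost track.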
Related papers
- Temporal Correlation Meets Embedding: Towards a 2nd Generation of JDE-based Real-Time Multi-Object Tracking [52.04679257903805] (2024-07-19)
  Joint Detection and Embedding (JDE) trackers have demonstrated excellent performance in Multi-Object Tracking (MOT) tasks.
  Our tracker, named TCBTrack, achieves state-of-the-art performance on multiple public benchmarks.
- Tracking with Human-Intent Reasoning [64.69229729784008] (2023-12-29)
  This work proposes a new tracking task, Instruction Tracking, which provides implicit tracking instructions that require trackers to perform tracking automatically in video frames.
  TrackGPT is capable of performing complex reasoning-based tracking.
- Detection-aware multi-object tracking evaluation [1.7880586070278561] (2022-12-16)
  We propose a novel performance measure, named Tracking Effort Measure (TEM), to evaluate trackers that use different detectors.
  TEM can quantify the effort made by the tracker while being less correlated with the input detections.
- EnsembleMOT: A Step towards Ensemble Learning of Multiple Object Tracking [18.741196817925534] (2022-10-11)
  Multiple Object Tracking (MOT) has rapidly progressed in recent years.
  We propose a simple but effective ensemble method for MOT, called EnsembleMOT.
  Our method is model-independent and requires no learning procedure.
- Bag of Tricks for Domain Adaptive Multi-Object Tracking [4.084199842578325] (2022-05-31)
  The proposed method is built from a pre-existing detector and tracker under the tracking-by-detection paradigm.
  The tracker used is an online tracker that merely links newly received detections with existing tracks.
  Our method, SIA_Track, takes first place on the MOTSynth2MOT17 track of the BMTT 2022 challenge.
- StrongSORT: Make DeepSORT Great Again [19.099510933467148] (2022-02-28)
  We revisit the classic tracker DeepSORT and upgrade it in various aspects, i.e., detection, embedding and association.
  The resulting tracker, called StrongSORT, sets new HOTA and IDF1 records on MOT17 and MOT20.
  We present two lightweight and plug-and-play algorithms to further refine the tracking results.
- Probabilistic Tracklet Scoring and Inpainting for Multiple Object Tracking [83.75789829291475] (2020-12-03)
  We introduce a probabilistic autoregressive motion model to score tracklet proposals.
  This is achieved by training our model to learn the underlying distribution of natural tracklets.
  Our experiments demonstrate the superiority of our approach at tracking objects in challenging sequences.
- Simultaneous Detection and Tracking with Motion Modelling for Multiple Object Tracking [94.24393546459424] (2020-08-20)
  We introduce the Deep Motion Modeling Network (DMM-Net), which estimates multiple objects' motion parameters to perform joint detection and association.
  DMM-Net achieves a PR-MOTA score of 12.80 at 120+ fps on the popular UA-DETRAC challenge, outperforming prior methods while running orders of magnitude faster.
  We also contribute Omni-MOT, a synthetic large-scale public dataset for vehicle tracking with precise ground-truth annotations.
- ArTIST: Autoregressive Trajectory Inpainting and Scoring for Tracking [80.02322563402758] (2020-04-16)
  One of the core components in online multiple object tracking (MOT) frameworks is associating new detections with existing tracklets.
  We introduce a probabilistic autoregressive generative model that scores tracklet proposals by directly measuring the likelihood that a tracklet represents natural motion.
- Tracking by Instance Detection: A Meta-Learning Approach [99.66119903655711] (2020-04-02)
  We propose a principled three-step approach to build a high-performance tracker.
  We build two trackers, named Retina-MAML and FCOS-MAML, based on two modern detectors, RetinaNet and FCOS.
  Both trackers run in real time at 40 FPS.
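Several of the trackers discussed above (SORT, StrongSORT, DMM-Net) rely on a motion model, most commonly a constant-velocity Kalman filter, to predict where each track should appear in the next frame. The predict/update cycle can be sketched as below; the state layout and the noise constants here are illustrative assumptions, not values taken from any of the papers:

```python
# Sketch of a constant-velocity Kalman filter over an object's center.
# State is [cx, cy, vx, vy]; process/measurement noise values are assumptions.
import numpy as np

def make_cv_model(dt=1.0):
    """Build transition matrix F and observation matrix H for the CV model."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt          # position += velocity * dt
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0         # only the center position is observed
    return F, H

def kalman_step(x, P, z, F, H, q=1e-2, r=1e-1):
    """One predict + update cycle for a measured center z = (cx, cy)."""
    # Predict: propagate state and covariance through the motion model
    x = F @ x
    P = F @ P @ F.T + q * np.eye(4)
    # Update: correct with the new measurement
    S = H @ P @ H.T + r * np.eye(2)
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (np.asarray(z, dtype=float) - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

In a SORT-style tracker, the predicted position from this filter is what gets matched against new detections; the RE-ID and motion ablations in the main paper probe how much each of these two cues contributes.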
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.