GMOT-40: A Benchmark for Generic Multiple Object Tracking
- URL: http://arxiv.org/abs/2011.11858v3
- Date: Wed, 7 Apr 2021 19:13:00 GMT
- Title: GMOT-40: A Benchmark for Generic Multiple Object Tracking
- Authors: Hexin Bai, Wensheng Cheng, Peng Chu, Juehuan Liu, Kai Zhang, Haibin Ling
- Abstract summary: We make contributions to boost the study of Generic Multiple Object Tracking (GMOT) in three aspects.
First, we construct the first public GMOT dataset, dubbed GMOT-40, which contains 40 carefully annotated sequences evenly distributed among 10 object categories.
Second, by noting the lack of devoted tracking algorithms, we have designed a series of baseline GMOT algorithms.
Third, we perform a thorough evaluation on GMOT-40, involving popular MOT algorithms and the proposed baselines.
- Score: 65.80411267046786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multiple Object Tracking (MOT) has witnessed remarkable advances in recent
years. However, existing studies predominantly require prior knowledge of the
tracking target and hence may not generalize well to unseen categories. In
contrast, Generic Multiple Object Tracking (GMOT), which requires little prior
information about the target, is largely under-explored. In this paper, we make
contributions to boost the study of GMOT in three aspects. First, we construct
the first public GMOT dataset, dubbed GMOT-40, which contains 40 carefully
annotated sequences evenly distributed among 10 object categories. In addition,
two tracking protocols are adopted to evaluate different characteristics of
tracking algorithms. Second, by noting the lack of devoted tracking algorithms,
we have designed a series of baseline GMOT algorithms. Third, we perform a
thorough evaluation on GMOT-40, involving popular MOT algorithms (with
necessary modifications) and the proposed baselines. We will release the
GMOT-40 benchmark, the evaluation results, as well as the baseline algorithm to
the public upon the publication of the paper.
Related papers
- Enhanced Kalman with Adaptive Appearance Motion SORT for Grounded Generic Multiple Object Tracking [0.08333024746293495]
Grounded-GMOT is an innovative tracking paradigm that enables users to track multiple generic objects in videos through natural language descriptors.
Our contributions begin with the introduction of the G2MOT dataset, which includes a collection of videos featuring a wide variety of generic objects.
Following this, we propose a novel tracking method, KAM-SORT, which not only effectively integrates visual appearance with motion cues but also enhances the Kalman filter.
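KAM-SORT's exact formulation is not given in this listing; as a rough illustration of the general idea of fusing visual appearance with motion cues in a SORT-style tracker, the sketch below blends an IoU-based motion cost with an appearance cosine distance before matching. The function names, the 0.5/0.5 weighting, and the greedy matching are illustrative assumptions, not the paper's actual method:

```python
# Illustrative sketch only: fuse motion (IoU) and appearance (cosine
# similarity) costs for track-detection association, as SORT-style
# trackers do. Weights and greedy matching are assumptions.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def cosine_sim(u, v):
    """Cosine similarity of two appearance embeddings."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = sum(x * x for x in u) ** 0.5
    nv = sum(y * y for y in v) ** 0.5
    return dot / (nu * nv + 1e-9)

def associate(tracks, dets, w_motion=0.5, w_app=0.5, max_cost=0.7):
    """Greedily match tracks to detections on a blended cost.
    tracks/dets: lists of (box, appearance_embedding) pairs."""
    pairs = []
    for ti, (tb, tf) in enumerate(tracks):
        for di, (db, df) in enumerate(dets):
            cost = (w_motion * (1.0 - iou(tb, db))
                    + w_app * (1.0 - cosine_sim(tf, df)))
            pairs.append((cost, ti, di))
    pairs.sort()  # cheapest pairs matched first
    used_t, used_d, matches = set(), set(), []
    for cost, ti, di in pairs:
        if cost <= max_cost and ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches
```

A production tracker would typically use Hungarian assignment and a Kalman-predicted box in place of the raw track box, but the blended-cost structure is the same.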
arXiv Detail & Related papers (2024-10-11T20:38:17Z)
- OCTrack: Benchmarking the Open-Corpus Multi-Object Tracking [63.53176412315835]
We study a novel yet practical problem of open-corpus multi-object tracking (OCMOT).
We build OCTrackB, a large-scale and comprehensive benchmark, to provide a standard evaluation platform for the OCMOT problem.
arXiv Detail & Related papers (2024-07-19T05:58:01Z)
- Siamese-DETR for Generic Multi-Object Tracking [16.853363984562602]
Traditional Multi-Object Tracking (MOT) is limited to tracking objects belonging to the pre-defined closed-set categories.
Siamese-DETR is proposed to track objects beyond pre-defined categories with the given text prompt and template image.
Siamese-DETR surpasses existing MOT methods on GMOT-40 dataset by a large margin.
arXiv Detail & Related papers (2023-10-27T03:32:05Z)
- Comparative study of multi-person tracking methods [0.0]
The purpose of this study is to discover the techniques used and to provide useful insights about these algorithms in the tracking pipeline.
We trained our own Pedestrian Detection model using the MOT17Det dataset.
We then present experimental results showing that Tracktor++ outperforms SORT for multi-person tracking.
arXiv Detail & Related papers (2023-10-07T14:29:57Z)
- Z-GMOT: Zero-shot Generic Multiple Object Tracking [8.878331472995498]
Multi-Object Tracking (MOT) faces limitations such as reliance on prior knowledge and predefined categories.
To address these issues, Generic Multiple Object Tracking (GMOT) has emerged as an alternative approach.
We propose Z-GMOT, a cutting-edge tracking solution capable of tracking objects from never-seen categories without the need for initial bounding boxes or predefined categories.
arXiv Detail & Related papers (2023-05-28T06:44:33Z)
- OmniTracker: Unifying Object Tracking by Tracking-with-Detection [119.51012668709502]
OmniTracker is presented to resolve all the tracking tasks with a fully shared network architecture, model weights, and inference pipeline.
Experiments on 7 tracking datasets, including LaSOT, TrackingNet, DAVIS16-17, MOT17, MOTS20, and YTVIS19, demonstrate that OmniTracker achieves on-par or even better results than both task-specific and unified tracking models.
arXiv Detail & Related papers (2023-03-21T17:59:57Z)
- Tracking Every Thing in the Wild [61.917043381836656]
We introduce a new metric, Track Every Thing Accuracy (TETA), breaking tracking measurement into three sub-factors: localization, association, and classification.
Our experiments show that TETA evaluates trackers more comprehensively, and TETer achieves significant improvements on the challenging large-scale datasets BDD100K and TAO.
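TETA is described in the paper as combining the three sub-factors into a single score by averaging. As a minimal sketch of that combination step only (the sub-scores themselves require full tracker output and ground truth, and are assumed precomputed here):

```python
# Minimal sketch: TETA combines localization, association, and
# classification accuracy by simple averaging. Computing the three
# sub-scores from tracker output is out of scope; they are assumed
# precomputed and normalized to [0, 1].

def teta(loc_a: float, assoc_a: float, cls_a: float) -> float:
    """Track Every Thing Accuracy: mean of the three sub-factors."""
    for s in (loc_a, assoc_a, cls_a):
        if not 0.0 <= s <= 1.0:
            raise ValueError("sub-scores must lie in [0, 1]")
    return (loc_a + assoc_a + cls_a) / 3.0
```

The equal weighting is what keeps a tracker from hiding a weak classifier behind strong localization, which is the failure mode the metric is designed to expose.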
arXiv Detail & Related papers (2022-07-26T15:37:19Z)
- MOTChallenge: A Benchmark for Single-Camera Multiple Target Tracking [72.76685780516371]
We present MOTChallenge, a benchmark for single-camera Multiple Object Tracking (MOT).
The benchmark is focused on multiple people tracking, since pedestrians are by far the most studied object in the tracking community.
We provide a categorization of state-of-the-art trackers and a broad error analysis.
arXiv Detail & Related papers (2020-10-15T06:52:16Z)
- MOT20: A benchmark for multi object tracking in crowded scenes [73.92443841487503]
We present our MOT20 benchmark, consisting of 8 new sequences depicting very crowded and challenging scenes.
The benchmark was first presented at the 4th BMTT MOT Challenge Workshop at the Computer Vision and Pattern Recognition Conference (CVPR).
arXiv Detail & Related papers (2020-03-19T20:08:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences of its use.