MOTChallenge: A Benchmark for Single-Camera Multiple Target Tracking
- URL: http://arxiv.org/abs/2010.07548v2
- Date: Tue, 8 Dec 2020 09:10:53 GMT
- Title: MOTChallenge: A Benchmark for Single-Camera Multiple Target Tracking
- Authors: Patrick Dendorfer and Aljoša Ošep and Anton Milan and Konrad Schindler and Daniel Cremers and Ian Reid and Stefan Roth and Laura Leal-Taixé
- Abstract summary: We present MOTChallenge, a benchmark for single-camera Multiple Object Tracking (MOT)
The benchmark is focused on multiple people tracking, since pedestrians are by far the most studied object in the tracking community.
We provide a categorization of state-of-the-art trackers and a broad error analysis.
- Score: 72.76685780516371
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Standardized benchmarks have been crucial in pushing the performance of
computer vision algorithms, especially since the advent of deep learning.
Although leaderboards should not be over-claimed, they often provide the most
objective measure of performance and are therefore important guides for
research. We present MOTChallenge, a benchmark for single-camera Multiple
Object Tracking (MOT) launched in late 2014, to collect existing and new data,
and create a framework for the standardized evaluation of multiple object
tracking methods. The benchmark is focused on multiple people tracking, since
pedestrians are by far the most studied object in the tracking community, with
applications ranging from robot navigation to self-driving cars. This paper
collects the first three releases of the benchmark: (i) MOT15, together with the
numerous state-of-the-art results submitted in recent years, (ii) MOT16, which
contains new challenging videos, and (iii) MOT17, which extends the MOT16
sequences with more precise labels and evaluates tracking performance with
three different object detectors. The second and third releases not only offer
a significant increase in the number of labeled boxes, but also provide labels
for multiple object classes besides pedestrians, as well as the visibility
level of every single object of interest. We finally provide a
categorization of state-of-the-art trackers and a broad error analysis. This
will help newcomers understand the related work and research trends in the MOT
community, and hopefully shed some light on potential future research
directions.
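As a rough illustration of what the standardized evaluation involves, the sketch below computes the widely used CLEAR-MOT accuracy score (MOTA) from aggregated error counts. This is not the official devkit, which additionally performs the per-frame matching between predictions and ground truth and reports further metrics (MOTP, IDF1, ...).

```python
# Minimal sketch of the MOTA score, assuming FN/FP/ID-switch counts have
# already been aggregated over all frames of a sequence.

def mota(false_negatives: int, false_positives: int, id_switches: int,
         num_gt_boxes: int) -> float:
    """MOTA = 1 - (FN + FP + IDSW) / GT, aggregated over all frames."""
    if num_gt_boxes <= 0:
        raise ValueError("ground-truth box count must be positive")
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt_boxes

# Example: 300 misses, 150 false positives, 50 identity switches, 2000 GT boxes
print(f"MOTA = {mota(300, 150, 50, 2000):.3f}")  # -> 0.750
```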
Related papers
- Tracking Reflected Objects: A Benchmark [12.770787846444406]
We introduce TRO, a benchmark specifically for Tracking Reflected Objects.
TRO includes 200 sequences with around 70,000 frames, each carefully annotated with bounding boxes.
To provide a stronger baseline, we propose a new tracker, HiP-HaTrack, which uses hierarchical features to improve performance.
arXiv Detail & Related papers (2024-07-07T02:22:45Z)
- OVTrack: Open-Vocabulary Multiple Object Tracking [64.73379741435255]
OVTrack is an open-vocabulary tracker capable of tracking arbitrary object classes.
It sets a new state-of-the-art on the large-scale, large-vocabulary TAO benchmark.
arXiv Detail & Related papers (2023-04-17T16:20:05Z)
- Beyond SOT: Tracking Multiple Generic Objects at Once [141.36900362724975]
Generic Object Tracking (GOT) is the problem of tracking target objects, specified by bounding boxes in the first frame of a video.
We introduce a new large-scale GOT benchmark, LaGOT, containing multiple annotated target objects per sequence.
Our approach achieves highly competitive results on single-object GOT datasets, setting a new state of the art on TrackingNet with a success rate AUC of 84.4%.
arXiv Detail & Related papers (2022-12-22T17:59:19Z)
- AttTrack: Online Deep Attention Transfer for Multi-object Tracking [4.5116674432168615]
Multi-object tracking (MOT) is a vital component of intelligent video analytics applications such as surveillance and autonomous driving.
In this paper, we aim to accelerate MOT by transferring the knowledge from high-level features of a complex network (teacher) to a lightweight network (student) at both training and inference times.
The proposed AttTrack framework has three key components: 1) cross-model feature learning to align intermediate representations from the teacher and student models, 2) interleaving the execution of the two models at inference time, and 3) incorporating the updated predictions from the teacher model as prior knowledge to assist the student model. A sketch of the feature-alignment idea follows below.
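The abstract does not spell out how the cross-model feature learning is implemented; the sketch below shows a generic feature-alignment (distillation) loss between teacher and student backbones, which is one common way to realize such a component. The projection module, tensor shapes, and loss weight are assumptions for illustration, not AttTrack's actual loss.

```python
# Illustrative only: a generic feature-alignment term between intermediate
# teacher and student representations.
import torch
import torch.nn.functional as F

def feature_alignment_loss(student_feat: torch.Tensor,
                           teacher_feat: torch.Tensor,
                           proj: torch.nn.Module) -> torch.Tensor:
    """MSE between projected student features and detached teacher features."""
    aligned = proj(student_feat)          # e.g. a 1x1 conv matching channel counts
    return F.mse_loss(aligned, teacher_feat.detach())

# Hypothetical usage during training:
# proj = torch.nn.Conv2d(student_channels, teacher_channels, kernel_size=1)
# total_loss = tracking_loss + kd_weight * feature_alignment_loss(s_feat, t_feat, proj)
```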
arXiv Detail & Related papers (2022-10-16T22:15:31Z)
- Probabilistic 3D Multi-Modal, Multi-Object Tracking for Autonomous Driving [22.693895321632507]
We propose a probabilistic, multi-modal, multi-object tracking system consisting of different trainable modules.
We show that our method outperforms current state-of-the-art on the NuScenes Tracking dataset.
arXiv Detail & Related papers (2020-12-26T15:00:54Z)
- Probabilistic Tracklet Scoring and Inpainting for Multiple Object Tracking [83.75789829291475]
We introduce a probabilistic autoregressive motion model to score tracklet proposals.
This is achieved by training our model to learn the underlying distribution of natural tracklets.
Our experiments demonstrate the superiority of our approach at tracking objects in challenging sequences.
arXiv Detail & Related papers (2020-12-03T23:59:27Z)
- TAO: A Large-Scale Benchmark for Tracking Any Object [95.87310116010185]
The Tracking Any Object (TAO) dataset consists of 2,907 high-resolution videos, captured in diverse environments, which are half a minute long on average.
We ask annotators to label objects that move at any point in the video, and give names to them post factum.
Our vocabulary is both significantly larger and qualitatively different from existing tracking datasets.
arXiv Detail & Related papers (2020-05-20T21:07:28Z)
- MOT20: A benchmark for multi object tracking in crowded scenes [73.92443841487503]
We present our MOT20 benchmark, consisting of 8 new sequences depicting very crowded and challenging scenes.
The benchmark was first presented at the 4th BMTT MOT Challenge Workshop at the Computer Vision and Pattern Recognition Conference (CVPR).
arXiv Detail & Related papers (2020-03-19T20:08:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.