MOT20: A benchmark for multi object tracking in crowded scenes
- URL: http://arxiv.org/abs/2003.09003v1
- Date: Thu, 19 Mar 2020 20:08:24 GMT
- Title: MOT20: A benchmark for multi object tracking in crowded scenes
- Authors: Patrick Dendorfer, Hamid Rezatofighi, Anton Milan, Javen Shi, Daniel
Cremers, Ian Reid, Stefan Roth, Konrad Schindler, and Laura Leal-Taixé
- Abstract summary: We present our MOT20 benchmark, consisting of 8 new sequences depicting very crowded, challenging scenes.
The benchmark was first presented at the 4th BMTT MOT Challenge Workshop at the Computer Vision and Pattern Recognition Conference (CVPR)
- Score: 73.92443841487503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Standardized benchmarks are crucial for the majority of computer vision
applications. Although leaderboards and ranking tables should not be
over-claimed, benchmarks often provide the most objective measure of
performance and are therefore important guides for research. The benchmark for
Multiple Object Tracking, MOTChallenge, was launched with the goal to establish
a standardized evaluation of multiple object tracking methods. The challenge
focuses on multiple people tracking, since pedestrians are well studied in the
tracking community, and precise tracking and detection have high practical
relevance. Since the first release, MOT15, MOT16, and MOT17 have tremendously
contributed to the community by introducing a clean dataset and precise
framework to benchmark multi-object trackers. In this paper, we present our
MOT20 benchmark, consisting of 8 new sequences depicting very crowded,
challenging scenes. The benchmark was first presented at the 4th BMTT MOT
Challenge Workshop at the Computer Vision and Pattern Recognition Conference
(CVPR) 2019, and gives the chance to evaluate state-of-the-art methods for
multiple object tracking when handling extremely crowded scenarios.
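The MOTChallenge framework scores submissions with the CLEAR MOT metrics; as a rough illustration of what this standardized evaluation measures, the sketch below computes MOTA (Multiple Object Tracking Accuracy) from accumulated per-frame error counts. Function and variable names here are illustrative assumptions, not the official evaluation devkit API.

```python
# Minimal sketch of the MOTA metric used by MOTChallenge-style benchmarks.
# MOTA = 1 - (FN + FP + IDSW) / GT, with all counts summed over frames.

def mota(frames):
    """frames: iterable of (false_negatives, false_positives, id_switches, num_gt) per frame."""
    fn = fp = idsw = gt = 0
    for f_fn, f_fp, f_idsw, f_gt in frames:
        fn += f_fn
        fp += f_fp
        idsw += f_idsw
        gt += f_gt
    return 1.0 - (fn + fp + idsw) / gt

# Example: three frames of a crowded sequence
print(mota([(2, 1, 0, 40), (3, 0, 1, 42), (1, 2, 0, 41)]))  # ~0.9187
```

Note that MOTA can become negative when the accumulated errors exceed the number of ground-truth objects, which happens easily in the extremely crowded MOT20 sequences.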
Related papers
- Tracking Reflected Objects: A Benchmark [12.770787846444406]
We introduce TRO, a benchmark specifically for Tracking Reflected Objects.
TRO includes 200 sequences with around 70,000 frames, each carefully annotated with bounding boxes.
To provide a stronger baseline, we propose a new tracker, HiP-HaTrack, which uses hierarchical features to improve performance.
arXiv Detail & Related papers (2024-07-07T02:22:45Z)
- TopTrack: Tracking Objects By Their Top [13.020122353444497]
TopTrack is a joint detection-and-tracking method that uses the top of the object as a keypoint for detection instead of the center.
We performed experiments to show that using the object top as a keypoint for detection can reduce the amount of missed detections.
arXiv Detail & Related papers (2023-04-12T19:00:12Z)
- Beyond SOT: Tracking Multiple Generic Objects at Once [141.36900362724975]
Generic Object Tracking (GOT) is the problem of tracking target objects, specified by bounding boxes in the first frame of a video.
We introduce a new large-scale GOT benchmark, LaGOT, containing multiple annotated target objects per sequence.
Our approach achieves highly competitive results on single-object GOT datasets, setting a new state of the art on TrackingNet with a success rate AUC of 84.4%.
arXiv Detail & Related papers (2022-12-22T17:59:19Z)
- End-to-end Tracking with a Multi-query Transformer [96.13468602635082]
Multiple-object tracking (MOT) is a challenging task that requires simultaneous reasoning about location, appearance, and identity of the objects in the scene over time.
Our aim in this paper is to move beyond tracking-by-detection approaches, to class-agnostic tracking that performs well also for unknown object classes.
arXiv Detail & Related papers (2022-10-26T10:19:37Z)
- Simple Cues Lead to a Strong Multi-Object Tracker [3.7189423451031356]
We propose a new type of tracking-by-detection (TbD) for Multi-Object Tracking.
We show that a combination of our appearance features with a simple motion model leads to strong tracking results (a generic sketch of this kind of appearance-plus-motion association step appears after this list).
Our tracker generalizes to four public datasets, namely MOT17, MOT20, BDD100k, and DanceTrack, achieving state-of-the-art performance.
arXiv Detail & Related papers (2022-06-09T17:55:51Z)
- Probabilistic Tracklet Scoring and Inpainting for Multiple Object Tracking [83.75789829291475]
We introduce a probabilistic autoregressive motion model to score tracklet proposals.
This is achieved by training our model to learn the underlying distribution of natural tracklets.
Our experiments demonstrate the superiority of our approach at tracking objects in challenging sequences.
arXiv Detail & Related papers (2020-12-03T23:59:27Z)
- MOTChallenge: A Benchmark for Single-Camera Multiple Target Tracking [72.76685780516371]
We present MOTChallenge, a benchmark for single-camera Multiple Object Tracking (MOT)
The benchmark is focused on multiple people tracking, since pedestrians are by far the most studied object in the tracking community.
We provide a categorization of state-of-the-art trackers and a broad error analysis.
arXiv Detail & Related papers (2020-10-15T06:52:16Z)
- TAO: A Large-Scale Benchmark for Tracking Any Object [95.87310116010185]
The Tracking Any Object dataset consists of 2,907 high-resolution videos captured in diverse environments, each half a minute long on average.
We ask annotators to label objects that move at any point in the video, and give names to them post factum.
Our vocabulary is both significantly larger and qualitatively different from existing tracking datasets.
arXiv Detail & Related papers (2020-05-20T21:07:28Z)
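Several of the trackers above follow the tracking-by-detection paradigm mentioned in the Simple Cues summary: detections in each frame are matched to existing tracks by combining a motion cue (e.g. IoU against a predicted box) with an appearance cue (embedding similarity). The sketch below is a generic, hypothetical version of that association step, not the exact method of any paper listed here.

```python
# Generic tracking-by-detection association step (hypothetical sketch):
# combine IoU overlap with appearance cosine similarity and solve the
# resulting assignment with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(track_boxes, track_feats, det_boxes, det_feats, w=0.5, thresh=0.3):
    """Match tracks to detections; returns a list of (track_idx, det_idx) pairs."""
    cost = np.zeros((len(track_boxes), len(det_boxes)))
    for i, (tb, tf) in enumerate(zip(track_boxes, track_feats)):
        for j, (db, df) in enumerate(zip(det_boxes, det_feats)):
            app = float(np.dot(tf, df) /
                        (np.linalg.norm(tf) * np.linalg.norm(df) + 1e-9))
            # Negate the combined similarity: the Hungarian solver minimizes cost.
            cost[i, j] = -(w * iou(tb, db) + (1 - w) * app)
    rows, cols = linear_sum_assignment(cost)
    # Keep only matches whose combined similarity clears the threshold.
    return [(r, c) for r, c in zip(rows, cols) if -cost[r, c] >= thresh]
```

Unmatched detections would typically start new tracks and unmatched tracks would age out after a few frames; published trackers differ mainly in how the motion prediction and appearance embeddings are obtained.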