Tracking Every Thing in the Wild
- URL: http://arxiv.org/abs/2207.12978v1
- Date: Tue, 26 Jul 2022 15:37:19 GMT
- Title: Tracking Every Thing in the Wild
- Authors: Siyuan Li, Martin Danelljan, Henghui Ding, Thomas E. Huang, Fisher Yu
- Abstract summary: We introduce a new metric, Track Every Thing Accuracy (TETA), breaking tracking measurement into three sub-factors: localization, association, and classification.
Our experiments show that TETA evaluates trackers more comprehensively, and TETer achieves significant improvements on the challenging large-scale datasets BDD100K and TAO.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current multi-category Multiple Object Tracking (MOT) metrics use class
labels to group tracking results for per-class evaluation. Similarly, MOT
methods typically only associate objects with the same class predictions. These
two prevalent strategies in MOT implicitly assume that the classification
performance is near-perfect. However, this is far from the case in recent
large-scale MOT datasets, which contain large numbers of classes with many rare
or semantically similar categories. Therefore, the resulting inaccurate
classification leads to sub-optimal tracking and inadequate benchmarking of
trackers. We address these issues by disentangling classification from
tracking. We introduce a new metric, Track Every Thing Accuracy (TETA),
breaking tracking measurement into three sub-factors: localization,
association, and classification, allowing comprehensive benchmarking of
tracking performance even under inaccurate classification. TETA also deals with
the challenging incomplete annotation problem in large-scale tracking datasets.
We further introduce a Track Every Thing tracker (TETer), that performs
association using Class Exemplar Matching (CEM). Our experiments show that TETA
evaluates trackers more comprehensively, and TETer achieves significant
improvements on the challenging large-scale datasets BDD100K and TAO compared
to the state-of-the-art.
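The abstract describes TETA as breaking tracking measurement into three sub-factors: localization, association, and classification. A minimal sketch of that idea follows; the function name and the equal-weight average are illustrative assumptions, not the paper's exact per-class formulation:

```python
def teta_score(localization: float, association: float, classification: float) -> float:
    """Combine the three TETA sub-factors into one score.

    Each sub-factor is assumed to be an accuracy in [0, 1]; the
    equal-weight average shown here is an illustrative simplification
    of the metric's decomposition.
    """
    for name, value in [("localization", localization),
                        ("association", association),
                        ("classification", classification)]:
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be in [0, 1], got {value}")
    return (localization + association + classification) / 3.0
```

Under such a decomposition, a tracker with strong localization and association but weak classification still receives credit for the first two sub-factors, which is exactly the disentanglement the metric aims for.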
Related papers
- OCTrack: Benchmarking the Open-Corpus Multi-Object Tracking [63.53176412315835]
We study a novel yet practical problem of open-corpus multi-object tracking (OCMOT).
We build OCTrackB, a large-scale and comprehensive benchmark, to provide a standard evaluation platform for the OCMOT problem.
arXiv Detail & Related papers (2024-07-19T05:58:01Z)
- Bridging the Gap Between End-to-end and Non-End-to-end Multi-Object Tracking [27.74953961900086]
Existing end-to-end Multi-Object Tracking (e2e-MOT) methods have not surpassed non-end-to-end tracking-by-detection methods.
We present Co-MOT, a simple and effective method to facilitate e2e-MOT by a novel coopetition label assignment with a shadow concept.
arXiv Detail & Related papers (2023-05-22T05:18:34Z)
- End-to-end Tracking with a Multi-query Transformer [96.13468602635082]
Multiple-object tracking (MOT) is a challenging task that requires simultaneous reasoning about location, appearance, and identity of the objects in the scene over time.
Our aim in this paper is to move beyond tracking-by-detection approaches, to class-agnostic tracking that performs well also for unknown object classes.
arXiv Detail & Related papers (2022-10-26T10:19:37Z)
- QDTrack: Quasi-Dense Similarity Learning for Appearance-Only Multiple Object Tracking [73.52284039530261]
We present Quasi-Dense Similarity Learning, which densely samples hundreds of object regions on a pair of images for contrastive learning.
We find that the resulting distinctive feature space admits a simple nearest neighbor search at inference time for object association.
We show that our similarity learning scheme is not limited to video data, but can learn effective instance similarity even from static input.
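The nearest-neighbor association described above can be sketched as a cosine-similarity lookup between learned embedding vectors. The function, embeddings, and threshold below are illustrative assumptions, not QDTrack's actual implementation or values:

```python
import numpy as np

def associate(track_embs: np.ndarray, det_embs: np.ndarray,
              threshold: float = 0.5) -> list:
    """Match each detection to its nearest track by cosine similarity.

    track_embs: (T, D) array of existing track embeddings.
    det_embs:   (N, D) array of new detection embeddings.
    Returns a list of (det_idx, track_idx-or-None) pairs; None means
    no track was similar enough, so a new track would be started.
    """
    # L2-normalize so the dot product equals cosine similarity.
    t = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    d = det_embs / np.linalg.norm(det_embs, axis=1, keepdims=True)
    sim = d @ t.T  # (N, T) similarity matrix
    matches = []
    for i in range(sim.shape[0]):
        j = int(np.argmax(sim[i]))
        matches.append((i, j if sim[i, j] >= threshold else None))
    return matches
```

The point of the contrastive training described above is precisely that such a simple greedy lookup suffices, with no motion model or per-pair scoring network required.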
arXiv Detail & Related papers (2022-10-12T15:47:36Z)
- mvHOTA: A multi-view higher order tracking accuracy metric to measure spatial and temporal associations in multi-point detection [1.039718070553655]
Multi-object tracking (MOT) is a challenging task that involves detecting objects in the scene and tracking them across a sequence of frames.
The main evaluation metric to benchmark MOT methods on datasets such as KITTI has recently become the higher order tracking accuracy (HOTA) metric.
We propose a multi-view higher order tracking metric (mvHOTA) to determine the accuracy of multi-point (multi-instance and multi-class) detection.
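The HOTA metric mentioned above combines detection accuracy and association accuracy via a geometric mean. A minimal sketch of that combination, simplified to a single localization threshold rather than HOTA's average over a range of thresholds:

```python
import math

def hota(det_acc: float, ass_acc: float) -> float:
    """Geometric mean of detection accuracy (DetA) and association
    accuracy (AssA), as in HOTA at one localization threshold.

    The geometric mean penalizes imbalance: a tracker must do well
    at both detection and association to score highly.
    """
    if not (0.0 <= det_acc <= 1.0 and 0.0 <= ass_acc <= 1.0):
        raise ValueError("accuracies must be in [0, 1]")
    return math.sqrt(det_acc * ass_acc)
```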
arXiv Detail & Related papers (2022-06-19T10:31:53Z)
- TDT: Teaching Detectors to Track without Fully Annotated Videos [2.8292841621378844]
One-stage trackers that predict both detections and appearance embeddings in one forward pass have received much attention.
Our proposed one-stage solution matches the two-stage counterpart in quality but is 3 times faster.
arXiv Detail & Related papers (2022-05-11T15:56:17Z)
- Unified Transformer Tracker for Object Tracking [58.65901124158068]
We present the Unified Transformer Tracker (UTT) to address tracking problems in different scenarios with one paradigm.
A track transformer is developed in our UTT to track the target in both Single Object Tracking (SOT) and Multiple Object Tracking (MOT).
arXiv Detail & Related papers (2022-03-29T01:38:49Z)
- TAO: A Large-Scale Benchmark for Tracking Any Object [95.87310116010185]
The Tracking Any Object (TAO) dataset consists of 2,907 high-resolution videos, captured in diverse environments, averaging half a minute in length.
We ask annotators to label objects that move at any point in the video, and give names to them post factum.
Our vocabulary is both significantly larger and qualitatively different from existing tracking datasets.
arXiv Detail & Related papers (2020-05-20T21:07:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.