An Informative Tracking Benchmark
- URL: http://arxiv.org/abs/2112.06467v1
- Date: Mon, 13 Dec 2021 07:56:16 GMT
- Title: An Informative Tracking Benchmark
- Authors: Xin Li and Qiao Liu and Wenjie Pei and Qiuhong Shen and Yaowei Wang
and Huchuan Lu and Ming-Hsuan Yang
- Abstract summary: We develop a small and informative tracking benchmark (ITB) comprising 7% of the 1.2M frames of existing and newly collected datasets.
We select the most informative sequences from existing benchmarks, taking into account 1) challenge level, 2) discriminative strength, and 3) density of appearance variations.
By analyzing the results of 15 state-of-the-art trackers re-trained on the same data, we identify the most effective methods for robust tracking under each scenario.
- Score: 133.0931262969931
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Along with the rapid progress of visual tracking, existing benchmarks become
less informative due to redundancy of samples and weak discrimination between
current trackers, making evaluations on all datasets extremely time-consuming.
Thus, a small and informative benchmark, which covers all typical challenging
scenarios to facilitate assessing the tracker performance, is of great
interest. In this work, we develop a principled way to construct a small and
informative tracking benchmark (ITB) comprising 7% of the 1.2M frames of existing
and newly collected datasets, which enables efficient evaluation while ensuring
effectiveness. Specifically, we first design a quality assessment mechanism to
select the most informative sequences from existing benchmarks taking into
account 1) challenge level, 2) discriminative strength, and 3) density of
appearance variations. Furthermore, we collect additional sequences to ensure
the diversity and balance of tracking scenarios, leading to a total of 20
sequences for each scenario. By analyzing the results of 15 state-of-the-art
trackers re-trained on the same data, we determine the effective methods for
robust tracking under each scenario and highlight new challenges for future
research directions in this field.
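As a loose illustration of the selection mechanism described above, the three criteria (challenge level, discriminative strength, and density of appearance variations) could be folded into a single informativeness score used to rank sequences. The function names, weights, and data layout below are hypothetical, not the paper's actual quality-assessment mechanism:

```python
# Hypothetical sketch: score each candidate sequence by the three criteria
# from the abstract and keep the top-k per scenario (ITB keeps 20).
# All names and weights here are illustrative assumptions.

def informativeness(challenge, discrimination, variation_density,
                    weights=(0.4, 0.4, 0.2)):
    """Combine the three per-sequence criteria (each in [0, 1]) into one score."""
    w1, w2, w3 = weights
    return w1 * challenge + w2 * discrimination + w3 * variation_density

def select_top_sequences(sequences, k=20):
    """Rank sequences by informativeness and keep the k best."""
    ranked = sorted(
        sequences,
        key=lambda s: informativeness(s["challenge"],
                                      s["discrimination"],
                                      s["variation_density"]),
        reverse=True,
    )
    return ranked[:k]

pool = [
    {"name": "seq_a", "challenge": 0.9, "discrimination": 0.8, "variation_density": 0.7},
    {"name": "seq_b", "challenge": 0.2, "discrimination": 0.3, "variation_density": 0.1},
    {"name": "seq_c", "challenge": 0.6, "discrimination": 0.9, "variation_density": 0.5},
]
print([s["name"] for s in select_top_sequences(pool, k=2)])  # → ['seq_a', 'seq_c']
```

In practice the per-criterion values would themselves come from measured statistics (e.g. tracker failure rates for challenge level), which is where the substance of the paper's assessment mechanism lies.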
Related papers
- SmurfCat at SemEval-2024 Task 6: Leveraging Synthetic Data for Hallucination Detection [51.99159169107426]
We present our novel systems developed for the SemEval-2024 hallucination detection task.
Our investigation spans a range of strategies to compare model predictions with reference standards.
We introduce three distinct methods that exhibit strong performance metrics.
arXiv Detail & Related papers (2024-04-09T09:03:44Z)
- BAL: Balancing Diversity and Novelty for Active Learning [53.289700543331925]
We introduce a novel framework, Balancing Active Learning (BAL), which constructs adaptive sub-pools to balance diverse and uncertain data.
Our approach outperforms all established active learning methods on widely recognized benchmarks by 1.20%.
arXiv Detail & Related papers (2023-12-26T08:14:46Z)
- LocoMuJoCo: A Comprehensive Imitation Learning Benchmark for Locomotion [20.545058017790428]
Imitation Learning holds great promise for enabling agile locomotion in embodied agents.
We present a novel benchmark designed to facilitate rigorous evaluation and comparison of IL algorithms.
This benchmark encompasses a diverse set of environments, including quadrupeds, bipeds, and musculoskeletal human models.
arXiv Detail & Related papers (2023-11-04T19:41:50Z)
- TrackFlow: Multi-Object Tracking with Normalizing Flows [36.86830078167583]
We aim to extend tracking-by-detection to multi-modal settings.
A rough estimate of 3D information is also available and must be merged with other traditional metrics.
Our approach consistently enhances the performance of several tracking-by-detection algorithms.
arXiv Detail & Related papers (2023-08-22T15:40:03Z)
- Self-Supervised Representation Learning from Temporal Ordering of Automated Driving Sequences [49.91741677556553]
We propose TempO, a temporal ordering pretext task for pre-training region-level feature representations for perception tasks.
We embed each frame by an unordered set of proposal feature vectors, a representation that is natural for object detection or tracking systems.
Extensive evaluations on the BDD100K, nuImages, and MOT17 datasets show that our TempO pre-training approach outperforms single-frame self-supervised learning methods.
arXiv Detail & Related papers (2023-02-17T18:18:27Z)
- Adaptive Siamese Tracking with a Compact Latent Network [219.38172719948048]
We present an intuitive view that simplifies Siamese-based trackers by converting the tracking task into a classification task.
From this viewpoint, we perform an in-depth analysis of these trackers through visual simulations and real tracking examples.
We apply it to adjust three classical Siamese-based trackers, namely SiamRPN++, SiamFC, and SiamBAN.
arXiv Detail & Related papers (2023-02-02T08:06:02Z)
- One Class One Click: Quasi Scene-level Weakly Supervised Point Cloud Semantic Segmentation with Active Learning [29.493759008637532]
We introduce One Class One Click (OCOC), a low cost yet informative quasi scene-level label, which encapsulates point-level and scene-level annotations.
An active weakly supervised framework is proposed to leverage scarce labels by involving weak supervision from global and local perspectives.
It considerably outperforms genuine scene-level weakly supervised methods by up to 25% in terms of average F1 score.
arXiv Detail & Related papers (2022-11-23T01:23:26Z)
- Towards Sequence-Level Training for Visual Tracking [60.95799261482857]
This work introduces a sequence-level training strategy for visual tracking based on reinforcement learning.
Four representative tracking models, SiamRPN++, SiamAttn, TransT, and TrDiMP, consistently improve when the proposed methods are incorporated in training.
arXiv Detail & Related papers (2022-08-11T13:15:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.