Revisiting the details when evaluating a visual tracker
- URL: http://arxiv.org/abs/2102.06733v1
- Date: Mon, 25 Jan 2021 13:43:27 GMT
- Title: Revisiting the details when evaluating a visual tracker
- Authors: Zan Huang
- Abstract summary: This report focuses on single object tracking and revisits the details of tracker evaluation based on the widely used OTB\cite{otb} benchmark.
Experimental results suggest that there may not be an absolute winner among tracking algorithms.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Visual tracking algorithms are widely adopted in various applications; there have been several benchmarks and many tracking algorithms, with more expected to appear in the future. In this report, I focus on single object tracking and revisit the details of tracker evaluation based on the widely used OTB\cite{otb} benchmark, introducing a simpler, more accurate, and extensible method for tracker evaluation and comparison. Experimental results suggest that there may not be an absolute winner among tracking algorithms; detailed analysis is needed to select suitable trackers for specific use cases.
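The OTB benchmark referenced above conventionally scores a tracker with two curves: a precision plot (fraction of frames where the center of the predicted box falls within a pixel threshold, typically 20 px, of the ground-truth center) and a success plot (fraction of frames whose bounding-box overlap exceeds a threshold, summarized as area under the curve). As a minimal illustrative sketch, not the paper's proposed method, the standard computation might look like:

```python
import numpy as np

def center_error(pred, gt):
    # Boxes as (x, y, w, h); Euclidean distance between box centers in pixels.
    pc = pred[:, :2] + pred[:, 2:] / 2
    gc = gt[:, :2] + gt[:, 2:] / 2
    return np.linalg.norm(pc - gc, axis=1)

def iou(pred, gt):
    # Intersection-over-union of axis-aligned (x, y, w, h) boxes.
    x1 = np.maximum(pred[:, 0], gt[:, 0])
    y1 = np.maximum(pred[:, 1], gt[:, 1])
    x2 = np.minimum(pred[:, 0] + pred[:, 2], gt[:, 0] + gt[:, 2])
    y2 = np.minimum(pred[:, 1] + pred[:, 3], gt[:, 1] + gt[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    union = pred[:, 2] * pred[:, 3] + gt[:, 2] * gt[:, 3] - inter
    return inter / union

def otb_scores(pred, gt):
    # Precision: fraction of frames with center error <= 20 px.
    precision = float((center_error(pred, gt) <= 20).mean())
    # Success AUC: mean success rate over overlap thresholds in [0, 1].
    thresholds = np.linspace(0, 1, 21)
    overlaps = iou(pred, gt)
    success_auc = float(np.mean([(overlaps > t).mean() for t in thresholds]))
    return precision, success_auc
```

The 20 px threshold and the 21 evenly spaced overlap thresholds follow the common OTB convention; whether overlap is compared with a strict or non-strict inequality at each threshold varies between toolkits.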
Related papers
- RTracker: Recoverable Tracking via PN Tree Structured Memory [71.05904715104411]
We propose a recoverable tracking framework, RTracker, that uses a tree-structured memory to dynamically associate a tracker and a detector to enable self-recovery.
Specifically, we propose a Positive-Negative Tree-structured memory to chronologically store and maintain positive and negative target samples.
Our core idea is to use the support samples of positive and negative target categories to establish a relative distance-based criterion for a reliable assessment of target loss.
arXiv Detail & Related papers (2024-03-28T08:54:40Z) - Tracking with Human-Intent Reasoning [64.69229729784008]
This work proposes a new tracking task -- Instruction Tracking.
It involves providing implicit tracking instructions that require the trackers to perform tracking automatically in video frames.
TrackGPT is capable of performing complex reasoning-based tracking.
arXiv Detail & Related papers (2023-12-29T03:22:18Z) - OmniTracker: Unifying Object Tracking by Tracking-with-Detection [119.51012668709502]
OmniTracker is presented to resolve all the tracking tasks with a fully shared network architecture, model weights, and inference pipeline.
Experiments on 7 tracking datasets, including LaSOT, TrackingNet, DAVIS16-17, MOT17, MOTS20, and YTVIS19, demonstrate that OmniTracker achieves on-par or even better results than both task-specific and unified tracking models.
arXiv Detail & Related papers (2023-03-21T17:59:57Z) - Detection-aware multi-object tracking evaluation [1.7880586070278561]
We propose a novel performance measure, named Tracking Effort Measure (TEM), to evaluate trackers that use different detectors.
TEM can quantify the effort done by the tracker with a reduced correlation on the input detections.
arXiv Detail & Related papers (2022-12-16T15:35:34Z) - Beyond Greedy Search: Tracking by Multi-Agent Reinforcement Learning-based Beam Search [103.53249725360286]
Existing trackers usually select the location or proposal with the maximum score as the tracking result for each frame.
We propose a novel multi-agent reinforcement learning based beam search strategy (termed BeamTracking) to address this issue.
arXiv Detail & Related papers (2022-05-19T16:35:36Z) - CoCoLoT: Combining Complementary Trackers in Long-Term Visual Tracking [17.2557973738397]
We propose a framework, named CoCoLoT, that combines the characteristics of complementary visual trackers to achieve enhanced long-term tracking performance.
CoCoLoT perceives whether the trackers are following the target object through an online learned deep verification model, and accordingly activates a decision policy.
The proposed methodology is evaluated extensively and the comparison with several other solutions reveals that it competes favourably with the state-of-the-art on the most popular long-term visual tracking benchmarks.
arXiv Detail & Related papers (2022-05-09T13:25:13Z) - Visual Object Tracking with Discriminative Filters and Siamese Networks: A Survey and Outlook [97.27199633649991]
Discriminative Correlation Filters (DCFs) and deep Siamese Networks (SNs) have emerged as dominating tracking paradigms.
This survey presents a systematic and thorough review of more than 90 DCFs and Siamese trackers, based on results in nine tracking benchmarks.
arXiv Detail & Related papers (2021-12-06T07:57:10Z) - Predictive Visual Tracking: A New Benchmark and Baseline Approach [27.87099869398515]
In real-world scenarios, the onboard processing time of the image stream inevitably leads to a discrepancy between the tracking results and the real-world state.
Existing visual tracking benchmarks commonly run the trackers offline and ignore such latency in the evaluation.
In this work, we aim to deal with a more realistic problem of latency-aware tracking.
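The idea of latency-aware evaluation described above can be sketched as follows: instead of scoring each prediction against the frame it was computed from, score each incoming frame against the most recent prediction that had actually finished by the time the frame arrived. This is a hypothetical illustration of the general concept, with an assumed fallback to the first prediction before any output is available, not the benchmark's actual protocol:

```python
import numpy as np

def latency_aware_predictions(preds, finish_times, frame_times):
    # preds[k] became available at finish_times[k] (sorted ascending);
    # frame i arrives at frame_times[i]. For each frame, return the most
    # recent prediction already available, falling back to preds[0]
    # before any prediction has finished (an assumed convention).
    out = []
    for t in frame_times:
        k = np.searchsorted(finish_times, t, side="right") - 1
        out.append(preds[max(k, 0)])
    return out
```

Under this scheme a slow tracker is penalized because its stale predictions are compared against later, possibly displaced, ground truth, which is exactly the discrepancy offline evaluation ignores.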
arXiv Detail & Related papers (2021-03-08T01:50:05Z) - Multi-modal Visual Tracking: Review and Experimental Comparison [85.20414397784937]
We summarize multi-modal tracking algorithms, especially visible-depth (RGB-D) tracking and visible-thermal (RGB-T) tracking.
We conduct experiments to analyze the effectiveness of trackers on five datasets.
arXiv Detail & Related papers (2020-12-08T02:39:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.