AFAT: Adaptive Failure-Aware Tracker for Robust Visual Object Tracking
- URL: http://arxiv.org/abs/2005.13708v1
- Date: Wed, 27 May 2020 23:21:12 GMT
- Title: AFAT: Adaptive Failure-Aware Tracker for Robust Visual Object Tracking
- Authors: Tianyang Xu, Zhen-Hua Feng, Xiao-Jun Wu, Josef Kittler
- Abstract summary: Siamese approaches have achieved promising performance in visual object tracking recently.
The Siamese paradigm uses one-shot learning to model the online tracking task, which impedes online adaptation during tracking.
We propose a failure-aware system, based on convolutional and LSTM modules in the decision stage, enabling online reporting of potential tracking failures.
- Score: 46.82222972389531
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Siamese approaches have achieved promising performance in visual object
tracking recently. The key to the success of Siamese trackers is to learn
appearance-invariant feature embedding functions via pair-wise offline training
on large-scale video datasets. However, the Siamese paradigm uses one-shot
learning to model the online tracking task, which impedes online adaptation in
the tracking process. Additionally, the uncertainty of an online tracking
response is not measured, leading to the problem of ignoring potential
failures. In this paper, we advocate online adaptation in the tracking stage.
To this end, we propose a failure-aware system, realised by a Quality
Prediction Network (QPN), based on convolutional and LSTM modules in the
decision stage, enabling online reporting of potential tracking failures.
Specifically, sequential response maps from previous successive frames, as well
as the current frame, are collected to predict the tracking confidence,
realising spatio-temporal fusion at the decision level. In addition, we further
provide an Adaptive Failure-Aware Tracker (AFAT) by combining the state-of-the-art
Siamese trackers with our system. The experimental results obtained on standard
benchmarking datasets demonstrate the effectiveness of the proposed
failure-aware system and the merits of our AFAT tracker, with outstanding and
balanced performance in both accuracy and speed.
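The QPN described above consumes a temporal window of response maps and emits a tracking-confidence score. As a rough, dependency-free illustration of that decision-level idea (not the paper's actual convolutional/LSTM architecture), the sketch below scores each frame's response map by its peak-to-sidelobe ratio and averages the scores over a sliding window; the class name, window size, and threshold are invented for illustration only.

```python
from collections import deque

def peak_to_sidelobe_ratio(response):
    """Sharpness cue for a 2-D response map (list of lists of floats):
    how far the peak stands above the remaining (sidelobe) values."""
    flat = [v for row in response for v in row]
    peak = max(flat)
    rest = [v for v in flat if v != peak] or [peak]
    mean = sum(rest) / len(rest)
    var = sum((v - mean) ** 2 for v in rest) / len(rest)
    std = var ** 0.5 or 1e-8  # guard against a perfectly flat map
    return (peak - mean) / std

class FailureAwareScorer:
    """Toy stand-in for a quality-prediction module: aggregates
    per-frame response-map cues over a temporal window and flags
    frames whose smoothed confidence falls below a threshold."""
    def __init__(self, window=3, threshold=2.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def update(self, response):
        self.history.append(peak_to_sidelobe_ratio(response))
        confidence = sum(self.history) / len(self.history)
        return confidence, confidence < self.threshold  # (score, failure flag)
```

A sharply peaked response map yields a high score and no failure flag, while a diffuse, multi-modal map triggers the flag; the real QPN learns this decision from data rather than hand-crafting it.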
Related papers
- Robust Visual Tracking via Iterative Gradient Descent and Threshold Selection [4.978166837959101]
We introduce a novel robust linear regression estimator, which achieves favorable performance when the error vector follows an i.i.d. Gaussian-Laplacian distribution.
In addition, we extend IGDTS into a generative tracker, and apply the IGDTS-distance to measure the deviation between a sample and the model.
Experimental results on several challenging image sequences show that the proposed tracker outperforms existing trackers.
arXiv Detail & Related papers (2024-06-02T01:51:09Z) - RTracker: Recoverable Tracking via PN Tree Structured Memory [71.05904715104411]
We propose a recoverable tracking framework, RTracker, that uses a tree-structured memory to dynamically associate a tracker and a detector to enable self-recovery.
Specifically, we propose a Positive-Negative Tree-structured memory to chronologically store and maintain positive and negative target samples.
Our core idea is to use the support samples of positive and negative target categories to establish a relative distance-based criterion for a reliable assessment of target loss.
arXiv Detail & Related papers (2024-03-28T08:54:40Z) - Unsupervised Learning of Accurate Siamese Tracking [68.58171095173056]
We present a novel unsupervised tracking framework, in which we can learn temporal correspondence both on the classification branch and regression branch.
Our tracker outperforms preceding unsupervised methods by a substantial margin, performing on par with supervised methods on large-scale datasets such as TrackingNet and LaSOT.
arXiv Detail & Related papers (2022-04-04T13:39:43Z) - Dynamic Template Selection Through Change Detection for Adaptive Siamese
Tracking [7.662745552551165]
Single object tracking (SOT) remains a challenging task in real-world applications due to changes and deformations in a target object's appearance.
We propose a new method for dynamic sample selection and memory replay, preventing template corruption.
Our proposed method can be integrated into any object tracking algorithm that leverages online learning for model adaptation.
arXiv Detail & Related papers (2022-03-07T07:27:02Z) - Recursive Least-Squares Estimator-Aided Online Learning for Visual
Tracking [58.14267480293575]
We propose a simple yet effective online learning approach for few-shot online adaptation without requiring offline training.
It allows an in-built memory retention mechanism for the model to remember the knowledge about the object seen before.
We evaluate our approach based on two networks in the online learning families for tracking, i.e., multi-layer perceptrons in RT-MDNet and convolutional neural networks in DiMP.
arXiv Detail & Related papers (2021-12-28T06:51:18Z) - RSINet: Rotation-Scale Invariant Network for Online Visual Tracking [7.186849714896344]
Most network-based trackers perform tracking without model updates, and cannot adaptively learn target-specific variations.
In this paper, we propose a novel Rotation-Scale Invariant Network (RSINet) to address the above problem.
Our RSINet tracker consists of a target-distractor discrimination branch and a rotation-scale estimation branch; the rotation and scale knowledge can be explicitly learned by a multi-task learning method in an end-to-end manner.
In addition, the tracking model is adaptively optimized and updated under temporal energy control, which ensures model stability and reliability, as well as high tracking
arXiv Detail & Related papers (2020-11-18T08:19:14Z) - Self-supervised Object Tracking with Cycle-consistent Siamese Networks [55.040249900677225]
We exploit an end-to-end Siamese network in a cycle-consistent self-supervised framework for object tracking.
We propose to integrate a Siamese region proposal and mask regression network in our tracking framework so that a fast and more accurate tracker can be learned without the annotation of each frame.
arXiv Detail & Related papers (2020-08-03T04:10:38Z) - Unsupervised Deep Representation Learning for Real-Time Tracking [137.69689503237893]
We propose an unsupervised learning method for visual tracking.
The motivation of our unsupervised learning is that a robust tracker should be effective in bidirectional tracking.
We build our framework on a Siamese correlation filter network, and propose a multi-frame validation scheme and a cost-sensitive loss to facilitate unsupervised learning.
arXiv Detail & Related papers (2020-07-22T08:23:12Z)
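The bidirectional-tracking motivation in the last entry, that a robust tracker should end up where it started after tracking forward and then backward, reduces to a simple round-trip check. The sketch below is a minimal illustration of that consistency measure for a point target; the function name and the representation of per-frame motion as displacement vectors are assumptions made for illustration, not the paper's formulation.

```python
def round_trip_error(forward_steps, backward_steps, start):
    """Forward-backward consistency check: apply the per-frame
    displacements estimated in the forward pass, then those
    estimated in the backward pass, and measure how far the
    final position drifts from the starting point."""
    x, y = start
    for dx, dy in forward_steps:      # track forward through the clip
        x, y = x + dx, y + dy
    for dx, dy in backward_steps:     # track back through the reversed clip
        x, y = x + dx, y + dy
    return ((x - start[0]) ** 2 + (y - start[1]) ** 2) ** 0.5
```

A perfectly consistent tracker yields zero error; unsupervised methods of this kind penalise the residual drift as a training signal, since it requires no frame-level annotation.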
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.