BackTrack: Robust template update via Backward Tracking of candidate template
- URL: http://arxiv.org/abs/2308.10604v1
- Date: Mon, 21 Aug 2023 10:00:59 GMT
- Title: BackTrack: Robust template update via Backward Tracking of candidate template
- Authors: Dongwook Lee, Wonjun Choi, Seohyung Lee, ByungIn Yoo, Eunho Yang, Seongju Hwang
- Abstract summary: BackTrack is a generic template update scheme and is applicable to any template-based tracker.
BackTrack is a robust and reliable method to quantify the confidence of the candidate template by backward tracking it on the past frames.
- Score: 30.38433988212259
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variations of target appearance such as deformations, illumination variance,
occlusion, etc., are the major challenges of visual object tracking that
negatively impact the performance of a tracker. An effective method to tackle
these challenges is template update, which updates the template to reflect the
change of appearance in the target object during tracking. However, with
template updates, inadequate quality of new templates or inappropriate timing
of updates may induce a model drift problem, which severely degrades the
tracking performance. Here, we propose BackTrack, a robust and reliable method
to quantify the confidence of the candidate template by backward tracking it on
the past frames. Based on the confidence score of candidates from BackTrack, we
can update the template with a reliable candidate at the right time while
rejecting unreliable candidates. BackTrack is a generic template update scheme
and is applicable to any template-based tracker. Extensive experiments on
various tracking benchmarks verify the effectiveness of BackTrack over existing
template update algorithms, as it achieves SOTA performance on various tracking
benchmarks.
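
As a rough illustration of the update rule described in the abstract, the sketch below backward-tracks a candidate template over the most recent frames and accepts the update only when the backward trajectory agrees with the forward tracking results already stored. This is a minimal sketch under assumptions, not the authors' implementation: the `tracker` interface (`track(template, frame)` returning a box, plus a `template` attribute), the IoU-based agreement score, and the acceptance threshold are hypothetical stand-ins.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def backtrack_confidence(tracker, candidate_template, past_frames, past_boxes):
    """Backward-track the candidate template over past frames and score it by
    its agreement (mean IoU) with the boxes the forward pass already produced.
    `tracker.track(template, frame) -> box` is an assumed interface."""
    ious = [
        iou(tracker.track(candidate_template, frame), fwd_box)
        for frame, fwd_box in zip(reversed(past_frames), reversed(past_boxes))
    ]
    return float(np.mean(ious)) if ious else 0.0

def maybe_update_template(tracker, candidate_template, past_frames, past_boxes,
                          threshold=0.6):
    """Accept the candidate only when its backward-tracking confidence is high;
    otherwise keep the current template (rejecting an unreliable candidate)."""
    conf = backtrack_confidence(tracker, candidate_template, past_frames, past_boxes)
    if conf >= threshold:
        tracker.template = candidate_template  # reliable candidate: update now
    return conf
```

In this form a low-confidence candidate is simply rejected, which mirrors the abstract's point that a low-quality candidate or a badly timed update should not overwrite the current template.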
Related papers
- RTracker: Recoverable Tracking via PN Tree Structured Memory [71.05904715104411]
We propose a recoverable tracking framework, RTracker, that uses a tree-structured memory to dynamically associate a tracker and a detector to enable self-recovery.
Specifically, we propose a Positive-Negative Tree-structured memory to chronologically store and maintain positive and negative target samples.
Our core idea is to use the support samples of positive and negative target categories to establish a relative distance-based criterion for a reliable assessment of target loss.
arXiv Detail & Related papers (2024-03-28T08:54:40Z) - Autoregressive Queries for Adaptive Tracking with Spatio-Temporal Transformers [55.46413719810273]
Rich spatio-temporal information is crucial for capturing the complicated target appearance in visual tracking.
Our method improves the tracker's performance on six popular tracking benchmarks.
arXiv Detail & Related papers (2024-03-15T02:39:26Z) - ACTrack: Adding Spatio-Temporal Condition for Visual Object Tracking [0.5371337604556311]
Efficiently modeling spatio-temporal relations of objects is a key challenge in visual object tracking (VOT).
Existing methods track by appearance-based similarity or long-term relation modeling, resulting in rich temporal contexts between consecutive frames being easily overlooked.
In this paper we present ACTrack, a new tracking framework with an additive spatio-temporal condition. It preserves the quality and capabilities of the pre-trained backbone by freezing its parameters, and trains a lightweight additive net to model spatio-temporal relations in tracking.
We design an additive siamese convolutional network to ensure the integrity of spatial features and the temporal sequence.
arXiv Detail & Related papers (2024-02-27T07:34:08Z) - Tracking with Human-Intent Reasoning [64.69229729784008]
This work proposes a new tracking task -- Instruction Tracking.
It involves providing implicit tracking instructions that require the trackers to perform tracking automatically in video frames.
TrackGPT is capable of performing complex reasoning-based tracking.
arXiv Detail & Related papers (2023-12-29T03:22:18Z) - BACTrack: Building Appearance Collection for Aerial Tracking [13.785254511683966]
Building Appearance Collection Tracking builds a dynamic collection of target templates online and performs efficient multi-template matching to achieve robust tracking (a generic multi-template matching sketch follows this list).
BACTrack achieves top performance on four challenging aerial tracking benchmarks while maintaining an impressive speed of over 87 FPS on a single GPU.
arXiv Detail & Related papers (2023-12-11T05:55:59Z) - Context-aware Visual Tracking with Joint Meta-updating [11.226947525556813]
We propose a context-aware tracking model to optimize the tracker over the representation space, which jointly meta-updates both branches by exploiting information along the whole sequence.
The proposed tracking method achieves an EAO score of 0.514 on VOT2018 at a speed of 40 FPS, demonstrating its capability of improving the accuracy and robustness of the underlying tracker with little speed drop.
arXiv Detail & Related papers (2022-04-04T14:16:00Z) - Unsupervised Learning of Accurate Siamese Tracking [68.58171095173056]
We present a novel unsupervised tracking framework, in which we can learn temporal correspondence both on the classification branch and regression branch.
Our tracker outperforms preceding unsupervised methods by a substantial margin, performing on par with supervised methods on large-scale datasets such as TrackingNet and LaSOT.
arXiv Detail & Related papers (2022-04-04T13:39:43Z) - Generative Target Update for Adaptive Siamese Tracking [7.662745552551165]
Siamese trackers perform similarity matching with templates (i.e., target models) to localize objects within a search region.
Several strategies have been proposed in the literature to update a template based on the tracker output, typically extracted from the target search region in the current frame.
This paper proposes a model adaptation method for Siamese trackers that uses a generative model to produce a synthetic template from the object search regions of several previous frames.
arXiv Detail & Related papers (2022-02-21T00:22:49Z) - Learning Dynamic Compact Memory Embedding for Deformable Visual Object Tracking [82.34356879078955]
We propose a compact memory embedding to enhance the discrimination of the segmentation-based deformable visual tracking method.
Our method outperforms excellent segmentation-based trackers, i.e., D3S and SiamMask, on the DAVIS 2017 benchmark.
arXiv Detail & Related papers (2021-11-23T03:07:12Z) - AFAT: Adaptive Failure-Aware Tracker for Robust Visual Object Tracking [46.82222972389531]
Siamese approaches have achieved promising performance in visual object tracking recently.
The Siamese paradigm uses one-shot learning to model the online tracking task, which impedes online adaptation in the tracking process.
We propose a failure-aware system, based on convolutional and LSTM modules in the decision stage, enabling online reporting of potential tracking failures.
arXiv Detail & Related papers (2020-05-27T23:21:12Z)
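
Several of the entries above, notably BACTrack's online template collection and the generative target update for Siamese trackers, rely on matching the search region against more than one template. The sketch below shows only that generic pattern; it is a hypothetical NumPy illustration, not code from any of the listed papers, and the feature dimensionality, collection size, and cosine-similarity score are assumptions.

```python
import numpy as np
from collections import deque

class TemplateCollection:
    """Bounded online collection of template feature vectors (illustrative only)."""

    def __init__(self, max_size=8):
        self.templates = deque(maxlen=max_size)  # oldest template is dropped when full

    def add(self, feat):
        # Store an L2-normalized template feature.
        self.templates.append(feat / (np.linalg.norm(feat) + 1e-9))

    def match(self, search_feat):
        """Return the best cosine similarity between the search-region feature
        and any stored template (multi-template matching)."""
        if not self.templates:
            return 0.0
        search_feat = search_feat / (np.linalg.norm(search_feat) + 1e-9)
        return max(float(np.dot(t, search_feat)) for t in self.templates)

# Usage: add the initial template, then query each frame's search feature.
collection = TemplateCollection(max_size=8)
collection.add(np.random.randn(256))            # stand-in for a real template feature
score = collection.match(np.random.randn(256))  # best similarity across templates
```

A real tracker would replace the random vectors with features extracted from the template and search-region crops; the bounded deque simply discards the oldest template once the collection is full.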
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.