Context-aware Visual Tracking with Joint Meta-updating
- URL: http://arxiv.org/abs/2204.01513v1
- Date: Mon, 4 Apr 2022 14:16:00 GMT
- Title: Context-aware Visual Tracking with Joint Meta-updating
- Authors: Qiuhong Shen, Xin Li, Fanyang Meng, Yongsheng Liang
- Abstract summary: We propose a context-aware tracking model that optimizes the tracker over the representation space, jointly meta-updating both branches by exploiting information along the whole sequence.
The proposed tracking method achieves an EAO score of 0.514 on VOT2018 at a speed of 40 FPS, demonstrating its capability to improve the accuracy and robustness of the underlying tracker with only a slight speed drop.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visual object tracking acts as a pivotal component in various emerging video
applications. Despite the numerous developments in visual tracking, existing
deep trackers are still likely to fail when tracking objects that undergo
dramatic appearance variation. These deep trackers usually perform no online
update, or update only a single sub-branch of the tracking model, so they
cannot adapt to the appearance variation of objects. Efficient updating
methods are therefore crucial for tracking, yet previous meta-updaters
optimize trackers directly in parameter space, which is prone to over-fitting
and even collapse on longer sequences. To address these issues, we propose a
context-aware tracking model that optimizes the tracker over the
representation space, jointly meta-updating both branches by exploiting
information along the whole sequence, so that it avoids the over-fitting
problem. First, we note that the
embedded features of the localization branch and the box-estimation branch,
focusing on the local and global information of the target, are effective
complements to each other. Based on this insight, we devise a
context-aggregation module to fuse information in historical frames, followed
by a context-aware module to learn affinity vectors for both branches of the
tracker. Besides, we develop a dedicated meta-learning scheme to enable fast
and stable updating with limited training samples. The proposed tracking
method achieves an EAO score of 0.514 on VOT2018 at a speed of 40 FPS,
demonstrating its capability to improve the accuracy and robustness of the
underlying tracker with only a slight speed drop.
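The abstract describes updating the tracker in representation space: historical frames are fused by a context-aggregation module, a context-aware module maps the fused context to affinity vectors, and both branches' embedded features are re-weighted rather than rewriting model parameters. The paper does not include code, so the following is only a rough numpy sketch of that general flow; all function names, shapes, and the mean-based aggregation are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def aggregate_context(history, weights=None):
    """Fuse per-frame feature vectors from historical frames into one
    context vector (a weighted mean standing in for the paper's
    context-aggregation module)."""
    history = np.asarray(history)              # (T, C)
    if weights is None:
        weights = np.ones(len(history)) / len(history)
    return weights @ history                   # (C,)

def affinity_vector(context, W, b):
    """Map the aggregated context to a per-channel affinity vector in
    (0, 1) -- a stand-in for the context-aware module."""
    return sigmoid(W @ context + b)            # (C,)

def meta_update_features(branch_feat, affinity):
    """Update a branch's embedded features in representation space by
    channel-wise re-weighting, instead of optimizing parameters."""
    return branch_feat * affinity

# Toy usage with random features; C channels, T historical frames.
rng = np.random.default_rng(0)
C, T = 8, 5
history = rng.normal(size=(T, C))
ctx = aggregate_context(history)
W, b = rng.normal(size=(C, C)) * 0.1, np.zeros(C)
aff = affinity_vector(ctx, W, b)
loc_feat = meta_update_features(rng.normal(size=C), aff)  # localization branch
box_feat = meta_update_features(rng.normal(size=C), aff)  # box-estimation branch
```

Because the affinity values stay in (0, 1), the update can only attenuate feature channels rather than push parameters arbitrarily far, which loosely mirrors why a representation-space update is less prone to collapse than a parameter-space one.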
Related papers
- Autoregressive Queries for Adaptive Tracking with Spatio-Temporal Transformers [55.46413719810273]
Rich spatio-temporal information is crucial for modeling complicated target appearance in visual tracking.
Our method improves the tracker's performance on six popular tracking benchmarks.
arXiv (2024-03-15)
- Tracking with Human-Intent Reasoning [64.69229729784008]
This work proposes a new tracking task -- Instruction Tracking.
It provides implicit tracking instructions that require trackers to perform tracking in video frames automatically.
TrackGPT is capable of performing complex reasoning-based tracking.
arXiv (2023-12-29)
- Target-Aware Tracking with Long-term Context Attention [8.20858704675519]
Long-term context attention (LCA) module can perform extensive information fusion on the target and its context from long-term frames.
LCA uses the target state from the previous frame to exclude the interference of similar objects and complex backgrounds.
Our tracker achieves state-of-the-art performance on multiple benchmarks, with 71.1% AUC, 89.3% NP, and 73.0% AO on LaSOT, TrackingNet, and GOT-10k, respectively.
arXiv (2023-02-27)
- Tracking by Associating Clips [110.08925274049409]
In this paper, we investigate an alternative by treating object association as clip-wise matching.
Our new perspective views a single long video sequence as multiple short clips, and then the tracking is performed both within and between the clips.
The benefits of this new approach are twofold. First, our method is robust to tracking-error accumulation and propagation, as the video chunking allows bypassing interrupted frames.
Second, multi-frame information is aggregated during the clip-wise matching, resulting in more accurate long-range track association than current frame-wise matching.
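The clip-wise idea above — view one long sequence as short clips, aggregate information within each clip, then associate between clips — can be sketched in a few lines. This is a hypothetical toy illustration of the general scheme, not the paper's method: the mean-embedding aggregation, the distance threshold, and the greedy linking are all our own simplifying assumptions.

```python
import numpy as np

def chunk_frames(num_frames, clip_len):
    """Split a long sequence into short clips (lists of frame indices)."""
    return [list(range(s, min(s + clip_len, num_frames)))
            for s in range(0, num_frames, clip_len)]

def clip_embedding(frame_embs):
    """Aggregate multi-frame information within a clip (here: a mean)."""
    return np.mean(frame_embs, axis=0)

def link_clips(clip_embs, threshold=0.5):
    """Greedy between-clip association: link consecutive clips whose
    aggregated embeddings are close; a large jump (e.g. an occlusion
    or interrupted frames) starts a new tracklet instead of letting
    the error propagate into the old one."""
    tracks, current = [], [0]
    for i in range(1, len(clip_embs)):
        if np.linalg.norm(np.asarray(clip_embs[i]) -
                          np.asarray(clip_embs[i - 1])) < threshold:
            current.append(i)
        else:
            tracks.append(current)
            current = [i]
    tracks.append(current)
    return tracks

# Toy usage: 10 frames in clips of 4; the third clip's embedding jumps,
# so it is split into a separate tracklet.
clips = chunk_frames(10, 4)
embs = [np.zeros(2), np.full(2, 0.1), np.full(2, 5.0)]
tracks = link_clips(embs)
```

The point of the sketch is the structure, not the matching rule: because association happens per clip, one corrupted frame only affects its own clip's embedding rather than every subsequent frame-to-frame match.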
arXiv (2022-12-20)
- Learning Dynamic Compact Memory Embedding for Deformable Visual Object Tracking [82.34356879078955]
We propose a compact memory embedding to enhance the discrimination of the segmentation-based deformable visual tracking method.
Our method outperforms excellent segmentation-based trackers, i.e., D3S and SiamMask, on the DAVIS 2017 benchmark.
arXiv (2021-11-23)
- STMTrack: Template-free Visual Tracking with Space-time Memory Networks [42.06375415765325]
Existing trackers with template updating mechanisms rely on time-consuming numerical optimization and complex hand-designed strategies to achieve competitive performance.
We propose a novel tracking framework built on top of a space-time memory network that is competent to make full use of historical information related to the target.
Specifically, a novel memory mechanism is introduced, which stores the historical information of the target to guide the tracker to focus on the most informative regions in the current frame.
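A space-time memory read of the kind summarized above is typically an attention operation: the current-frame feature queries stored historical keys and retrieves a weighted sum of stored values. The following is a minimal generic sketch of such a memory read, with shapes and naming assumed by us; it is not STMTrack's actual architecture.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(query, mem_keys, mem_values):
    """Attention-style read over stored historical target features:
    the query (current-frame feature, shape (C,)) attends to memory
    keys (M, C) and returns a weighted sum of memory values (M, C)."""
    scores = mem_keys @ query / np.sqrt(len(query))
    weights = softmax(scores)
    return weights @ mem_values

# Toy usage: the query matches the first memory slot more strongly,
# so the readout is dominated by that slot's value.
query = np.array([1.0, 0.0])
keys = np.array([[1.0, 0.0], [0.0, 1.0]])
values = np.array([[10.0, 0.0], [0.0, 10.0]])
out = memory_read(query, keys, values)
```

The memory mechanism's role in the summary — guiding the tracker toward the most informative regions — corresponds here to the softmax weights, which concentrate the readout on stored entries most similar to the current frame.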
arXiv (2021-04-01)
- Learning to Track with Object Permanence [61.36492084090744]
We introduce an end-to-end trainable approach for joint object detection and tracking.
Our model, trained jointly on synthetic and real data, outperforms the state of the art on KITTI, and MOT17 datasets.
arXiv (2021-03-26)
- High-Performance Long-Term Tracking with Meta-Updater [75.80564183653274]
Long-term visual tracking has drawn increasing attention because it is much closer to practical applications than short-term tracking.
Most top-ranked long-term trackers adopt offline-trained Siamese architectures and thus cannot benefit from the great progress of short-term trackers with online updates.
We propose a novel offline-trained Meta-Updater to address an important but unsolved problem: Is the tracker ready for updating in the current frame?
arXiv (2020-04-01)
This list is automatically generated from the titles and abstracts of the papers in this site.