STMTrack: Template-free Visual Tracking with Space-time Memory Networks
- URL: http://arxiv.org/abs/2104.00324v2
- Date: Fri, 2 Apr 2021 09:02:30 GMT
- Title: STMTrack: Template-free Visual Tracking with Space-time Memory Networks
- Authors: Zhihong Fu, Qingjie Liu, Zehua Fu, Yunhong Wang
- Abstract summary: Existing trackers with template updating mechanisms rely on time-consuming numerical optimization and complex hand-designed strategies to achieve competitive performance.
We propose a novel tracking framework built on top of a space-time memory network that can make full use of historical information related to the target.
Specifically, a novel memory mechanism is introduced, which stores the historical information of the target to guide the tracker to focus on the most informative regions in the current frame.
- Score: 42.06375415765325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Boosting the performance of offline-trained Siamese trackers is becoming
harder, since the fixed information in the template cropped from the first frame
has been almost thoroughly mined, yet such trackers remain poorly capable of
resisting target appearance changes. Existing trackers with template updating
mechanisms rely on time-consuming numerical optimization and complex
hand-designed strategies to achieve competitive performance, hindering them
from real-time tracking and practical applications. In this paper, we propose a
novel tracking framework built on top of a space-time memory network that can
make full use of historical information related to the target for
better adapting to appearance variations during tracking. Specifically, a novel
memory mechanism is introduced, which stores the historical information of the
target to guide the tracker to focus on the most informative regions in the
current frame. Furthermore, the pixel-level similarity computation of the
memory network enables our tracker to generate much more accurate bounding
boxes of the target. Extensive experiments and comparisons with many
competitive trackers on challenging large-scale benchmarks, OTB-2015,
TrackingNet, GOT-10k, LaSOT, UAV123, and VOT2018, show that, without bells and
whistles, our tracker outperforms all previous state-of-the-art real-time
methods while running at 37 FPS. The code is available at
https://github.com/fzh0917/STMTrack.
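To make the abstract's memory read concrete, below is a minimal PyTorch-style sketch of a pixel-level space-time memory read. The function name, tensor shapes, and the scaled dot-product formulation are illustrative assumptions for exposition only, not code taken from the STMTrack repository.

```python
# Minimal sketch (assumptions, not STMTrack's actual code): every pixel of the
# current frame queries every pixel of the memorized frames, and the matched
# memory values are aggregated to highlight the most informative regions.
import torch
import torch.nn.functional as F

def memory_read(mem_key, mem_val, qry_key):
    """Pixel-level space-time memory read.

    mem_key: (B, C_k, T, H, W) keys of T memorized (historical) frames
    mem_val: (B, C_v, T, H, W) values of T memorized frames (target-aware features)
    qry_key: (B, C_k, H, W)    keys of the current (query) frame
    Returns  (B, C_v, H, W)    per-pixel read-out of the memory
    """
    B, Ck, T, H, W = mem_key.shape
    Cv = mem_val.shape[1]

    mk = mem_key.flatten(2)   # (B, C_k, T*H*W)
    mv = mem_val.flatten(2)   # (B, C_v, T*H*W)
    qk = qry_key.flatten(2)   # (B, C_k, H*W)

    # Pixel-level similarity between every query pixel and every memory pixel.
    sim = torch.einsum('bck,bcq->bkq', mk, qk) / Ck ** 0.5  # (B, T*H*W, H*W)
    attn = F.softmax(sim, dim=1)  # normalize over the memory pixels

    # Weighted sum of memory values: historical target information guides the
    # tracker toward the most informative regions of the current frame.
    out = torch.einsum('bck,bkq->bcq', mv, attn)  # (B, C_v, H*W)
    return out.view(B, Cv, H, W)
```

In a full tracker, this read-out would typically be fused with the query-frame features and fed to classification and regression heads that predict the target box; pixel-level matching of this kind is what allows tighter bounding boxes than a single fixed template.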
Related papers
- Temporal Correlation Meets Embedding: Towards a 2nd Generation of JDE-based Real-Time Multi-Object Tracking [52.04679257903805]
Joint Detection and Embedding (JDE) trackers have demonstrated excellent performance in Multi-Object Tracking (MOT) tasks.
Our tracker, named TCBTrack, achieves state-of-the-art performance on multiple public benchmarks.
arXiv Detail & Related papers (2024-07-19T07:48:45Z) - Exploring Dynamic Transformer for Efficient Object Tracking [58.120191254379854]
We propose DyTrack, a dynamic transformer framework for efficient tracking.
DyTrack automatically learns to configure proper reasoning routes for various inputs, gaining better utilization of the available computational budget.
Experiments on multiple benchmarks demonstrate that DyTrack achieves promising speed-precision trade-offs with only a single model.
arXiv Detail & Related papers (2024-03-26T12:31:58Z) - HIPTrack: Visual Tracking with Historical Prompts [37.85656595341516]
We show that by providing a tracker that follows the Siamese paradigm with precise and updated historical information, a significant performance improvement can be achieved.
We build a novel tracker called HIPTrack based on the historical prompt network, which achieves considerable performance improvements without the need to retrain the entire model.
arXiv Detail & Related papers (2023-11-03T17:54:59Z) - Target-Aware Tracking with Long-term Context Attention [8.20858704675519]
The long-term context attention (LCA) module can perform extensive information fusion on the target and its context from long-term frames.
LCA uses the target state from the previous frame to exclude the interference of similar objects and complex backgrounds.
Our tracker achieves state-of-the-art performance on multiple benchmarks: 71.1% AUC on LaSOT, 89.3% NP on TrackingNet, and 73.0% AO on GOT-10k.
arXiv Detail & Related papers (2023-02-27T14:40:58Z) - Context-aware Visual Tracking with Joint Meta-updating [11.226947525556813]
We propose a context-aware tracking model to optimize the tracker over the representation space, which jointly meta-updates both branches by exploiting information along the whole sequence.
The proposed tracking method achieves an EAO score of 0.514 on VOT2018 while running at 40 FPS, demonstrating its ability to improve the accuracy and robustness of the underlying tracker with little loss of speed.
arXiv Detail & Related papers (2022-04-04T14:16:00Z) - Learning Dynamic Compact Memory Embedding for Deformable Visual Object Tracking [82.34356879078955]
We propose a compact memory embedding to enhance the discrimination of the segmentation-based deformable visual tracking method.
Our method outperforms excellent segmentation-based trackers, i.e., D3S and SiamMask, on the DAVIS 2017 benchmark.
arXiv Detail & Related papers (2021-11-23T03:07:12Z) - Learning Spatio-Appearance Memory Network for High-Performance Visual Tracking [79.80401607146987]
Existing object trackers usually learn a bounding-box-based template to match visual targets across frames, which cannot accurately capture a pixel-wise representation.
This paper presents a novel segmentation-based tracking architecture, which is equipped with a spatio-appearance memory network to learn accurate spatio-temporal correspondence.
arXiv Detail & Related papers (2020-09-21T08:12:02Z) - DMV: Visual Object Tracking via Part-level Dense Memory and Voting-based Retrieval [61.366644088881735]
We propose a novel memory-based tracker via part-level dense memory and voting-based retrieval, called DMV.
We also propose a novel voting mechanism for the memory reading to filter out unreliable information in the memory.
arXiv Detail & Related papers (2020-03-20T10:05:30Z)