Improving Siamese Based Trackers with Light or No Training through Multiple Templates and Temporal Network
- URL: http://arxiv.org/abs/2211.13812v2
- Date: Tue, 15 Oct 2024 07:42:07 GMT
- Title: Improving Siamese Based Trackers with Light or No Training through Multiple Templates and Temporal Network
- Authors: Ali Sekhavati, Won-Sook Lee,
- Abstract summary: We propose a framework with two ideas for Siamese-based trackers:
(i) extending the number of templates in a way that removes the need to retrain the network, and
(ii) a lightweight temporal network with a novel architecture that focuses on both local and global information.
- Score: 0.0
- Abstract: High computational power and significant time are usually needed to train a deep learning based tracker on large datasets, and depending on many factors, training might not always be an option. In this paper, we propose a framework with two ideas for Siamese-based trackers: (i) extending the number of templates in a way that removes the need to retrain the network, and (ii) a lightweight temporal network with a novel architecture that focuses on both local and global information and can be used independently of the tracker. Most Siamese-based trackers rely only on the first frame as the ground truth for the object and struggle when the target's appearance changes significantly in subsequent frames in the presence of similar distractors. Some trackers use multiple templates, but they mostly rely on constant thresholds to update them, or they replace only those templates with low similarity scores with more similar ones. Unlike previous works, we use adaptive thresholds that update the template bag with similar templates as well as templates that are slightly diverse; adaptive thresholds also yield an overall improvement over constant ones. In addition, mixing the feature maps obtained from each template in the last stage of the network removes the need to retrain trackers. Our proposed lightweight temporal network, CombiNet, learns the path history of different objects using only object coordinates and predicts the target's potential location in the next frame. It is tracker independent, and applying it to new trackers requires no further training. By implementing these ideas, tracker performance improved on all datasets tested, including LaSOT, the LaSOT extension, TrackingNet, OTB100, OTB50, UAV123 and UAV20L. Experiments indicate the proposed framework works well with both convolutional and transformer-based trackers. The official Python code for this paper will be publicly available upon publication.
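As a rough illustration of the first idea, here is a minimal Python sketch (not the authors' released code, which is not yet public) of a template bag updated with adaptive thresholds, plus the mixing of per-template feature maps at the last stage so the pretrained Siamese tracker needs no retraining. All names and numbers below (`TemplateBag`, `maybe_add`, the threshold bands, the replacement rule) are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch of the multi-template idea described in the abstract.
# Nothing here is taken from the paper's implementation.
import torch


class TemplateBag:
    """Keeps a fixed-size bag of template feature maps."""

    def __init__(self, max_size=5, momentum=0.9):
        self.templates = []          # list of (C, H, W) feature maps
        self.scores = []             # similarity score observed when added
        self.max_size = max_size
        self.momentum = momentum
        self.running_mean = None     # adaptive reference for the thresholds

    def maybe_add(self, feat, similarity):
        # Adaptive thresholds follow a running mean of recent similarities,
        # so the bag accepts both very similar templates and slightly diverse
        # ones, instead of relying on a fixed constant threshold.
        if self.running_mean is None:
            self.running_mean = similarity
        high = 1.05 * self.running_mean   # "very similar" band (assumed)
        low = 0.85 * self.running_mean    # "slightly diverse" band (assumed)
        if similarity >= high or low <= similarity < self.running_mean:
            if len(self.templates) == self.max_size:
                # replace the weakest stored template
                weakest = min(range(len(self.scores)), key=self.scores.__getitem__)
                self.templates[weakest] = feat
                self.scores[weakest] = similarity
            else:
                self.templates.append(feat)
                self.scores.append(similarity)
        self.running_mean = (self.momentum * self.running_mean
                             + (1 - self.momentum) * similarity)

    def mixed_features(self):
        # Mixing the per-template feature maps at the last stage means the
        # correlation head still sees a single template tensor, so the
        # pretrained Siamese tracker can be reused without retraining.
        return torch.stack(self.templates, dim=0).mean(dim=0)
```

Likewise, a hypothetical sketch of a CombiNet-style temporal predictor: it consumes only a history of box coordinates and combines a local branch (recent motion) with a global branch (the whole trajectory) to predict the next-frame box. The actual CombiNet architecture is not described in this summary, so the layer choices below are assumptions.

```python
# Assumed CombiNet-like coordinate predictor, for illustration only.
import torch
import torch.nn as nn


class CoordinatePredictor(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # local branch: 1D convolution over the most recent boxes
        self.local = nn.Sequential(
            nn.Conv1d(4, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # global branch: GRU over the full coordinate history
        self.global_rnn = nn.GRU(4, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 4)  # predicts (x, y, w, h)

    def forward(self, coords):
        # coords: (B, T, 4) normalized box history
        local = self.local(coords.transpose(1, 2)).mean(dim=-1)    # (B, hidden)
        _, h = self.global_rnn(coords)                             # (1, B, hidden)
        return self.head(torch.cat([local, h.squeeze(0)], dim=1))  # (B, 4)


if __name__ == "__main__":
    history = torch.rand(1, 10, 4)                 # last 10 boxes of one object
    print(CoordinatePredictor()(history).shape)    # torch.Size([1, 4])
```

In both sketches the property claimed in the abstract is preserved: neither component touches the tracker's weights, so an off-the-shelf convolutional or transformer-based tracker could be reused as-is.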
Related papers
- Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance [87.19164603145056]
We propose LoRAT, a method that unveils the power of large ViT models for tracking within laboratory-level resources.
The essence of our work lies in adapting LoRA, a technique that fine-tunes a small subset of model parameters without adding inference latency.
We design an anchor-free head solely based on a multilayer perceptron (MLP) to adapt PETR, enabling better performance with less computational overhead.
arXiv Detail & Related papers (2024-03-08T11:41:48Z) - A new hope for network model generalization [66.5377859849467]
Generalizing machine learning models for network traffic dynamics tends to be considered a lost cause.
An ML architecture called Transformer has enabled previously unimaginable generalization in other domains.
We propose a Network Traffic Transformer (NTT) to learn network dynamics from packet traces.
arXiv Detail & Related papers (2022-07-12T21:16:38Z) - Context-aware Visual Tracking with Joint Meta-updating [11.226947525556813]
We propose a context-aware tracking model that optimizes the tracker over the representation space and jointly meta-updates both branches by exploiting information along the whole sequence.
The proposed tracking method achieves an EAO score of 0.514 on VOT2018 at a speed of 40 FPS, demonstrating its capability of improving the accuracy and robustness of the underlying tracker with little speed drop.
arXiv Detail & Related papers (2022-04-04T14:16:00Z) - Unified Transformer Tracker for Object Tracking [58.65901124158068]
We present the Unified Transformer Tracker (UTT) to address tracking problems in different scenarios with one paradigm.
A track transformer is developed in our UTT to track the target in both Single Object Tracking (SOT) and Multiple Object Tracking (MOT).
arXiv Detail & Related papers (2022-03-29T01:38:49Z) - Updatable Siamese Tracker with Two-stage One-shot Learning [10.13621503834501]
Offline Siamese networks have achieved very promising tracking performance, especially in accuracy and efficiency.
Traditional updaters have difficulty handling the irregular variations and sampling noise of objects, so it is quite risky to adopt them to update Siamese networks.
In this paper, we first present a two-stage one-shot learner, which can predict the local parameters of the primary classifier from object samples at diverse stages.
Then, an updatable Siamese network (SiamTOL) is proposed based on this learner, which is able to perform online updates by itself.
arXiv Detail & Related papers (2021-04-30T15:18:41Z) - STMTrack: Template-free Visual Tracking with Space-time Memory Networks [42.06375415765325]
Existing trackers with template updating mechanisms rely on time-consuming numerical optimization and complex hand-designed strategies to achieve competitive performance.
We propose a novel tracking framework built on top of a space-time memory network that is able to make full use of historical information related to the target.
Specifically, a novel memory mechanism is introduced, which stores the historical information of the target to guide the tracker to focus on the most informative regions in the current frame.
arXiv Detail & Related papers (2021-04-01T08:10:56Z) - Multiple Convolutional Features in Siamese Networks for Object Tracking [13.850110645060116]
Multiple Features-Siamese Tracker (MFST) is a novel tracking algorithm exploiting several hierarchical feature maps for robust tracking.
MFST achieves high tracking accuracy, while outperforming the standard Siamese tracker on object tracking benchmarks.
arXiv Detail & Related papers (2021-03-01T08:02:27Z) - MFST: Multi-Features Siamese Tracker [13.850110645060116]
Multi-Features Siamese Tracker (MFST) is a novel tracking algorithm exploiting several hierarchical feature maps for robust deep similarity tracking.
MFST achieves high tracking accuracy, while outperforming several state-of-the-art trackers, including standard Siamese trackers.
arXiv Detail & Related papers (2021-03-01T07:18:32Z) - Learning Spatio-Appearance Memory Network for High-Performance Visual Tracking [79.80401607146987]
Existing object tracking methods usually learn a bounding-box based template to match visual targets across frames, which cannot accurately learn a pixel-wise representation.
This paper presents a novel segmentation-based tracking architecture, which is equipped with a spatio-appearance memory network to learn accurate spatio-temporal correspondence.
arXiv Detail & Related papers (2020-09-21T08:12:02Z) - Tracking by Instance Detection: A Meta-Learning Approach [99.66119903655711]
We propose a principled three-step approach to build a high-performance tracker.
We build two trackers, named Retina-MAML and FCOS-MAML, based on two modern detectors, RetinaNet and FCOS.
Both trackers run in real-time at 40 FPS.
arXiv Detail & Related papers (2020-04-02T05:55:06Z) - High-Performance Long-Term Tracking with Meta-Updater [75.80564183653274]
Long-term visual tracking has drawn increasing attention because it is much closer to practical applications than short-term tracking.
Most top-ranked long-term trackers adopt offline-trained Siamese architectures and thus cannot benefit from the great progress of short-term trackers with online updates.
We propose a novel offline-trained Meta-Updater to address an important but unsolved problem: Is the tracker ready for updating in the current frame?
arXiv Detail & Related papers (2020-04-01T09:29:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.