Unsupervised Deep Representation Learning for Real-Time Tracking
- URL: http://arxiv.org/abs/2007.11984v1
- Date: Wed, 22 Jul 2020 08:23:12 GMT
- Title: Unsupervised Deep Representation Learning for Real-Time Tracking
- Authors: Ning Wang and Wengang Zhou and Yibing Song and Chao Ma and Wei Liu and
Houqiang Li
- Abstract summary: We propose an unsupervised learning method for visual tracking.
The motivation of our unsupervised learning is that a robust tracker should be effective in bidirectional tracking.
We build our framework on a Siamese correlation filter network, and propose a multi-frame validation scheme and a cost-sensitive loss to facilitate unsupervised learning.
- Score: 137.69689503237893
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in visual tracking have been driven continuously by deep
learning models. Typically, supervised learning is employed to train these
models with expensive labeled data. To reduce the workload of manual
annotation and to learn to track arbitrary objects, we propose an unsupervised
learning method for visual tracking. The motivation behind our unsupervised
learning is that a robust tracker should be effective in bidirectional
tracking. Specifically, the tracker should be able to localize a target object
forward through successive frames and then backtrace it to its initial
position in the first frame. Based on this motivation, during training we
measure the consistency between the forward and backward trajectories to learn
a robust tracker from scratch using only unlabeled videos. We build our
framework on a Siamese correlation filter network, and propose a multi-frame
validation scheme and a cost-sensitive loss to facilitate unsupervised
learning. Without bells and whistles, the proposed unsupervised tracker
achieves accuracy comparable to classic fully supervised trackers while
running at real-time speed. Furthermore, our unsupervised framework shows
potential for leveraging more unlabeled or weakly labeled data to further
improve tracking accuracy.
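The forward-backward consistency idea above can be sketched in a few lines. The following is a minimal, illustrative NumPy sketch, not the authors' implementation: the paper computes its loss on correlation-filter response maps, and the function name, box representation, and `drop_ratio` parameter here are assumptions. It measures how far backtracked target positions drift from their initial positions and applies a simple cost-sensitive weighting that discards the largest-error samples.

```python
import numpy as np

def cycle_consistency_loss(init_boxes, backtracked_boxes, drop_ratio=0.1):
    """Mean per-sample L2 distance between initial target positions and the
    positions recovered by tracking forward then backward (hypothetical
    sketch; the paper's loss operates on correlation-filter responses).

    A cost-sensitive weighting drops the largest-error fraction of samples,
    treating them as unreliable (e.g. occluded or lost targets).
    """
    # (N,) distance between where each target started and where it
    # ended up after the forward-backward cycle
    errors = np.linalg.norm(init_boxes - backtracked_boxes, axis=1)
    n_drop = int(len(errors) * drop_ratio)
    if n_drop > 0:
        # Zero-weight the hardest samples: likely tracking failures,
        # which would otherwise dominate the gradient.
        keep = np.argsort(errors)[: len(errors) - n_drop]
        errors = errors[keep]
    return errors.mean()
```

A perfectly consistent tracker returns to its starting positions and incurs zero loss; the drop ratio controls how aggressively suspected failures are excluded.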
Related papers
- Tracking with Human-Intent Reasoning [64.69229729784008]
This work proposes a new tracking task -- Instruction Tracking.
It provides implicit tracking instructions that require the tracker to infer the target and track it automatically across video frames.
TrackGPT is capable of performing complex reasoning-based tracking.
arXiv Detail & Related papers (2023-12-29T03:22:18Z) - Unsupervised Learning of Accurate Siamese Tracking [68.58171095173056]
We present a novel unsupervised tracking framework, in which we can learn temporal correspondence both on the classification branch and regression branch.
Our tracker outperforms preceding unsupervised methods by a substantial margin, performing on par with supervised methods on large-scale datasets such as TrackingNet and LaSOT.
arXiv Detail & Related papers (2022-04-04T13:39:43Z) - Learning to Track Objects from Unlabeled Videos [63.149201681380305]
In this paper, we propose to learn an Unsupervised Single Object Tracker (USOT) from scratch.
To narrow the gap between unsupervised trackers and supervised counterparts, we propose an effective unsupervised learning approach composed of three stages.
Experiments show that the proposed USOT learned from unlabeled videos performs well over the state-of-the-art unsupervised trackers by large margins.
arXiv Detail & Related papers (2021-08-28T22:10:06Z) - Self-supervised Object Tracking with Cycle-consistent Siamese Networks [55.040249900677225]
We exploit an end-to-end Siamese network in a cycle-consistent self-supervised framework for object tracking.
We propose to integrate a Siamese region proposal and mask regression network in our tracking framework so that a fast and more accurate tracker can be learned without the annotation of each frame.
arXiv Detail & Related papers (2020-08-03T04:10:38Z) - Tracking-by-Trackers with a Distilled and Reinforced Model [24.210580784051277]
A compact student model is trained via the marriage of knowledge distillation and reinforcement learning.
The proposed algorithms compete with real-time state-of-the-art trackers.
arXiv Detail & Related papers (2020-07-08T13:24:04Z) - Simple Unsupervised Multi-Object Tracking [11.640210313011876]
In this work, we propose an unsupervised re-identification network, thus sidestepping the labeling costs entirely.
Given unlabeled videos, our proposed method (SimpleReID) first generates tracking labels using SORT and trains a ReID network to predict the generated labels using a cross-entropy loss.
We establish new state-of-the-art performance on popular datasets like MOT16/17 without using tracking supervision, beating the current best (CenterTrack) by 0.2-0.3 MOTA and 4.4-4.8 IDF1 points.
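The pseudo-labeling recipe above (run an off-the-shelf tracker, then train a classifier on the generated identities) reduces to an ordinary cross-entropy objective over tracker-generated labels. A minimal NumPy sketch, with the caveat that the function name and shapes are our assumptions and the real SimpleReID pipeline obtains its labels by running SORT on video frames:

```python
import numpy as np

def cross_entropy_on_pseudo_labels(logits, pseudo_labels):
    """Softmax cross-entropy against tracker-generated (pseudo) identity
    labels -- the supervision signal used in place of human annotation.
    Illustrative sketch only, not the SimpleReID implementation."""
    # Numerically stable log-softmax: subtract the row-wise max first.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Negative log-likelihood of each sample's pseudo-label, averaged.
    return -log_probs[np.arange(len(pseudo_labels)), pseudo_labels].mean()
```

Because the labels come from the tracker rather than annotators, label noise is expected; the loss itself is unchanged from supervised ReID training.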
arXiv Detail & Related papers (2020-06-04T01:53:18Z) - Robust Visual Object Tracking with Two-Stream Residual Convolutional Networks [62.836429958476735]
We propose a Two-Stream Residual Convolutional Network (TS-RCN) for visual tracking.
Our TS-RCN can be integrated with existing deep learning based visual trackers.
To further improve the tracking performance, we adopt the "wider" residual network ResNeXt as the feature extraction backbone.
arXiv Detail & Related papers (2020-05-13T19:05:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all listed content) and is not responsible for any consequences of its use.