Robust Visual Tracking via Statistical Positive Sample Generation and Gradient Aware Learning
- URL: http://arxiv.org/abs/2011.04260v1
- Date: Mon, 9 Nov 2020 09:14:58 GMT
- Title: Robust Visual Tracking via Statistical Positive Sample Generation and Gradient Aware Learning
- Authors: Lijian Lin, Haosheng Chen, Yanjie Liang, Yan Yan, Hanzi Wang
- Abstract summary: CNN-based trackers have achieved state-of-the-art performance on multiple benchmark datasets.
We propose a robust tracking method via Statistical Positive sample generation and Gradient Aware learning (SPGA).
We show that the proposed SPGA performs favorably against several state-of-the-art trackers.
- Score: 28.60114425270413
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, Convolutional Neural Network (CNN) based trackers have
achieved state-of-the-art performance on multiple benchmark datasets. Most of
these trackers train a binary classifier to distinguish the target from its
background. However, they suffer from two limitations. First, these trackers
cannot effectively handle significant appearance variations, owing to the limited
number of positive samples. Second, there is a significant imbalance in
gradient contributions between easy and hard samples, where easy samples
usually dominate the gradient computation. In this paper, we propose a
robust tracking method via Statistical Positive sample generation and Gradient
Aware learning (SPGA) to address these two limitations. To enrich the
diversity of positive samples, we present an effective and efficient
statistical positive sample generation algorithm that creates new positive
samples directly in the feature space. Furthermore, to address the imbalance
between easy and hard samples, we propose a gradient-sensitive loss that
harmonizes their gradient contributions. Extensive experiments on
three challenging benchmark datasets including OTB50, OTB100 and VOT2016
demonstrate that the proposed SPGA performs favorably against several
state-of-the-art trackers.
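The abstract names the two components but does not give their formulations here, so the following is a minimal, hypothetical sketch of how they could look: new positive features are drawn from a diagonal Gaussian fitted to the real positive features, and a gradient-magnitude re-weighting of binary cross-entropy stands in for the gradient-sensitive loss. All function names, constants, and the exact weighting scheme are assumptions, not the authors' code.

```python
# Hypothetical sketch of SPGA's two ingredients (assumptions, not the paper's code).
import torch
import torch.nn.functional as F


def generate_positive_features(pos_feats: torch.Tensor, num_new: int) -> torch.Tensor:
    """Statistical positive sample generation (assumed form).

    Fits a per-dimension Gaussian to the real positive features and draws new
    feature vectors from it, enriching the positive set in feature space
    instead of cropping more image patches.
    """
    mean = pos_feats.mean(dim=0)
    std = pos_feats.std(dim=0, unbiased=False) + 1e-6   # avoid zero variance
    noise = torch.randn(num_new, pos_feats.size(1))
    return mean + noise * std                           # draws from N(mean, diag(std^2))


def gradient_sensitive_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Gradient-aware binary classification loss (assumed form).

    For binary cross-entropy the gradient magnitude w.r.t. the logit is
    |sigmoid(logit) - label|: small for easy samples, large for hard ones.
    Re-weighting each sample by that magnitude keeps abundant easy samples
    from dominating the total gradient, in the spirit of focal-style and
    gradient-harmonizing losses.
    """
    probs = torch.sigmoid(logits)
    grad_norm = (probs - labels).abs().detach()         # per-sample gradient magnitude
    weights = grad_norm / (grad_norm.sum() + 1e-12)     # normalized contribution weights
    per_sample = F.binary_cross_entropy_with_logits(logits, labels, reduction="none")
    return (weights * per_sample).sum()


# Tiny usage example with random features standing in for CNN activations.
real_pos = torch.randn(8, 512) + 1.0                    # few real positive features
fake_pos = generate_positive_features(real_pos, num_new=32)
neg = torch.randn(100, 512) - 1.0                       # abundant, mostly easy negatives
feats = torch.cat([real_pos, fake_pos, neg])
labels = torch.cat([torch.ones(40), torch.zeros(100)])
logits = feats @ torch.randn(512)                       # stand-in linear classifier
print(float(gradient_sensitive_loss(logits, labels)))
```

In the usage example the 32 generated positives outnumber the 8 real ones, which is the intended effect: the classifier sees a richer positive set without any extra image-space sampling.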
Related papers
- Data Pruning via Moving-one-Sample-out [61.45441981346064]
We propose a novel data-pruning approach called moving-one-sample-out (MoSo).
MoSo aims to identify and remove the least informative samples from the training set.
Experimental results demonstrate that MoSo effectively mitigates severe performance degradation at high pruning ratios.
arXiv Detail & Related papers (2023-10-23T08:00:03Z)
- Split-PU: Hardness-aware Training Strategy for Positive-Unlabeled Learning [42.26185670834855]
Positive-Unlabeled (PU) learning aims to learn a model with rare positive samples and abundant unlabeled samples.
This paper focuses on improving the commonly used nnPU objective with a novel training pipeline (a sketch of the nnPU risk estimator appears after this list).
arXiv Detail & Related papers (2022-11-30T05:48:31Z)
- ScoreMix: A Scalable Augmentation Strategy for Training GANs with Limited Data [93.06336507035486]
Generative Adversarial Networks (GANs) typically suffer from overfitting when limited training data is available.
We present ScoreMix, a novel and scalable data augmentation approach for various image synthesis tasks.
arXiv Detail & Related papers (2022-10-27T02:55:15Z)
- Active Learning for Deep Visual Tracking [51.5063680734122]
Convolutional neural networks (CNNs) have been successfully applied to the single-target tracking task in recent years.
In this paper, we propose an active learning method for deep visual tracking, which selects and annotates unlabeled samples to train the deep CNN model.
Under the guidance of active learning, the tracker based on the trained deep CNN model can achieve competitive tracking performance while reducing the labeling cost.
arXiv Detail & Related papers (2021-10-17T11:47:56Z)
- Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, the two contrastive losses constrain the clustering results of mini-batch samples at both the sample and class levels (see the sketch after this list).
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
- Hard Negative Samples Emphasis Tracker without Anchors [10.616828072065093]
We address the problem of distinguishing the tracking target from hard negative samples during the tracking phase.
We propose a simple yet efficient hard negative sample emphasis method, which constrains the Siamese network to learn features that are aware of hard negative samples.
We also explore a novel anchor-free tracking framework in a per-pixel prediction fashion.
arXiv Detail & Related papers (2020-08-08T12:38:38Z)
- CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances [77.28192419848901]
We propose a simple yet effective method named Contrasting Shifted Instances (CSI).
In addition to contrasting a given sample with other instances as in conventional contrastive learning methods, our training scheme contrasts the sample with distributionally-shifted augmentations of itself.
Our experiments demonstrate the superiority of our method under various novelty detection scenarios.
arXiv Detail & Related papers (2020-07-16T08:32:56Z)
- Cascaded Regression Tracking: Towards Online Hard Distractor Discrimination [202.2562153608092]
We propose a cascaded regression tracker with two sequential stages.
In the first stage, we filter out abundant easily-identified negative candidates.
In the second stage, a discrete-sampling-based ridge regression is designed to double-check the remaining ambiguous hard samples (a closed-form ridge sketch appears after this list).
arXiv Detail & Related papers (2020-06-18T07:48:01Z)
- M$^5$L: Multi-Modal Multi-Margin Metric Learning for RGBT Tracking [44.296318907168]
Classifying the confusing samples in the course of RGBT tracking is a challenging problem.
We propose a novel Multi-Modal Multi-Margin Metric Learning framework, named M$^5$L, for RGBT tracking.
Our framework clearly improves the tracking performance and outperforms the state-of-the-art RGBT trackers.
arXiv Detail & Related papers (2020-03-17T11:37:56Z)
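For the Split-PU entry above: the non-negative PU (nnPU) risk estimator it builds on (Kiryo et al., 2017) is standard and worth sketching. The sigmoid surrogate loss and the variable names below are illustrative choices; the clamp is the non-negativity correction that gives nnPU its name.

```python
import torch


def nnpu_risk(logits_pos: torch.Tensor, logits_unl: torch.Tensor, pi_p: float) -> torch.Tensor:
    # Sigmoid surrogate loss; labels y are +1 (positive) or -1 (negative).
    loss = lambda z, y: torch.sigmoid(-y * z)
    r_pos = pi_p * loss(logits_pos, +1).mean()                        # positive risk
    # Unlabeled data treated as negative, corrected by the positives hidden in it:
    r_neg = loss(logits_unl, -1).mean() - pi_p * loss(logits_pos, -1).mean()
    return r_pos + torch.clamp(r_neg, min=0.0)                        # non-negativity clamp


# Usage with random logits and an assumed class prior of 0.3:
print(float(nnpu_risk(torch.randn(16), torch.randn(128), pi_p=0.3)))
```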
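For the Doubly Contrastive Deep Clustering entry: its description maps naturally onto an InfoNCE loss applied to the rows (sample view) and columns (class view) of the batch's soft class-assignment matrix. The sketch below is a generic instantiation of that reading, not the authors' exact objective; `info_nce` and the temperature value are assumptions.

```python
import torch
import torch.nn.functional as F


def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    # Rows of `a` and `b` are matched: row i of `a` is positive with row i of `b`,
    # and every other row of `b` serves as a negative.
    a = F.normalize(a, dim=1)
    b = F.normalize(b, dim=1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, targets)


def doubly_contrastive_loss(p1: torch.Tensor, p2: torch.Tensor) -> torch.Tensor:
    # p1, p2: (batch, num_classes) soft class assignments for two augmented views.
    sample_loss = info_nce(p1, p2)          # sample view: rows as positive pairs
    class_loss = info_nce(p1.t(), p2.t())   # class view: columns as positive pairs
    return sample_loss + class_loss


# Usage with random stand-ins for network outputs:
p1 = torch.randn(32, 10).softmax(dim=1)
p2 = torch.randn(32, 10).softmax(dim=1)
print(float(doubly_contrastive_loss(p1, p2)))
```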
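For the Cascaded Regression Tracking entry: the second-stage "discrete sampling based ridge regression" is not specified here, but ridge regression itself has a well-known closed form, shown below on hypothetical tracking features. The feature dimensions, labels, and candidate scoring are illustrative.

```python
import torch


def ridge_fit(X: torch.Tensor, y: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    # Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y.
    d = X.size(1)
    A = X.t() @ X + lam * torch.eye(d)
    return torch.linalg.solve(A, X.t() @ y)


# Stage-2 double-check (illustrative): fit on labeled target/background features,
# then score the ambiguous candidates that survived the cheap first-stage filter.
train_feats = torch.randn(200, 64)                # sampled fg/bg features
train_labels = (train_feats[:, 0] > 0).float()    # stand-in regression targets
w = ridge_fit(train_feats, train_labels, lam=10.0)
hard_candidates = torch.randn(5, 64)              # ambiguous stage-1 survivors
print(hard_candidates @ w)                        # higher score = more target-like
```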
This list is automatically generated from the titles and abstracts of the papers in this site.