Progressive Multi-Stage Learning for Discriminative Tracking
- URL: http://arxiv.org/abs/2004.00255v1
- Date: Wed, 1 Apr 2020 07:01:30 GMT
- Title: Progressive Multi-Stage Learning for Discriminative Tracking
- Authors: Weichao Li, Xi Li, Omar Elfarouk Bourahla, Fuxian Huang, Fei Wu, Wei Liu, Zhiheng Wang, and Hongmin Liu
- Abstract summary: We propose a joint discriminative learning scheme with the progressive multi-stage optimization policy of sample selection for robust visual tracking.
The proposed scheme presents a novel time-weighted and detection-guided self-paced learning strategy for easy-to-hard sample selection.
Experiments on the benchmark datasets demonstrate the effectiveness of the proposed learning framework.
- Score: 25.94944743206374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual tracking is typically solved as a discriminative learning
problem that requires high-quality samples for online model adaptation. A
critical and challenging problem is how to evaluate the quality of the
training samples collected from previous predictions and to select samples
accordingly for model training.
To tackle the above problem, we propose a joint discriminative learning
scheme with the progressive multi-stage optimization policy of sample selection
for robust visual tracking. The proposed scheme presents a novel time-weighted
and detection-guided self-paced learning strategy for easy-to-hard sample
selection, which is capable of tolerating relatively large intra-class
variations while maintaining inter-class separability. Such a self-paced
learning strategy is jointly optimized in conjunction with the discriminative
tracking process, resulting in robust tracking results. Experiments on the
benchmark datasets demonstrate the effectiveness of the proposed learning
framework.
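As a rough illustration of the easy-to-hard idea, the sketch below implements a time-weighted self-paced selection rule: each stored sample is kept only if its loss, inflated by an age-based decay, falls under a pace threshold that is relaxed stage by stage. The exponential decay and the threshold schedule are assumptions for illustration, not the paper's exact formulation, and the detection-guided term is omitted.

```python
# A sketch of time-weighted, easy-to-hard sample selection (assumed
# form, not the paper's exact objective). Old samples are down-weighted,
# and the pace threshold is relaxed stage by stage so harder samples
# enter the training set progressively.
import numpy as np

def select_training_samples(losses, ages, pace_threshold, tau=30.0):
    """Binary inclusion weights for the stored tracking samples.

    losses:         per-sample loss under the current discriminative model
    ages:           frames elapsed since each sample was collected
    pace_threshold: grows across stages (easy-to-hard schedule)
    tau:            assumed time-decay constant favoring recent samples
    """
    time_weights = np.exp(-np.asarray(ages, dtype=float) / tau)
    effective_loss = np.asarray(losses) / np.maximum(time_weights, 1e-8)
    return (effective_loss <= pace_threshold).astype(float)

# Progressive multi-stage policy: select, retrain the model on the
# selected set, relax the threshold, and repeat.
losses = np.random.rand(100)
ages = np.arange(100)[::-1]          # sample 0 is the oldest
for stage, thr in enumerate([0.5, 1.0, 2.0]):
    v = select_training_samples(losses, ages, pace_threshold=thr)
    print(f"stage {stage}: {int(v.sum())} samples kept")
```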
Related papers
- Deep Learning Meets Oversampling: A Learning Framework to Handle Imbalanced Classification [0.0]
We propose a novel learning framework that can generate synthetic data instances in a data-driven manner.
The proposed framework formulates the oversampling process as a composition of discrete decision criteria.
Experiments on the imbalanced classification task demonstrate the superiority of our framework over state-of-the-art algorithms.
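For reference, a minimal SMOTE-style interpolation sketch of oversampling is shown below; the paper's framework is learned and data-driven rather than this fixed heuristic, so the function is purely illustrative.

```python
# Minimal SMOTE-style oversampling: interpolate a minority instance
# toward one of its k nearest minority neighbors. Purely illustrative.
import numpy as np

def oversample_minority(X_min, n_new, k=5, rng=np.random.default_rng(0)):
    """Generate n_new synthetic minority samples by neighbor interpolation."""
    new_samples = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)   # distances to anchor
        neighbors = np.argsort(d)[1:k + 1]             # skip the anchor itself
        j = rng.choice(neighbors)
        lam = rng.random()                             # interpolation weight
        new_samples.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(new_samples)

X_min = np.random.randn(20, 4)                         # toy minority class
print(oversample_minority(X_min, n_new=10).shape)      # (10, 4)
```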
arXiv Detail & Related papers (2025-02-08T13:35:00Z)
- A Systematic Examination of Preference Learning through the Lens of Instruction-Following [83.71180850955679]
We use a novel synthetic data generation pipeline to generate 48,000 unique instruction-following prompts.
With our synthetic prompts, we use two preference dataset curation methods: rejection sampling (RS) and Monte Carlo Tree Search (MCTS).
Experiments reveal that shared prefixes in preference pairs, as generated by MCTS, provide marginal but consistent improvements.
High-contrast preference pairs generally outperform low-contrast pairs; however, combining both often yields the best performance.
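A hedged sketch of the rejection-sampling (RS) curation step: sample several candidates per prompt, score them, and pair the best against the worst to get a high-contrast preference pair. `generate` and `reward` are hypothetical placeholders, not the paper's pipeline.

```python
# Rejection-sampling curation sketch. `generate` and `reward` are
# hypothetical placeholders for a sampler and a preference scorer.
import random

def generate(prompt, n):
    # stand-in sampler: returns n candidate completions
    return [f"{prompt} :: candidate {i}" for i in range(n)]

def reward(prompt, response):
    # stand-in scorer: a reward/preference model would go here
    return random.random()

def rs_preference_pair(prompt, n=8):
    candidates = generate(prompt, n)
    ranked = sorted(candidates, key=lambda r: reward(prompt, r))
    # best vs. worst yields a high-contrast preference pair
    return {"prompt": prompt, "chosen": ranked[-1], "rejected": ranked[0]}

print(rs_preference_pair("Explain rejection sampling in one sentence."))
```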
arXiv Detail & Related papers (2024-12-18T15:38:39Z)
- Maximally Separated Active Learning [32.98415531556376]
We propose an active learning method that utilizes fixed equiangular hyperspherical points as class prototypes.
We demonstrate strong performance over existing active learning techniques across five benchmark datasets.
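One simple way to obtain fixed, well-separated hyperspherical prototypes is to repel random unit vectors from each other and freeze the result, as sketched below. The repulsion objective is an assumption, not necessarily the construction used in the paper.

```python
# Repel random unit vectors to obtain near-equiangular, well-separated
# prototypes on the hypersphere (assumed construction, for illustration).
import numpy as np

def separated_prototypes(n_classes, dim, steps=500, lr=0.1,
                         rng=np.random.default_rng(0)):
    P = rng.standard_normal((n_classes, dim))
    P /= np.linalg.norm(P, axis=1, keepdims=True)
    for _ in range(steps):
        sim = P @ P.T                        # pairwise cosine similarities
        np.fill_diagonal(sim, 0.0)
        P -= lr * (sim @ P)                  # gradient step pushing pairs apart
        P /= np.linalg.norm(P, axis=1, keepdims=True)
    return P

protos = separated_prototypes(n_classes=10, dim=64)
cos = protos @ protos.T - np.eye(10)
print(f"max off-diagonal cosine: {cos.max():.3f}")   # well below 1.0
```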
arXiv Detail & Related papers (2024-11-26T14:02:43Z)
- Diversified Batch Selection for Training Acceleration [68.67164304377732]
A prevalent research line, known as online batch selection, explores selecting informative subsets during the training process.
Vanilla reference-model-free methods score and select data independently in a sample-wise manner.
We propose Diversified Batch Selection (DivBS), which is reference-model-free and can efficiently select diverse and representative samples.
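For intuition, the sketch below shows an MMR-style greedy heuristic that trades per-sample utility against redundancy with already-selected samples; it captures the "diverse and representative" goal in spirit but is not the DivBS objective itself.

```python
# MMR-style greedy heuristic: utility minus redundancy with the batch
# selected so far. Illustrates the goal, not the DivBS objective.
import numpy as np

def select_diverse_batch(features, scores, k, lam=0.5):
    """features: (N, d) unit-norm embeddings; scores: per-sample utility."""
    selected, remaining = [], list(range(len(scores)))
    while len(selected) < k and remaining:
        def mmr(i):
            redundancy = max((float(features[i] @ features[j])
                              for j in selected), default=0.0)
            return lam * scores[i] - (1 - lam) * redundancy
        best = max(remaining, key=mmr)
        selected.append(best)
        remaining.remove(best)
    return selected

F = np.random.randn(200, 32)
F /= np.linalg.norm(F, axis=1, keepdims=True)
print(select_diverse_batch(F, np.random.rand(200), k=16))
```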
arXiv Detail & Related papers (2024-06-07T12:12:20Z)
- Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
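A minimal sketch of the mixing idea: synthesize new minority-side samples as convex combinations of a minority and a majority instance. Biasing the mixing weight toward the minority point (to keep the label plausible) is an assumption.

```python
# Convexly combine minority and majority instances to create synthetic
# minority-side samples. The minority-biased mixing weight is assumed.
import numpy as np

def mix_minority_majority(X_min, X_maj, n_new, rng=np.random.default_rng(0)):
    out = []
    for _ in range(n_new):
        a = X_min[rng.integers(len(X_min))]
        b = X_maj[rng.integers(len(X_maj))]
        lam = rng.uniform(0.6, 1.0)          # stay closer to the minority point
        out.append(lam * a + (1 - lam) * b)
    return np.vstack(out)

X_min, X_maj = np.random.randn(15, 8), np.random.randn(200, 8)
print(mix_minority_majority(X_min, X_maj, n_new=30).shape)   # (30, 8)
```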
arXiv Detail & Related papers (2023-08-28T18:48:34Z)
- Towards Accelerated Model Training via Bayesian Data Selection [45.62338106716745]
Recent work has proposed a more reasonable data selection principle by examining the data's impact on the model's generalization loss, but such approaches remain impractical.
This work solves these problems by leveraging a lightweight Bayesian treatment and incorporating off-the-shelf zero-shot predictors built on large-scale pre-trained models.
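The sketch below illustrates a reducible-loss-style scoring rule in this spirit: prioritize samples whose current training loss is high but whose loss under an off-the-shelf zero-shot predictor is low. The paper's lightweight Bayesian treatment is simplified away here.

```python
# Reducible-loss-style scoring: favor samples that the current model
# gets wrong but a zero-shot predictor handles well (learnable, not
# noise). The Bayesian machinery of the paper is omitted.
import numpy as np

def selection_scores(train_losses, zero_shot_losses):
    return np.asarray(train_losses) - np.asarray(zero_shot_losses)

def pick_batch(train_losses, zero_shot_losses, k):
    return np.argsort(-selection_scores(train_losses, zero_shot_losses))[:k]

print(pick_batch(np.random.rand(100), np.random.rand(100), k=10))
```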
arXiv Detail & Related papers (2023-08-21T07:58:15Z)
- Active Learning Principles for In-Context Learning with Large Language Models [65.09970281795769]
This paper investigates how Active Learning algorithms can serve as effective demonstration selection methods for in-context learning.
We show that in-context example selection through AL prioritizes high-quality examples that exhibit low uncertainty and bear similarity to the test examples.
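A minimal sketch of that selection rule: rank pooled examples by embedding similarity to the test input and penalize uncertain ones. The encoder, the uncertainty estimate, and the trade-off weight `beta` are all placeholders.

```python
# Rank pooled demonstrations by similarity to the test input, penalized
# by model uncertainty. Encoder and uncertainty estimates are placeholders.
import numpy as np

def select_demonstrations(test_emb, pool_embs, uncertainties, k=4, beta=1.0):
    sims = pool_embs @ test_emb                  # cosine sim for unit vectors
    scores = sims - beta * np.asarray(uncertainties)
    return np.argsort(-scores)[:k]

pool = np.random.randn(50, 128)
pool /= np.linalg.norm(pool, axis=1, keepdims=True)
test = pool[0]                                   # toy stand-in test embedding
print(select_demonstrations(test, pool, np.random.rand(50)))
```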
arXiv Detail & Related papers (2023-05-23T17:16:04Z)
- Adaptive Client Sampling in Federated Learning via Online Learning with Bandit Feedback [31.826205004616227]
Client sampling plays an important role in federated learning (FL) systems as it affects the convergence rate of optimization algorithms.
We propose an online stochastic mirror descent (OSMD) algorithm designed to minimize the sampling variance.
We show how our sampling method can improve the convergence speed of federated optimization algorithms over the widely used uniform sampling.
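A toy sketch of variance-aware client sampling with multiplicative (negative-entropy mirror descent) updates, using the importance-weighted variance proxy sum_i g_i^2 / p_i; the paper's actual estimator and analysis are not reproduced.

```python
# Toy variance-aware client sampling via multiplicative mirror-descent
# updates. Objective (assumed): minimize sum_i g_i^2 / p_i, the variance
# proxy of an importance-weighted update; its optimum is p_i ~ |g_i|.
import numpy as np

def osmd_update(p, grad_norms_sq, lr=0.01):
    grad = -np.asarray(grad_norms_sq) / p**2      # d/dp_i of sum_j g_j^2 / p_j
    grad = np.clip(grad, -1e3, 0.0)               # keep the exponent safe
    p_new = p * np.exp(-lr * grad)                # negative-entropy mirror step
    return p_new / p_new.sum()                    # renormalize onto the simplex

p = np.full(10, 0.1)                              # start uniform over 10 clients
for _ in range(50):
    g2 = np.random.rand(10) * np.arange(1, 11)    # heterogeneous gradient norms
    p = osmd_update(p, g2)
print(np.round(p, 3))                             # mass shifts to noisy clients
```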
arXiv Detail & Related papers (2021-12-28T23:50:52Z)
- Active Learning for Deep Visual Tracking [51.5063680734122]
Convolutional neural networks (CNNs) have been successfully applied to single-target tracking in recent years.
In this paper, we propose an active learning method for deep visual tracking, which selects and annotates unlabeled samples to train the deep CNN model.
Under the guidance of active learning, the tracker based on the trained deep CNN model can achieve competitive tracking performance while reducing the labeling cost.
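As a toy illustration of the selection step, one could simply request annotation for the frames where the tracker is least confident; the confidence measure below is an assumed stand-in for the paper's actual criterion.

```python
# Request labels for the frames where the tracker is least confident.
# The confidence measure is an assumed stand-in for the paper's criterion.
import numpy as np

def frames_to_annotate(confidences, budget):
    """Pick the `budget` least-confident frames for manual labeling."""
    return np.argsort(confidences)[:budget]

conf = np.random.rand(500)                   # per-frame tracker confidence
print(frames_to_annotate(conf, budget=10))
```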
arXiv Detail & Related papers (2021-10-17T11:47:56Z)
- Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning [94.35586521144117]
We investigate whether applying contrastive learning to fine-tuning would bring further benefits.
We propose Contrast-regularized tuning (Core-tuning), a novel approach for fine-tuning contrastive self-supervised visual models.
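A rough sketch of the general recipe behind contrast-regularized fine-tuning: add a supervised contrastive term on normalized features to the usual cross-entropy. Core-tuning's specific components (e.g., hard pair mining) are not reproduced, and the weight `eta` is an assumption.

```python
# Cross-entropy plus a supervised contrastive regularizer on normalized
# features; `eta` and the loss form are assumptions, not Core-tuning's
# exact objective (its hard-pair mining is omitted).
import numpy as np

def softmax_ce(logits, y):
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].mean()

def sup_contrastive(feats, y, tau=0.1):
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T / tau
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (y[:, None] == y[None, :]) & ~np.eye(len(y), dtype=bool)
    per_anchor = np.where(pos, logp, 0.0).sum(1) / np.maximum(pos.sum(1), 1)
    return -per_anchor[pos.any(1)].mean()             # anchors with positives

def core_style_loss(logits, feats, y, eta=0.5):
    return softmax_ce(logits, y) + eta * sup_contrastive(feats, y)

y = np.array([0, 0, 1, 1])
print(core_style_loss(np.random.randn(4, 2), np.random.randn(4, 16), y))
```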
arXiv Detail & Related papers (2021-02-12T16:31:24Z)
- Message Passing Adaptive Resonance Theory for Online Active Semi-supervised Learning [30.19936050747407]
We propose Message Passing Adaptive Resonance Theory (MPART) for online active semi-supervised learning.
MPART infers the class of unlabeled data and selects informative and representative samples through message passing between nodes on the topological graph.
We evaluate our model with comparable query selection strategies and frequencies, showing that MPART significantly outperforms competing models in online active learning environments.
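To illustrate the propagation step, the sketch below runs plain label propagation on a fixed adjacency matrix; MPART's ART-based topology learning and query selection are replaced by this simpler stand-in.

```python
# Plain label propagation over a fixed graph as a stand-in for MPART's
# message passing; the ART-based topology learning is not reproduced.
import numpy as np

def propagate_labels(A, labels, n_classes, steps=20, alpha=0.9):
    """A: (N, N) adjacency; labels: -1 for unlabeled, else a class index."""
    N = len(labels)
    Y = np.zeros((N, n_classes))
    Y[labels >= 0, labels[labels >= 0]] = 1.0            # clamp labeled nodes
    D_inv = 1.0 / np.maximum(A.sum(1, keepdims=True), 1e-8)
    F = Y.copy()
    for _ in range(steps):
        F = alpha * (D_inv * (A @ F)) + (1 - alpha) * Y  # pass messages
    return F.argmax(1)

rng = np.random.default_rng(0)
A = (rng.random((30, 30)) > 0.8).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0.0)
labels = np.full(30, -1)
labels[:3] = [0, 1, 2]                                   # three seed labels
print(propagate_labels(A, labels, n_classes=3))
```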
arXiv Detail & Related papers (2020-12-02T14:14:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.