State-Relabeling Adversarial Active Learning
- URL: http://arxiv.org/abs/2004.04943v1
- Date: Fri, 10 Apr 2020 08:23:59 GMT
- Title: State-Relabeling Adversarial Active Learning
- Authors: Beichen Zhang (1), Liang Li (2), Shijie Yang (1, 2), Shuhui Wang (2),
Zheng-Jun Zha (3), Qingming Huang (1, 2, 4) ((1) University of Chinese
Academy of Sciences. (2) Key Lab of Intell. Info. Process., Inst. of Comput.
Tech., Chinese Academy of Sciences. (3) University of Science and Technology
of China. (4) Peng Cheng Laboratory.)
- Abstract summary: Active learning aims to design label-efficient algorithms by sampling the most representative samples to be labeled by an oracle.
We propose a state-relabeling adversarial active learning model (SRAAL) that leverages both the annotation and the labeled/unlabeled state information.
Our model outperforms previous state-of-the-art active learning methods, and our initial sampling algorithm achieves better performance.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning aims to design label-efficient algorithms by sampling the most
representative samples to be labeled by an oracle. In this paper, we propose a
state-relabeling adversarial active learning model (SRAAL) that leverages both
the annotation and the labeled/unlabeled state information for deriving the
most informative unlabeled samples. The SRAAL consists of a representation
generator and a state discriminator. The generator combines complementary
annotation information with traditional reconstruction information to generate
a unified representation of the samples, which embeds the semantics into the whole
data representation. Then, we design an online uncertainty indicator in the
discriminator, which endows unlabeled samples with different levels of importance. As a
result, we can select the most informative samples based on the discriminator's
predicted state. We also design an algorithm to initialize the labeled pool,
which makes subsequent sampling more efficient. Experiments conducted on
various datasets show that our model outperforms previous state-of-the-art
active learning methods and that our initial sampling algorithm achieves better
performance.
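The selection step described above can be illustrated with a minimal sketch. This is not the authors' code: the function name, the pool scores, and the exact scoring rule (ranking by the discriminator's predicted "labeled" probability) are assumptions made for illustration only.

```python
import numpy as np

def select_informative(labeled_prob, k):
    """Rank unlabeled samples by the discriminator's predicted probability
    of being 'labeled'; the lowest-scoring samples look least like the
    labeled set and are treated as the most informative to query."""
    order = np.argsort(labeled_prob)  # ascending: most 'unlabeled-like' first
    return order[:k].tolist()

# Hypothetical discriminator outputs for a pool of five unlabeled samples.
scores = np.array([0.92, 0.15, 0.48, 0.05, 0.77])
picked = select_informative(scores, 2)  # indices of the 2 samples to query
```

Under this reading, the online uncertainty indicator would adjust these scores during training rather than leaving the unlabeled state fixed at a hard zero.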
Related papers
- ADROIT: A Self-Supervised Framework for Learning Robust Representations for Active Learning [9.89630586942325]
This paper introduces a unified representation learning framework tailored for active learning with task awareness.
It integrates diverse sources, comprising reconstruction, adversarial, self-supervised, knowledge-distillation, and classification losses into a unified VAE-based ADROIT approach.
arXiv Detail & Related papers (2025-03-10T16:28:04Z)
- Semi-Supervised Variational Adversarial Active Learning via Learning to Rank and Agreement-Based Pseudo Labeling [6.771578432805963]
Active learning aims to alleviate the amount of labor involved in data labeling by automating the selection of unlabeled samples.
We introduce novel techniques that significantly improve the use of abundant unlabeled data during training.
We demonstrate the superior performance of our approach over the state of the art on various image classification and segmentation benchmark datasets.
arXiv Detail & Related papers (2024-08-23T00:35:07Z)
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z)
- XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z)
- Deep Active Learning with Contrastive Learning Under Realistic Data Pool Assumptions [2.578242050187029]
Active learning aims to identify the most informative data from an unlabeled data pool that enables a model to reach the desired accuracy rapidly.
Most existing active learning methods have been evaluated in an ideal setting where only samples relevant to the target task exist in an unlabeled data pool.
We introduce new active learning benchmarks that include ambiguous, task-irrelevant out-of-distribution as well as in-distribution samples.
arXiv Detail & Related papers (2023-03-25T10:46:10Z)
- Exploiting Diversity of Unlabeled Data for Label-Efficient Semi-Supervised Active Learning [57.436224561482966]
Active learning is a research area that addresses the issues of expensive labeling by selecting the most important samples for labeling.
We introduce a new diversity-based initial dataset selection algorithm to select the most informative set of samples for initial labeling in the active learning setting.
Also, we propose a novel active learning query strategy, which uses diversity-based sampling on consistency-based embeddings.
arXiv Detail & Related papers (2022-07-25T16:11:55Z)
- Minimax Active Learning [61.729667575374606]
Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator.
Current active learning techniques either rely on model uncertainty to select the most uncertain samples or use clustering or reconstruction to choose the most diverse set of unlabeled examples.
We develop a semi-supervised minimax entropy-based active learning algorithm that leverages both uncertainty and diversity in an adversarial manner.
arXiv Detail & Related papers (2020-12-18T19:03:40Z)
- Semi-supervised Active Learning for Instance Segmentation via Scoring Predictions [25.408505612498423]
We propose a novel and principled semi-supervised active learning framework for instance segmentation.
Specifically, we present an uncertainty sampling strategy named Triplet Scoring Predictions (TSP) to explicitly incorporate samples ranking clues from classes, bounding boxes and masks.
Results on medical image datasets demonstrate that the proposed method embodies the knowledge from the available data in a meaningful way.
arXiv Detail & Related papers (2020-12-09T02:36:52Z)
- SLADE: A Self-Training Framework For Distance Metric Learning [75.54078592084217]
We present a self-training framework, SLADE, to improve retrieval performance by leveraging additional unlabeled data.
We first train a teacher model on the labeled data and use it to generate pseudo labels for the unlabeled data.
We then train a student model on both labels and pseudo labels to generate final feature embeddings.
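The three-step teacher-student recipe above can be sketched as follows. This is an illustrative toy, not SLADE itself: a nearest-centroid classifier stands in for the real teacher and student networks, and all data and function names are invented for the example.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """'Train' a toy model: one centroid per class."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(model, X):
    """Assign each sample the class of its nearest centroid."""
    classes, centroids = model
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]

# 1. Train a teacher model on the labeled data.
X_lab = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
y_lab = np.array([0, 0, 1, 1])
teacher = nearest_centroid_fit(X_lab, y_lab)

# 2. Use the teacher to generate pseudo labels for the unlabeled data.
X_unl = np.array([[0.05, 0.1], [1.1, 0.9]])
y_pseudo = nearest_centroid_predict(teacher, X_unl)

# 3. Train a student model on labels and pseudo labels combined.
student = nearest_centroid_fit(np.vstack([X_lab, X_unl]),
                               np.concatenate([y_lab, y_pseudo]))
```

In SLADE the student is additionally trained with a metric-learning objective to produce the final feature embeddings; the skeleton above only shows the labeled-to-pseudo-labeled data flow.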
arXiv Detail & Related papers (2020-11-20T08:26:10Z)
- Online Active Model Selection for Pre-trained Classifiers [72.84853880948894]
We design an online selective sampling approach that actively selects informative examples to label and outputs the best model with high probability at any round.
Our algorithm can be used for online prediction tasks on both adversarial and stochastic streams.
arXiv Detail & Related papers (2020-10-19T19:53:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.