Active Sentence Learning by Adversarial Uncertainty Sampling in Discrete
Space
- URL: http://arxiv.org/abs/2004.08046v2
- Date: Wed, 28 Oct 2020 04:45:49 GMT
- Title: Active Sentence Learning by Adversarial Uncertainty Sampling in Discrete
Space
- Authors: Dongyu Ru, Jiangtao Feng, Lin Qiu, Hao Zhou, Mingxuan Wang, Weinan
Zhang, Yong Yu, Lei Li
- Abstract summary: Active learning for sentence understanding aims at discovering informative unlabeled data for annotation.
We argue that the typical uncertainty sampling method for active learning is time-consuming and can hardly work in real time.
We propose adversarial uncertainty sampling in discrete space (AUSDS) to retrieve informative unlabeled samples more efficiently.
- Score: 56.22272513894306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning for sentence understanding aims at discovering
informative unlabeled data for annotation, thereby reducing the demand for
labeled data. We argue that the typical uncertainty sampling method for active
learning is time-consuming and can hardly work in real time, which may lead to
ineffective sample selection. We propose adversarial uncertainty sampling in
discrete space (AUSDS) to retrieve informative unlabeled samples more
efficiently. AUSDS maps sentences into a latent space generated by popular
pre-trained language models and discovers informative unlabeled text samples
for annotation via adversarial attack. The proposed approach is extremely
efficient compared with traditional uncertainty sampling, achieving more than
a 10x speedup. Experimental results on five datasets show that AUSDS
outperforms strong baselines in effectiveness.
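The abstract only sketches AUSDS at a high level. As a rough illustration (not the authors' implementation), the PyTorch sketch below contrasts the classic entropy-based uncertainty sampling baseline, which must re-score the entire unlabeled pool with a fresh forward pass every round, with an adversarial-attack-then-project selection in the spirit AUSDS describes. The FGSM-style step, the cosine nearest-neighbour projection, and all names (`head`, `ausds_select`, `pool_z`) are assumptions made for illustration.

```python
# Hypothetical sketch: attack and projection choices are assumptions,
# not the paper's actual implementation.
import torch
import torch.nn.functional as F

def uncertainty_select(head, pool_z, k=32):
    """Classic uncertainty sampling baseline: score the whole pool with
    a forward pass and keep the k highest-entropy predictions. This
    full-pool re-scoring per round is the cost AUSDS targets."""
    with torch.no_grad():
        p = F.softmax(head(pool_z), dim=-1)
        entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy.topk(k).indices

def ausds_select(head, labeled_z, pool_z, pool_sentences, k=32, eps=0.5):
    """AUSDS-style selection: perturb labeled latent points toward the
    decision boundary with an FGSM-style attack, then project the
    perturbed (continuous) points back to discrete space by nearest-
    neighbour search over precomputed pool embeddings."""
    z = labeled_z.clone().requires_grad_(True)
    logits = head(z)
    # Attack the model's own predictions so the gradient step pushes
    # each latent point toward the nearest decision boundary.
    loss = F.cross_entropy(logits, logits.argmax(dim=-1))
    (grad,) = torch.autograd.grad(loss, z)
    z_adv = z + eps * grad.sign()
    # Discrete projection: cosine nearest neighbours among pool sentences.
    sims = F.normalize(z_adv, dim=-1) @ F.normalize(pool_z, dim=-1).T
    picked = torch.unique(sims.argmax(dim=-1))[:k]
    return [pool_sentences[i] for i in picked.tolist()]
```

Here `labeled_z` and `pool_z` are sentence embeddings from a frozen pre-trained language model. Because `pool_z` can be computed once and cached, only the small labeled batch is pushed through the model each round, which is one plausible reading of the reported 10x speedup over full-pool uncertainty sampling.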
Related papers
- XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvements over nine strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z)
- On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework, which induces different responses between normal and adversarial samples to UAPs.
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- Deep Active Learning with Contrastive Learning Under Realistic Data Pool Assumptions [2.578242050187029]
Active learning aims to identify the most informative data from an unlabeled data pool that enables a model to reach the desired accuracy rapidly.
Most existing active learning methods have been evaluated in an ideal setting where only samples relevant to the target task exist in an unlabeled data pool.
We introduce new active learning benchmarks that include ambiguous, task-irrelevant out-of-distribution samples as well as in-distribution samples.
arXiv Detail & Related papers (2023-03-25T10:46:10Z)
- Forgetful Active Learning with Switch Events: Efficient Sampling for Out-of-Distribution Data [13.800680101300756]
In practice, fully trained neural networks are exposed at random to out-of-distribution (OOD) inputs.
We introduce forgetful active learning with switch events (FALSE) - a novel active learning protocol for out-of-distribution active learning.
We report up to 4.5% accuracy improvements in over 270 experiments.
arXiv Detail & Related papers (2023-01-12T16:03:14Z)
- Temporal Output Discrepancy for Loss Estimation-based Active Learning [65.93767110342502]
We present a novel deep active learning approach that queries the oracle for data annotation when an unlabeled sample is believed to incur a high loss (see the hedged sketch after this list).
Our approach outperforms state-of-the-art active learning methods on image classification and semantic segmentation tasks.
arXiv Detail & Related papers (2022-12-20T19:29:37Z)
- ALLSH: Active Learning Guided by Local Sensitivity and Hardness [98.61023158378407]
We propose to retrieve unlabeled samples with a local sensitivity and hardness-aware acquisition function.
Our method achieves consistent gains over the commonly used active learning strategies in various classification tasks.
arXiv Detail & Related papers (2022-05-10T15:39:11Z)
- Semi-Supervised Active Learning with Temporal Output Discrepancy [42.01906895756629]
We present a novel deep active learning approach that queries the oracle for data annotation when an unlabeled sample is believed to incur a high loss.
Our approach outperforms state-of-the-art active learning methods on image classification and semantic segmentation tasks.
arXiv Detail & Related papers (2021-07-29T16:25:56Z)
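The two Temporal Output Discrepancy entries above both query the oracle when an unlabeled sample is believed to incur a high loss. As referenced in the first of those entries, the following is a hedged sketch of one common way to estimate loss without labels: use the discrepancy between the outputs of two snapshots of the same model, taken at different optimization steps, as a per-sample loss proxy. The snapshot pairing and the L2 distance here are assumptions, not the papers' exact criterion.

```python
# Hedged sketch: snapshot choice and distance metric are assumptions.
import torch
import torch.nn.functional as F

def output_discrepancy(model_now, model_prev, pool_x):
    """Label-free loss proxy: distance between the softmax outputs of
    two snapshots of the same model from different training steps. A
    large discrepancy suggests the model is still unsettled on the
    sample, i.e. its loss is likely high."""
    with torch.no_grad():
        p_now = F.softmax(model_now(pool_x), dim=-1)
        p_prev = F.softmax(model_prev(pool_x), dim=-1)
    return (p_now - p_prev).norm(p=2, dim=-1)  # one score per sample

def select_for_annotation(model_now, model_prev, pool_x, k=32):
    # Query the oracle for the k samples with the highest estimated loss.
    return output_discrepancy(model_now, model_prev, pool_x).topk(k).indices
```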
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.