ALLSH: Active Learning Guided by Local Sensitivity and Hardness
- URL: http://arxiv.org/abs/2205.04980v1
- Date: Tue, 10 May 2022 15:39:11 GMT
- Title: ALLSH: Active Learning Guided by Local Sensitivity and Hardness
- Authors: Shujian Zhang, Chengyue Gong, Xingchao Liu, Pengcheng He, Weizhu Chen,
Mingyuan Zhou
- Abstract summary: We propose to retrieve unlabeled samples with a local sensitivity and hardness-aware acquisition function.
Our method achieves consistent gains over commonly used active learning strategies in various classification tasks.
- Score: 98.61023158378407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Active learning, which effectively collects informative unlabeled data for
annotation, reduces the demand for labeled data. In this work, we propose to
retrieve unlabeled samples with a local sensitivity and hardness-aware
acquisition function. The proposed method generates data copies through local
perturbations and selects data points whose predictive likelihoods diverge the
most from their copies. We further strengthen the acquisition function by
injecting the worst-case (most divergent) perturbation. Our method achieves
consistent gains over commonly used active learning strategies in various
classification tasks. Furthermore, we observe consistent improvements over the
baselines in a study of prompt selection for prompt-based few-shot learning.
These experiments demonstrate that our acquisition guided by local sensitivity
and hardness can be effective and beneficial for many NLP tasks.
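A minimal sketch of this acquisition rule, assuming a stand-in linear classifier and a generic feature-space perturbation in place of the paper's text perturbations (all names below are hypothetical, not the authors' implementation):

```python
# Rank unlabeled samples by how much the model's predictive distribution
# diverges between each input and locally perturbed copies of it.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 3))          # stand-in classifier weights

def predict_probs(x):
    # Stand-in classifier: softmax over a fixed linear projection.
    logits = x @ W
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def perturb(x, scale=0.1):
    # Stand-in for the paper's local perturbations (e.g., paraphrasing).
    return x + scale * rng.standard_normal(x.shape)

def kl(p, q, eps=1e-12):
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def acquire(pool, k=8, n_copies=4):
    p = predict_probs(pool)
    # "Select worst": keep the most divergent copy per sample.
    scores = np.max(
        [kl(p, predict_probs(perturb(pool))) for _ in range(n_copies)], axis=0
    )
    return np.argsort(-scores)[:k]        # the k most locally sensitive samples

pool = rng.standard_normal((100, 16))     # toy unlabeled pool
print(acquire(pool))
```

Ranking by the largest divergence across copies mirrors the worst-case idea: a sample is deemed informative when even a small local change flips the model's predictive distribution.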
Related papers
- Active Prompt Learning with Vision-Language Model Priors [9.173468790066956]
We introduce a class-guided clustering that leverages the pre-trained image and text encoders of vision-language models.
We propose a budget-saving selective querying based on adaptive class-wise thresholds.
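A rough sketch of the two ideas in this summary, with random vectors standing in for the VLM's image and text embeddings (hypothetical, not the authors' code):

```python
# (1) Cluster unlabeled image embeddings guided by class text embeddings,
# (2) query only samples below an adaptive per-class similarity threshold.
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((200, 32))      # stand-in image embeddings
txt = rng.standard_normal((5, 32))        # stand-in class text embeddings
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt /= np.linalg.norm(txt, axis=1, keepdims=True)

sim = img @ txt.T                         # cosine similarity to each class
cls = sim.argmax(axis=1)                  # class-guided cluster assignment
best = sim.max(axis=1)

to_query = []
for c in range(txt.shape[0]):
    idx = np.where(cls == c)[0]
    thr = np.median(best[idx])            # adaptive class-wise threshold
    to_query.extend(idx[best[idx] < thr]) # skip already-confident members
print(len(to_query), "samples selected for annotation")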
arXiv Detail & Related papers (2024-11-23T02:34:33Z)
- Semi-Supervised Variational Adversarial Active Learning via Learning to Rank and Agreement-Based Pseudo Labeling [6.771578432805963]
Active learning aims to reduce the labor involved in data labeling by automating the selection of unlabeled samples for annotation.
We introduce novel techniques that significantly improve the use of abundant unlabeled data during training.
We demonstrate the superior performance of our approach over the state of the art on various image classification and segmentation benchmark datasets.
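One way to exploit unlabeled data, consistent with the agreement-based pseudo-labeling named in the title, is sketched below with two stand-in predictors (hypothetical, not the paper's implementation):

```python
# An unlabeled sample receives a pseudo-label only when two independently
# trained predictors agree on its class.
import numpy as np

rng = np.random.default_rng(0)

def make_model(dim=16, classes=4):
    W = rng.standard_normal((dim, classes))   # stand-in trained weights
    return lambda x: (x @ W).argmax(axis=-1)

model_a, model_b = make_model(), make_model()
pool = rng.standard_normal((500, 16))         # unlabeled pool

pred_a, pred_b = model_a(pool), model_b(pool)
agree = pred_a == pred_b
pseudo_x, pseudo_y = pool[agree], pred_a[agree]  # added to the training set
print(f"pseudo-labeled {len(pseudo_y)} of {len(pool)} samples")
```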
arXiv Detail & Related papers (2024-08-23T00:35:07Z)
- XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
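The selection rule this summary suggests can be read as "annotate what the model explains worst". A heavily simplified sketch, where `explanation_score` is a hypothetical stand-in for scoring a generated rationale (not the authors' code):

```python
# Rank unlabeled samples by the plausibility of the classifier's own
# explanation; send the least-explainable ones to annotators.
import numpy as np

rng = np.random.default_rng(0)

def explanation_score(x):
    # Stand-in: a real system would generate a rationale for x and score
    # it, e.g., by length-normalized log-likelihood.
    return float(np.tanh(x.sum()))

pool = rng.standard_normal((50, 8))
scores = np.array([explanation_score(x) for x in pool])
query = np.argsort(scores)[:5]        # least-explainable samples first
print(query)
```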
arXiv Detail & Related papers (2023-10-09T08:07:04Z)
- A Matter of Annotation: An Empirical Study on In Situ and Self-Recall Activity Annotations from Wearable Sensors [56.554277096170246]
We present an empirical study that evaluates and contrasts four commonly employed annotation methods in user studies focused on in-the-wild data collection.
The study covers user-driven, in situ annotations, where participants annotate their activities during the recording process, and recall methods, where participants retrospectively annotate their data at the end of each day. In both settings, participants were free to select their own set of activity classes and corresponding labels.
arXiv Detail & Related papers (2023-05-15T16:02:56Z)
- MuRAL: Multi-Scale Region-based Active Learning for Object Detection [20.478741635006116]
We propose a novel approach called Multi-scale Region-based Active Learning (MuRAL) for object detection.
MuRAL identifies informative regions of various scales to reduce annotation costs for well-learned objects.
Our proposed method surpasses all existing coarse-grained and fine-grained baselines on Cityscapes and MS COCO datasets.
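A simplified sketch of region-level selection in the spirit of this summary, using a random per-pixel probability map as a stand-in for detector outputs (hypothetical, not the authors' code):

```python
# Score regions at several scales by mean predictive entropy and annotate
# only the top-scoring regions, rather than whole images.
import numpy as np

rng = np.random.default_rng(0)
H = W = 64
probs = rng.uniform(0.05, 0.95, size=(H, W))   # stand-in foreground probs
entropy = -(probs * np.log(probs) + (1 - probs) * np.log(1 - probs))

candidates = []
for size in (8, 16, 32):                       # multiple region scales
    for y in range(0, H, size):
        for x in range(0, W, size):
            score = entropy[y:y + size, x:x + size].mean()
            candidates.append((score, y, x, size))

candidates.sort(reverse=True)
print("regions to annotate:", candidates[:3])  # most uncertain regions
```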
arXiv Detail & Related papers (2023-03-29T12:52:27Z)
- Temporal Output Discrepancy for Loss Estimation-based Active Learning [65.93767110342502]
We present a novel deep active learning approach that queries the oracle for annotation when an unlabeled sample is believed to incur a high loss.
Our approach outperforms state-of-the-art active learning methods on image classification and semantic segmentation tasks.
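A minimal sketch of the loss-estimation idea named in the title: the change in a model's outputs between two optimization steps serves as a proxy for sample loss (stand-in linear snapshots; hypothetical, not the paper's code):

```python
# Score each unlabeled sample by the discrepancy between the outputs of
# two model snapshots; query the highest-scoring samples.
import numpy as np

rng = np.random.default_rng(0)
W_t  = rng.standard_normal((16, 4))                 # output layer at step t
W_t2 = W_t + 0.05 * rng.standard_normal((16, 4))    # after further training

pool = rng.standard_normal((300, 16))
discrepancy = np.linalg.norm(pool @ W_t2 - pool @ W_t, axis=1)
query = np.argsort(-discrepancy)[:10]   # believed to incur the highest loss
print(query)
```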
arXiv Detail & Related papers (2022-12-20T19:29:37Z)
- SURF: Semi-supervised Reward Learning with Data Augmentation for Feedback-efficient Preference-based Reinforcement Learning [168.89470249446023]
We present SURF, a semi-supervised reward learning framework that utilizes a large amount of unlabeled samples with data augmentation.
In order to leverage unlabeled samples for reward learning, we infer pseudo-labels of the unlabeled samples based on the confidence of the preference predictor.
Our experiments demonstrate that our approach significantly improves the feedback efficiency of preference-based methods on a variety of locomotion and robotic manipulation tasks.
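The confidence-filtered pseudo-labeling step can be sketched as follows, with a stand-in preference predictor (hypothetical, not the SURF implementation):

```python
# Unlabeled segment pairs get a pseudo preference label only when the
# current preference predictor is sufficiently confident.
import numpy as np

rng = np.random.default_rng(0)

def preference_prob(seg_a, seg_b):
    # Stand-in predictor: probability that segment A is preferred over B.
    return 1.0 / (1.0 + np.exp(-(seg_a.sum() - seg_b.sum())))

pairs = [(rng.standard_normal(8), rng.standard_normal(8)) for _ in range(100)]
threshold = 0.9
pseudo = []
for a, b in pairs:
    p = preference_prob(a, b)
    if p > threshold or p < 1 - threshold:     # keep only confident pairs
        pseudo.append((a, b, int(p > 0.5)))    # pseudo preference label
print(f"kept {len(pseudo)} of {len(pairs)} pairs")
```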
arXiv Detail & Related papers (2022-03-18T16:50:38Z)
- Ask-n-Learn: Active Learning via Reliable Gradient Representations for Image Classification [29.43017692274488]
Deep predictive models rely on human supervision in the form of labeled training data.
We propose Ask-n-Learn, an active learning approach based on gradient embeddings obtained using the pseudo-labels estimated in each iteration of the algorithm.
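A minimal sketch of gradient embeddings built from pseudo-labels, in the spirit of this summary (stand-in last layer; hypothetical, not the authors' code):

```python
# For each unlabeled sample, take the predicted class as a pseudo-label
# and use the resulting last-layer cross-entropy gradient as its embedding.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 4))           # stand-in last layer

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

pool = rng.standard_normal((200, 16))
p = softmax(pool @ W)
pseudo = p.argmax(axis=1)                  # pseudo-labels from current model
onehot = np.eye(4)[pseudo]
# Cross-entropy gradient wrt last-layer weights: outer(p - y, features)
g = (p - onehot)[:, :, None] * pool[:, None, :]
emb = g.reshape(len(pool), -1)             # gradient embedding per sample
scores = np.linalg.norm(emb, axis=1)       # e.g., largest-gradient samples
print(np.argsort(-scores)[:10])
```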
arXiv Detail & Related papers (2020-09-30T05:19:56Z)
- Active Sentence Learning by Adversarial Uncertainty Sampling in Discrete Space [56.22272513894306]
Active learning for sentence understanding aims at discovering informative unlabeled data for annotation.
We argue that the typical uncertainty sampling method for active learning is time-consuming and can hardly work in real time.
We propose adversarial uncertainty sampling in discrete space (AUSDS) to retrieve informative unlabeled samples more efficiently.
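A compact sketch of the core move suggested by the title: take a gradient-based adversarial step in embedding space, then project back to the nearest discrete token (stand-in embedding table and gradient; hypothetical, not the AUSDS implementation):

```python
# Perturb a token embedding adversarially, then snap it back to the
# vocabulary so the perturbed point corresponds to a real discrete token.
import numpy as np

rng = np.random.default_rng(0)
vocab = rng.standard_normal((1000, 32))    # stand-in token embedding table

def loss_grad(e):
    # Stand-in for the gradient of the model loss wrt the token embedding.
    return np.sign(e)                       # FGSM-style direction

token_emb = vocab[42]
adv = token_emb + 0.5 * loss_grad(token_emb)             # continuous step
nearest = np.argmin(np.linalg.norm(vocab - adv, axis=1)) # discrete projection
print("perturbed token id:", nearest)
```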
arXiv Detail & Related papers (2020-04-17T03:12:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.