Reducing Confusion in Active Learning for Part-Of-Speech Tagging
- URL: http://arxiv.org/abs/2011.00767v2
- Date: Sat, 21 Nov 2020 01:20:52 GMT
- Title: Reducing Confusion in Active Learning for Part-Of-Speech Tagging
- Authors: Aditi Chaudhary, Antonios Anastasopoulos, Zaid Sheikh, Graham Neubig
- Abstract summary: Active learning (AL) uses a data selection algorithm to select useful training samples to minimize annotation cost.
We study the problem of selecting instances which maximally reduce the confusion between particular pairs of output tags.
Our proposed AL strategy outperforms other AL strategies by a significant margin.
- Score: 100.08742107682264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning (AL) uses a data selection algorithm to select useful
training samples to minimize annotation cost. This is now an essential tool for
building low-resource syntactic analyzers such as part-of-speech (POS) taggers.
Existing AL heuristics are generally designed on the principle of selecting
uncertain yet representative training instances, where annotating these
instances may reduce a large number of errors. However, in an empirical study
across six typologically diverse languages (German, Swedish, Galician, North
Sami, Persian, and Ukrainian), we found the surprising result that even in an
oracle scenario where we know the true uncertainty of predictions, these
current heuristics are far from optimal. Based on this analysis, we pose the
problem of AL as selecting instances which maximally reduce the confusion
between particular pairs of output tags. Extensive experimentation on the
aforementioned languages shows that our proposed AL strategy outperforms other
AL strategies by a significant margin. We also present auxiliary results
demonstrating the importance of proper calibration of models, which we ensure
through cross-view training, and analysis demonstrating how our proposed
strategy selects examples that more closely follow the oracle data
distribution.
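The abstract describes the selection criterion only at a high level. The following is a minimal, hypothetical sketch of how confusion-reducing selection over pairs of output tags could be scored, assuming softmax tag probabilities from a trained tagger are available; the function names (confusion_score, select_batch) and the margin-based pair weighting are illustrative assumptions, not the paper's exact method.
```python
# Minimal sketch: score each unlabeled sentence by how much probability mass
# its tokens place on a confusable (top-1, top-2) tag pair, then pick the
# highest-scoring sentences for annotation. Illustrative only.
from collections import Counter

import numpy as np


def confusion_score(tag_probs: np.ndarray) -> tuple[float, Counter]:
    """tag_probs: (num_tokens, num_tags) softmax outputs for one sentence."""
    top2 = np.argsort(tag_probs, axis=1)[:, -2:]        # per-token top-2 tag ids
    second_best, best = top2[:, 0], top2[:, 1]
    rows = np.arange(len(best))
    # A small margin between the best and second-best tag signals confusion.
    margin = tag_probs[rows, best] - tag_probs[rows, second_best]
    pair_mass = Counter()
    for b, s, m in zip(best, second_best, margin):
        pair_mass[(int(b), int(s))] += float(1.0 - m)   # weight the confusable tag pair
    return sum(pair_mass.values()), pair_mass


def select_batch(sentence_probs: list[np.ndarray], k: int) -> list[int]:
    """Return indices of the k sentences with the highest total confusion score."""
    scores = [confusion_score(p)[0] for p in sentence_probs]
    return list(np.argsort(scores)[::-1][:k])
```
The returned per-pair Counter also indicates which tag pairs (e.g., NOUN/PROPN) dominate the model's confusion, which is the kind of signal the proposed strategy targets.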
Related papers
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve model alignment across different task scenarios.
We implement UAL in a simple fashion: adaptively setting the label-smoothing value during training according to the uncertainty of individual samples (a minimal sketch of this idea appears after this list).
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
arXiv Detail & Related papers (2024-06-07T11:37:45Z) - Experimental Design for Active Transductive Inference in Large Language Models [18.2671641610825]
We use active learning for adaptive prompt design and call it Active In-context Prompt Design (AIPD).
We design the LLM prompt by adaptively choosing few-shot examples from a training set to optimize performance on a test set.
We propose two algorithms, GO and SAL, which differ in how the few-shot examples are chosen.
arXiv Detail & Related papers (2024-04-12T23:27:46Z) - Active Learning for Natural Language Generation [17.14395724301382]
We present the first systematic study of active learning for Natural Language Generation.
Our results indicate that the performance of existing AL strategies is inconsistent.
We highlight some notable differences between the classification and generation scenarios, and analyze the selection behaviors of existing AL strategies.
arXiv Detail & Related papers (2023-05-24T11:27:53Z) - Active Learning Principles for In-Context Learning with Large Language Models [65.09970281795769]
This paper investigates how Active Learning algorithms can serve as effective demonstration selection methods for in-context learning.
We show that in-context example selection through AL prioritizes high-quality examples that exhibit low uncertainty and bear similarity to the test examples.
arXiv Detail & Related papers (2023-05-23T17:16:04Z) - ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z) - Active Learning for Abstractive Text Summarization [50.79416783266641]
We propose the first effective query strategy for Active Learning in abstractive text summarization.
We show that using our strategy in AL annotation helps to improve the model performance in terms of ROUGE and consistency scores.
arXiv Detail & Related papers (2023-01-09T10:33:14Z) - Learning New Tasks from a Few Examples with Soft-Label Prototypes [18.363177410917597]
We propose a novel few-shot learning approach based on soft-label prototypes (SLPs).
We focus on learning previously unseen NLP tasks from very few examples (4, 8, 16) per class.
We experimentally demonstrate that our approach achieves superior performance on the majority of tested tasks in this data-lean setting.
arXiv Detail & Related papers (2022-10-31T16:06:48Z) - An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z) - Active Learning at the ImageNet Scale [43.595076693347835]
In this work, we study a combination of active learning (AL) and self-supervised pretraining (SSP) on ImageNet.
We find that performance on small toy datasets is not representative of performance on ImageNet due to the class-imbalanced samples selected by an active learner.
We propose Balanced Selection (BASE), a simple, scalable AL algorithm that outperforms random sampling consistently.
arXiv Detail & Related papers (2021-11-25T02:48:51Z)
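As referenced in the UAL entry above, here is a minimal sketch, assuming PyTorch, of per-sample label smoothing scaled by an uncertainty estimate (normalized predictive entropy); the function name ual_loss and the entropy proxy are illustrative assumptions rather than the paper's exact formulation.
```python
# Minimal sketch: scale the label-smoothing value per training sample by a
# per-sample uncertainty proxy, then compute smoothed cross-entropy.
import torch
import torch.nn.functional as F


def ual_loss(logits: torch.Tensor, targets: torch.Tensor,
             max_smoothing: float = 0.2) -> torch.Tensor:
    """logits: (batch, num_classes); targets: (batch,) class indices."""
    num_classes = logits.size(-1)
    probs = logits.softmax(dim=-1)
    # Normalized predictive entropy in [0, 1] as a per-sample uncertainty proxy.
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    uncertainty = entropy / torch.log(torch.tensor(float(num_classes)))
    eps = max_smoothing * uncertainty                   # per-sample smoothing value
    one_hot = F.one_hot(targets, num_classes).float()
    smoothed = (1.0 - eps.unsqueeze(-1)) * one_hot + eps.unsqueeze(-1) / num_classes
    return -(smoothed * logits.log_softmax(dim=-1)).sum(dim=-1).mean()
```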