Revisiting Sample Size Determination in Natural Language Understanding
- URL: http://arxiv.org/abs/2307.00374v1
- Date: Sat, 1 Jul 2023 16:08:52 GMT
- Title: Revisiting Sample Size Determination in Natural Language Understanding
- Authors: Ernie Chang, Muhammad Hassan Rashid, Pin-Jie Lin, Changsheng Zhao,
Vera Demberg, Yangyang Shi, Vikas Chandra
- Abstract summary: Knowing exactly how many data points need to be labeled to achieve a certain model performance is a beneficial step towards reducing the overall budgets for annotation.
We derived a simple yet effective approach to predict the maximum achievable model performance based on a small number of training samples.
- Score: 18.637079595450366
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowing exactly how many data points need to be labeled to achieve a certain
model performance is a hugely beneficial step towards reducing the overall
budgets for annotation. It pertains to both active learning and traditional
data annotation, and is particularly beneficial for low resource scenarios.
Nevertheless, it remains a largely under-explored area of research in NLP. We
therefore explored various techniques for estimating the training sample size
necessary to achieve a targeted performance value. We derived a simple yet
effective approach to predict the maximum achievable model performance based on
a small number of training samples, which serves as an early indicator of data
quality and of the required sample size during data annotation. We performed
ablation studies on four language understanding tasks and showed that the
proposed approach allows us to forecast model performance within a small margin
of mean absolute error (~0.9%) using only 10% of the data.
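The underlying idea, fitting a learning curve to accuracies measured on small training subsets and extrapolating it to estimate the attainable performance ceiling, can be sketched as follows. The inverse power-law form, the subset sizes, and the accuracy values below are illustrative assumptions rather than the exact functions or numbers used in the paper.
```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of learning-curve extrapolation for sample size determination.
# Fit a saturating curve to dev accuracies measured on small training
# subsets, then forecast performance at larger sample sizes. The inverse
# power law below is one common choice, assumed here for illustration.
def inverse_power_law(n, a, b, c):
    # Accuracy approaches the asymptote `a` as the sample size n grows.
    return a - b * np.power(n, -c)

# Hypothetical (subset size, dev accuracy) measurements from early annotation.
sizes = np.array([100, 200, 400, 800, 1600], dtype=float)
accs = np.array([0.62, 0.68, 0.73, 0.76, 0.78])

params, _ = curve_fit(
    inverse_power_law, sizes, accs,
    p0=[0.85, 1.0, 0.5],  # initial guess: ceiling, scale, decay
    bounds=([0.0, 0.0, 0.0], [1.0, np.inf, np.inf]),
)

ceiling = params[0]
forecast = inverse_power_law(16_000, *params)
print(f"Estimated performance ceiling: {ceiling:.3f}")
print(f"Forecast accuracy at 16k training samples: {forecast:.3f}")
```
Comparing such a forecast against the targeted performance value then indicates whether further annotation is likely to pay off.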
Related papers
- An Information Theoretic Approach to Machine Unlearning [45.600917449314444]
A key challenge in unlearning is forgetting the necessary data in a timely manner while preserving model performance.
In this work, we address the zero-shot unlearning scenario, whereby an unlearning algorithm must be able to remove data given only a trained model and the data to be forgotten.
We derive a simple but principled zero-shot unlearning method based on the geometry of the model.
arXiv Detail & Related papers (2024-02-02T13:33:30Z)
- Zero-shot Retrieval: Augmenting Pre-trained Models with Search Engines [83.65380507372483]
Large pre-trained models can dramatically reduce the amount of task-specific data required to solve a problem, but they often fail to capture domain-specific nuances out of the box.
This paper shows how to leverage recent advances in NLP and multi-modal learning to augment a pre-trained model with search engine retrieval.
arXiv Detail & Related papers (2023-11-29T05:33:28Z)
- SPEC: Summary Preference Decomposition for Low-Resource Abstractive Summarization [21.037841262371355]
We present a framework to transfer few-shot learning processes from source corpora to the target corpus.
Our methods achieve state-of-the-art performance on six diverse corpora with 30.11%/33.95%/27.51% and 26.74%/31.14%/24.48% average improvements on ROUGE-1/2/L under 10- and 100-example settings.
arXiv Detail & Related papers (2023-03-24T14:07:03Z)
- Selective In-Context Data Augmentation for Intent Detection using Pointwise V-Information [100.03188187735624]
We introduce a novel approach based on PLMs and pointwise V-information (PVI), a metric that can measure the usefulness of a datapoint for training a model.
Our method first fine-tunes a PLM on a small seed of training data and then synthesizes new datapoints - utterances that correspond to given intents.
Our method is thus able to leverage the expressive power of large language models to produce diverse training data (a minimal PVI filtering sketch appears after this list).
arXiv Detail & Related papers (2023-02-10T07:37:49Z)
- Temporal Output Discrepancy for Loss Estimation-based Active Learning [65.93767110342502]
We present a novel deep active learning approach that queries the oracle for data annotation when an unlabeled sample is believed to incur a high loss.
Our approach achieves superior performance compared to state-of-the-art active learning methods on image classification and semantic segmentation tasks.
arXiv Detail & Related papers (2022-12-20T19:29:37Z)
- Semi-Supervised Active Learning with Temporal Output Discrepancy [42.01906895756629]
We present a novel deep active learning approach that queries the oracle for data annotation when an unlabeled sample is believed to incur a high loss.
Our approach achieves superior performance compared to state-of-the-art active learning methods on image classification and semantic segmentation tasks.
arXiv Detail & Related papers (2021-07-29T16:25:56Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for the model under test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- Identifying Wrongly Predicted Samples: A Method for Active Learning [6.976600214375139]
We propose a simple sample selection criterion that moves beyond uncertainty.
We show state-of-the-art results and better rates at identifying wrongly predicted samples.
arXiv Detail & Related papers (2020-10-14T09:00:42Z)
- Uncertainty-aware Self-training for Text Classification with Few Labels [54.13279574908808]
We study self-training as one of the earliest semi-supervised learning approaches to reduce the annotation bottleneck.
We propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network.
We show that our methods, using only 20-30 labeled samples per class per task for training and validation, can perform within 3% of fully supervised pre-trained language models.
arXiv Detail & Related papers (2020-06-27T08:13:58Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To keep training on the enlarged dataset efficient, we propose to apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
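As a companion to the Selective In-Context Data Augmentation entry above, the following is a minimal sketch of pointwise V-information (PVI) filtering for synthesized data. The toy classifier, the class-prior stand-in for the null-input model, and the zero threshold are assumptions made for illustration; none of these specifics are taken from that paper.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of PVI-based filtering for synthesized training data:
# PVI(x -> y) = log2 p(y | x) - log2 p(y | null), where p(y | x) comes from
# a model trained on (input, label) pairs and p(y | null) from a model that
# sees labels only. Datapoints with high PVI are kept as useful additions.

rng = np.random.default_rng(0)

# Hypothetical seed data: 2-D features standing in for utterance embeddings.
X_seed = rng.normal(size=(200, 2)) + np.repeat([[0.0, 0.0], [2.0, 2.0]], 100, axis=0)
y_seed = np.repeat([0, 1], 100)

# "With input" model: trained on the seed data.
clf = LogisticRegression().fit(X_seed, y_seed)

# "Null input" model approximated here by the class prior p(y | null).
prior = np.bincount(y_seed) / len(y_seed)

def pvi(x, y):
    p_y_given_x = clf.predict_proba(x.reshape(1, -1))[0, y]
    return np.log2(p_y_given_x) - np.log2(prior[y])

# Hypothetical synthesized candidates with their intended labels.
X_new = rng.normal(size=(50, 2)) + np.repeat([[0.0, 0.0], [2.0, 2.0]], 25, axis=0)
y_new = np.repeat([0, 1], 25)

scores = np.array([pvi(x, y) for x, y in zip(X_new, y_new)])
keep = scores > 0.0  # assumed threshold: retain only informative datapoints
print(f"Kept {keep.sum()} of {len(X_new)} synthesized datapoints")
```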
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.