Unveiling the Multi-Annotation Process: Examining the Influence of
Annotation Quantity and Instance Difficulty on Model Performance
- URL: http://arxiv.org/abs/2310.14572v1
- Date: Mon, 23 Oct 2023 05:12:41 GMT
- Title: Unveiling the Multi-Annotation Process: Examining the Influence of
Annotation Quantity and Instance Difficulty on Model Performance
- Authors: Pritam Kadasi and Mayank Singh
- Abstract summary: We show how performance scores can vary when a dataset expands from a single annotation per instance to multiple annotations.
We propose a novel multi-annotator simulation process to generate datasets with varying annotation budgets.
- Score: 1.7343894615131372
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The NLP community has long advocated for the construction of multi-annotator
datasets to better capture the nuances of language interpretation,
subjectivity, and ambiguity. This paper conducts a retrospective study to show
how performance scores can vary when a dataset expands from a single annotation
per instance to multiple annotations. We propose a novel multi-annotator
simulation process to generate datasets with varying annotation budgets. We
show that similar datasets with the same annotation budget can lead to varying
performance gains. Our findings challenge the popular belief that models
trained on multi-annotation examples always lead to better performance than
models trained on single or few-annotation examples.
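The abstract does not spell out the simulation mechanics, but the core idea can be illustrated with a minimal sketch: spread a fixed annotation budget over instances, simulate noisy annotators, and aggregate by majority vote. The noise rates, budget split, and majority-vote aggregation below are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of a multi-annotator simulation under a fixed annotation
# budget (illustrative assumptions, not the paper's exact procedure):
# binary labels, annotators that flip the gold label with a per-annotator
# noise rate, and majority-vote aggregation.
import random
from collections import Counter

def simulate_annotations(gold_labels, budget, noise_rates, seed=0):
    """Spread `budget` annotations over instances and aggregate by majority."""
    rng = random.Random(seed)
    n = len(gold_labels)
    # Distribute the budget as evenly as possible across instances.
    per_instance = [budget // n + (1 if i < budget % n else 0) for i in range(n)]
    aggregated = []
    for gold, k in zip(gold_labels, per_instance):
        votes = []
        for _ in range(k):
            noise = rng.choice(noise_rates)          # sample an annotator
            votes.append(1 - gold if rng.random() < noise else gold)
        counts = Counter(votes)
        top = max(counts.values())
        # Majority vote, ties broken at random.
        aggregated.append(rng.choice([c for c, v in counts.items() if v == top]))
    return aggregated

# Two simulated datasets with the same budget (2 annotations per instance on
# average) can still differ, which is the kind of variation the paper studies.
labels = simulate_annotations([0, 1, 1, 0] * 25, budget=200, noise_rates=[0.1, 0.3])
```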
Related papers
- Mitigating Biases to Embrace Diversity: A Comprehensive Annotation Benchmark for Toxic Language [0.0]
This study introduces a prescriptive annotation benchmark grounded in humanities research to ensure consistent, unbiased labeling of offensive language.
We contribute two newly annotated datasets that achieve higher inter-annotator agreement between human and large language model (LLM) annotations.
arXiv Detail & Related papers (2024-10-17T08:10:24Z)
- Label-Efficient Model Selection for Text Generation [14.61636207880449]
We introduce DiffUse, a method to make an informed decision between candidate text generation models based on preference annotations.
In a series of experiments over hundreds of model pairs, we demonstrate that DiffUse can dramatically reduce the required number of annotations.
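DiffUse's actual selection strategy is described in the paper; as a generic stand-in, the sketch below uses plain sequential preference collection with early stopping, which already illustrates why a reliable decision can often be reached with far fewer annotations.

```python
# Generic stand-in, not DiffUse itself: decide between two generation
# models by collecting preference annotations one at a time and stopping
# early once one model holds a decisive lead. DiffUse selects which pairs
# to annotate far more cleverly; this only shows why a decision often
# needs a fraction of the annotations.
def select_model(get_preference, output_pairs, lead_to_stop=10):
    """`get_preference(pair)` returns 'A' or 'B' for one annotated pair."""
    wins = {"A": 0, "B": 0}
    for pair in output_pairs:
        wins[get_preference(pair)] += 1
        if abs(wins["A"] - wins["B"]) >= lead_to_stop:
            break  # the trailing model is very unlikely to catch up
    return max(wins, key=wins.get), wins
```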
arXiv Detail & Related papers (2024-02-12T18:54:02Z)
- CoAnnotating: Uncertainty-Guided Work Allocation between Human and Large Language Models for Data Annotation [94.59630161324013]
We propose CoAnnotating, a novel paradigm for Human-LLM co-annotation of unstructured texts at scale.
Our empirical study shows CoAnnotating to be an effective means of allocating annotation work; across different datasets it yields up to a 21% performance improvement over a random baseline.
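The paper's exact uncertainty estimators and allocation rule may differ from this; the following is a hedged sketch of the general recipe, assuming uncertainty is measured as the entropy of labels sampled from the LLM and the most uncertain instances are routed to humans.

```python
# Hedged sketch of uncertainty-guided allocation in the spirit of
# CoAnnotating (the paper's uncertainty measures and allocation rule may
# differ): score each instance by the entropy of labels sampled from the
# LLM, then route the most uncertain instances to human annotators.
import math
from collections import Counter

def label_entropy(sampled_labels):
    counts = Counter(sampled_labels)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def allocate_work(instances, sample_llm_labels, human_budget):
    """`sample_llm_labels(x)` returns a list of labels sampled from the LLM."""
    scored = sorted(((label_entropy(sample_llm_labels(x)), i)
                     for i, x in enumerate(instances)), reverse=True)
    to_human = {i for _, i in scored[:human_budget]}
    return ["human" if i in to_human else "llm" for i in range(len(instances))]
```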
arXiv Detail & Related papers (2023-10-24T08:56:49Z)
- Few-shot Text Classification with Dual Contrastive Consistency [31.141350717029358]
In this paper, we explore how to utilize a pre-trained language model for few-shot text classification.
We adopt supervised contrastive learning on the few labeled examples and consistency regularization on abundant unlabeled data.
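A minimal PyTorch sketch of how these two signals could be combined follows; it assumes a model returning (logits, embedding) and is not the paper's exact formulation or loss weighting.

```python
# Illustrative sketch: supervised contrastive loss on labeled batches plus
# consistency regularization (KL between two augmented views) on unlabeled
# batches. `model(x)` is assumed to return (logits, embedding).
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(feats, labels, temp=0.1):
    feats = F.normalize(feats, dim=-1)
    n = feats.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=feats.device)
    sim = (feats @ feats.T / temp).masked_fill(eye, float("-inf"))
    log_prob = (sim - sim.logsumexp(-1, keepdim=True)).masked_fill(eye, 0.0)
    pos = ((labels[None, :] == labels[:, None]) & ~eye).float()
    # Mean log-probability of each anchor's same-class positives.
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

def training_loss(model, labeled, labels, unlabeled_v1, unlabeled_v2, lam=1.0):
    logits, feats = model(labeled)              # assumed (logits, embedding)
    loss = F.cross_entropy(logits, labels)
    loss = loss + supervised_contrastive_loss(feats, labels)
    # Consistency regularization: predictions on two augmented views of the
    # same unlabeled text should agree.
    log_p1 = F.log_softmax(model(unlabeled_v1)[0], dim=-1)
    p2 = F.softmax(model(unlabeled_v2)[0], dim=-1)
    return loss + lam * F.kl_div(log_p1, p2, reduction="batchmean")
```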
arXiv Detail & Related papers (2022-09-29T19:26:23Z)
- Selective Annotation Makes Language Models Better Few-Shot Learners [97.07544941620367]
Large language models can perform in-context learning, where they learn a new task from a few task demonstrations.
This work examines the implications of in-context learning for the creation of datasets for new natural language tasks.
We propose an unsupervised, graph-based selective annotation method, vote-k, to select diverse, representative examples to annotate.
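Vote-k's full pipeline has additional stages, but its diversity-promoting graph step can be sketched as below, assuming precomputed sentence embeddings; the discount schedule is an illustrative choice, not the paper's exact parameterization.

```python
# Simplified sketch of graph-based selective annotation in the spirit of
# vote-k: build a k-nearest-neighbor graph over sentence embeddings and
# greedily pick examples whose neighborhoods are large but not already
# covered by earlier picks.
import numpy as np

def select_to_annotate(embeddings, budget, k=10, discount=10.0):
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = x @ x.T
    np.fill_diagonal(sim, -np.inf)
    neighbors = np.argsort(-sim, axis=1)[:, :k]   # k most similar per node
    selected, times_covered = [], np.zeros(len(x))
    for _ in range(budget):
        # Each example votes for its neighbors; votes from already-covered
        # regions are discounted so picks spread over the embedding space.
        scores = np.zeros(len(x))
        for i, nbrs in enumerate(neighbors):
            scores[nbrs] += discount ** -times_covered[i]
        scores[selected] = -np.inf                # never re-pick
        pick = int(np.argmax(scores))
        selected.append(pick)
        times_covered[neighbors[pick]] += 1
        times_covered[pick] += 1
    return selected
```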
arXiv Detail & Related papers (2022-09-05T14:01:15Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Multivariate Data Explanation by Jumping Emerging Patterns Visualization [78.6363825307044]
We present VAX (multiVariate dAta eXplanation), a new VA method to support the identification and visual interpretation of patterns in multivariate data sets.
Unlike existing similar approaches, VAX uses the concept of Jumping Emerging Patterns to identify and aggregate several diversified patterns, producing explanations through logical combinations of data variables.
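The core notion, an item combination whose support "jumps" from zero in one class to nonzero in another, can be sketched directly; VAX's pattern aggregation and visual encoding are beyond this toy version, which assumes instances are already discretized into sets of items.

```python
# Toy sketch of the core concept: an item combination is a Jumping
# Emerging Pattern (JEP) if it occurs in one class and never in the other.
from itertools import combinations

def jumping_emerging_patterns(instances, labels, max_size=2):
    """`instances` are sets of discrete items, e.g. {'age=high', 'bmi=low'}."""
    classes, jeps = set(labels), []
    all_items = sorted(set().union(*instances))
    for size in range(1, max_size + 1):
        for pattern in combinations(all_items, size):
            p = set(pattern)
            support = {c: sum(1 for inst, l in zip(instances, labels)
                              if l == c and p <= inst) for c in classes}
            present = [c for c, s in support.items() if s > 0]
            if len(present) == 1:   # support "jumps" from zero: a JEP
                jeps.append((pattern, present[0], support[present[0]]))
    return jeps
```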
arXiv Detail & Related papers (2021-06-21T13:49:44Z)
- UmBERTo-MTSA @ AcCompl-It: Improving Complexity and Acceptability Prediction with Multi-task Learning on Self-Supervised Annotations [0.0]
This work describes a self-supervised data augmentation approach used to improve learning models' performances when only a moderate amount of labeled data is available.
Neural language models are fine-tuned using this procedure in the context of the AcCompl-it shared task at EVALITA 2020.
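The paper's multi-task setup is richer than this, but the underlying self-supervised augmentation idea can be loosely approximated with plain self-training, sketched below with hypothetical `fit` and `predict` callables.

```python
# Loose approximation using plain self-training (the paper's multi-task
# setup is richer): fine-tune on the labeled seed set, label unlabeled
# text with the resulting model, and retrain on the union. `fit` and
# `predict` are hypothetical callables standing in for the fine-tuning
# and inference routines.
def self_train(fit, predict, labeled, unlabeled, rounds=2):
    model = fit(labeled)
    for _ in range(rounds):
        pseudo = list(zip(unlabeled, predict(model, unlabeled)))
        model = fit(labeled + pseudo)   # seed data plus pseudo-labeled data
    return model
```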
arXiv Detail & Related papers (2020-11-10T15:50:37Z)
- Towards Understanding Sample Variance in Visually Grounded Language Generation: Evaluations and Observations [67.4375210552593]
We design experiments to understand an important but often ignored problem in visually grounded language generation.
Given that humans have different utilities and visual attention, how will the sample variance in multi-reference datasets affect the models' performance?
We show that it is of paramount importance to report variance in experiments, and that human-generated references can vary drastically across datasets and tasks, revealing the nature of each task.
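The advocated reporting practice is easy to operationalize: score the same hypotheses against several reference sets (or seeds) and report the spread, not a single number. In the small sketch below, `score_fn` stands in for any metric such as BLEU or CIDEr.

```python
# Score against multiple reference sets and report mean and standard
# deviation; at least two reference sets are required for stdev.
from statistics import mean, stdev

def score_with_variance(score_fn, hypotheses, reference_sets):
    scores = [score_fn(hypotheses, refs) for refs in reference_sets]
    return mean(scores), stdev(scores)
```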
arXiv Detail & Related papers (2020-10-07T20:45:14Z)
- Joint Multi-Dimensional Model for Global and Time-Series Annotations [48.159050222769494]
Crowdsourcing is a popular approach to collect annotations for unlabeled data instances.
It involves collecting a large number of annotations from several, often naive and untrained, annotators for each data instance, which are then combined to estimate the ground truth.
Annotations are often collected along multiple correlated dimensions; most annotation fusion schemes, however, ignore this and model each dimension separately.
We propose a generative model for multi-dimensional annotation fusion that models the dimensions jointly, leading to more accurate ground-truth estimates.
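The paper's generative model is not specified in this summary; as a loose illustration of joint modeling, the sketch below assumes each annotation is ground truth plus an annotator bias plus noise correlated across dimensions, estimated with a few alternating updates.

```python
# Loose illustration of joint fusion, not the paper's exact generative
# model: estimate per-item truth and per-annotator bias by alternating
# updates, then recover the residual covariance across dimensions.
import numpy as np

def fuse(annotations):
    """annotations[a][i] is annotator a's D-dimensional rating of item i."""
    y = np.asarray(annotations, dtype=float)       # shape (A, N, D)
    truth = y.mean(axis=0)                         # init: per-item mean
    bias = np.zeros((y.shape[0], y.shape[2]))
    for _ in range(10):                            # alternating updates
        bias = (y - truth[None]).mean(axis=1)      # per-annotator offset
        truth = (y - bias[:, None]).mean(axis=0)   # debiased per-item mean
    # The "joint" part: residual covariance across dimensions, usable for
    # covariance-weighted estimates or calibrated confidence reporting.
    resid = (y - truth[None] - bias[:, None]).reshape(-1, y.shape[2])
    return truth, bias, np.cov(resid.T)
```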
arXiv Detail & Related papers (2020-05-06T20:08:46Z)