Active Learning for Coreference Resolution using Discrete Annotation
- URL: http://arxiv.org/abs/2004.13671v3
- Date: Tue, 19 May 2020 00:31:23 GMT
- Title: Active Learning for Coreference Resolution using Discrete Annotation
- Authors: Belinda Z. Li, Gabriel Stanovsky, Luke Zettlemoyer
- Abstract summary: We improve upon pairwise annotation for active learning in coreference resolution.
We ask annotators to identify mention antecedents if a presented mention pair is deemed not coreferent.
In experiments with existing benchmark coreference datasets, we show that the signal from this additional question leads to significant performance gains per human-annotation hour.
- Score: 76.36423696634584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We improve upon pairwise annotation for active learning in coreference
resolution, by asking annotators to identify mention antecedents if a presented
mention pair is deemed not coreferent. This simple modification, when combined
with a novel mention clustering algorithm for selecting which examples to
label, is much more efficient in terms of the performance obtained per
annotation budget. In experiments with existing benchmark coreference datasets,
we show that the signal from this additional question leads to significant
performance gains per human-annotation hour. Future work can use our annotation
protocol to effectively develop coreference models for new domains. Our code is
publicly available at
https://github.com/belindal/discrete-active-learning-coref .
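The protocol in the abstract can be sketched in code: the annotator is shown a candidate mention pair, answers a yes/no coreference question, and, on "no", is asked a follow-up question naming the true antecedent of the later mention. The sketch below simulates this query loop with gold cluster labels standing in for the annotator; all function and variable names are illustrative assumptions, not taken from the released code.

```python
# Hedged sketch of the discrete-annotation query described in the abstract.
# The "annotator" is simulated with gold cluster labels.

def discrete_annotation_query(pair, gold_clusters):
    """Simulate an annotator's answer for one mention pair.

    pair: (antecedent_candidate, mention) positions in the document,
          with mention occurring later than the candidate.
    gold_clusters: dict mapping mention position -> cluster id
                   (the annotator's knowledge, simulated here).
    Returns ("coreferent", None) if the pair is coreferent, otherwise
    ("not_coreferent", antecedent) where antecedent is the nearest
    earlier mention in the same cluster, or None if there is none.
    """
    cand, mention = pair
    cand_cluster = gold_clusters.get(cand)
    mention_cluster = gold_clusters.get(mention)
    if cand_cluster is not None and cand_cluster == mention_cluster:
        return "coreferent", None
    # Follow-up question: which earlier mention (if any) is the
    # true antecedent of `mention`?
    antecedent = None
    if mention_cluster is not None:
        for earlier in range(mention - 1, -1, -1):
            if gold_clusters.get(earlier) == mention_cluster:
                antecedent = earlier
                break
    return "not_coreferent", antecedent
```

For example, with clusters `{0: "A", 2: "A", 3: "B", 5: "A"}`, querying the pair `(3, 5)` yields a "not coreferent" answer plus the corrective signal that mention 2 is the true antecedent of mention 5 — the extra information the paper exploits per annotation.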
Related papers
- Vocabulary-Defined Semantics: Latent Space Clustering for Improving In-Context Learning [32.178931149612644]
In-context learning enables language models to adapt to downstream data or new tasks using a few samples as demonstrations within the prompts.
However, the performance of in-context learning can be unstable depending on the quality, format, or order of demonstrations.
We propose a novel approach, "vocabulary-defined semantics".
arXiv Detail & Related papers (2024-01-29T14:29:48Z)
- IDEAL: Influence-Driven Selective Annotations Empower In-Context Learners in Large Language Models [66.32043210237768]
This paper introduces an influence-driven selective annotation method.
It aims to minimize annotation costs while improving the quality of in-context examples.
Experiments confirm the superiority of the proposed method on various benchmarks.
arXiv Detail & Related papers (2023-10-16T22:53:54Z)
- Prefer to Classify: Improving Text Classifiers via Auxiliary Preference Learning [76.43827771613127]
In this paper, we investigate task-specific preferences between pairs of input texts as a new alternative way for such auxiliary data annotation.
We propose a novel multi-task learning framework, called prefer-to-classify (P2C), which can enjoy the cooperative effect of learning both the given classification task and the auxiliary preferences.
arXiv Detail & Related papers (2023-06-08T04:04:47Z)
- Extending an Event-type Ontology: Adding Verbs and Classes Using Fine-tuned LLMs Suggestions [0.0]
We have investigated the use of advanced machine learning methods for pre-annotating data for a lexical extension task.
We have examined the correlation of the automatic scores with the human annotation.
While the correlation turned out to be strong, its influence on the annotation proper is modest due to its near linearity.
arXiv Detail & Related papers (2023-06-03T14:57:47Z)
- Mention Annotations Alone Enable Efficient Domain Adaptation for Coreference Resolution [8.08448832546021]
We show that annotating mentions alone is nearly twice as fast as annotating full coreference chains.
Our approach facilitates annotation-efficient transfer and results in a 7-14% improvement in average F1 without increasing annotator time.
arXiv Detail & Related papers (2022-10-14T07:57:27Z)
- Annotation Error Detection: Analyzing the Past and Present for a More Coherent Future [63.99570204416711]
We reimplement 18 methods for detecting potential annotation errors and evaluate them on 9 English datasets.
We define a uniform evaluation setup including a new formalization of the annotation error detection task.
We release our datasets and implementations in an easy-to-use and open source software package.
arXiv Detail & Related papers (2022-06-05T22:31:45Z)
- Contrastive Test-Time Adaptation [83.73506803142693]
We propose a novel way to leverage self-supervised contrastive learning to facilitate target feature learning.
We produce pseudo labels online and refine them via soft voting among their nearest neighbors in the target feature space.
Our method, AdaContrast, achieves state-of-the-art performance on major benchmarks.
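The pseudo-label refinement step summarized above (soft voting among nearest neighbors in the target feature space) can be illustrated with a minimal sketch. The distance metric, voting scheme, and all names below are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch of pseudo-label refinement by soft voting among
# nearest neighbors in feature space.

def refine_pseudo_labels(features, probs, k=3):
    """Refine noisy pseudo labels by neighborhood soft voting.

    features: list of feature vectors (lists of floats).
    probs: list of per-class probability vectors from the classifier.
    k: number of nearest neighbors to vote with.
    For each sample, average the probability vectors of its k nearest
    neighbors (squared Euclidean distance, excluding the sample itself)
    and take the argmax as the refined hard pseudo label.
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    n = len(features)
    num_classes = len(probs[0])
    refined = []
    for i in range(n):
        # Indices of the k nearest neighbors of sample i.
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j: dist2(features[i], features[j]))[:k]
        # Soft voting: average the neighbors' probability vectors.
        avg = [sum(probs[j][c] for j in order) / len(order)
               for c in range(num_classes)]
        refined.append(max(range(num_classes), key=lambda c: avg[c]))
    return refined
```

With two well-separated groups of features, a sample whose classifier output disagrees with its neighbors gets corrected toward the neighborhood consensus, which is the intuition behind refining pseudo labels this way before using them for adaptation.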
arXiv Detail & Related papers (2022-04-21T19:17:22Z)
- Neighborhood Contrastive Learning for Novel Class Discovery [79.14767688903028]
We build a new framework, named Neighborhood Contrastive Learning, to learn discriminative representations that are important to clustering performance.
We experimentally demonstrate that these two ingredients significantly contribute to clustering performance and lead our model to outperform state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2021-06-20T17:34:55Z)
- Adaptive Active Learning for Coreference Resolution [37.261220564076964]
Recent developments in incremental coreference resolution allow for a novel approach to active learning in this setting.
By lowering the data barrier for coreference, our approach lets coreference resolvers rapidly adapt to a series of previously unconsidered domains.
arXiv Detail & Related papers (2021-04-15T17:21:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.