Mention Annotations Alone Enable Efficient Domain Adaptation for
Coreference Resolution
- URL: http://arxiv.org/abs/2210.07602v2
- Date: Tue, 30 May 2023 22:07:45 GMT
- Title: Mention Annotations Alone Enable Efficient Domain Adaptation for
Coreference Resolution
- Authors: Nupoor Gandhi, Anjalie Field, Emma Strubell
- Abstract summary: We show that annotating mentions alone is nearly twice as fast as annotating full coreference chains.
Our approach facilitates annotation-efficient transfer and results in a 7-14% improvement in average F1 without increasing annotator time.
- Score: 8.08448832546021
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although recent neural models for coreference resolution have led to
substantial improvements on benchmark datasets, transferring these models to
new target domains containing out-of-vocabulary spans and requiring differing
annotation schemes remains challenging. Typical approaches involve continued
training on annotated target-domain data, but obtaining annotations is costly
and time-consuming. We show that annotating mentions alone is nearly twice as
fast as annotating full coreference chains. Accordingly, we propose a method
for efficiently adapting coreference models, which includes a high-precision
mention detection objective and requires annotating only mentions in the target
domain. Extensive evaluation across three English coreference datasets, CoNLL-2012
(news/conversation), i2b2/VA (medical notes), and previously unstudied child
welfare notes, reveals that our approach facilitates annotation-efficient
transfer and results in a 7-14% improvement in average F1 without increasing
annotator time.
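As a concrete illustration of the adaptation recipe the abstract describes, below is a minimal sketch of a mention-detection objective that can be trained from mention annotations alone. This is a hedged sketch, not the authors' implementation: the PyTorch module, the false-positive up-weighting used to encourage high precision, and all names (`MentionScorer`, `mention_adaptation_loss`, `fp_weight`) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MentionScorer(nn.Module):
    """Feed-forward head that scores candidate spans as mentions
    (a stand-in for the mention scorer inside an end-to-end
    coreference model; the architecture is assumed, not the paper's)."""
    def __init__(self, span_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.ffnn = nn.Sequential(
            nn.Linear(span_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, span_reprs: torch.Tensor) -> torch.Tensor:
        # (num_spans, span_dim) -> (num_spans,) mention logits
        return self.ffnn(span_reprs).squeeze(-1)

def mention_adaptation_loss(logits: torch.Tensor,
                            labels: torch.Tensor,
                            fp_weight: float = 2.0) -> torch.Tensor:
    """Binary mention-detection loss over target-domain mention labels.
    Up-weighting negative spans penalizes false positives, nudging the
    detector toward high precision; the exact weighting is an assumption."""
    weights = torch.where(labels == 0.0,
                          torch.full_like(labels, fp_weight),
                          torch.ones_like(labels))
    return F.binary_cross_entropy_with_logits(logits, labels, weight=weights)

# One adaptation step: only binary mention labels are needed in the
# target domain, not full coreference chains.
scorer = MentionScorer(span_dim=256)
optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-4)
span_reprs = torch.randn(500, 256)        # candidate-span embeddings
labels = (torch.rand(500) < 0.1).float()  # 1 = span annotated as a mention
loss = mention_adaptation_loss(scorer(span_reprs), labels)
loss.backward()
optimizer.step()
```

In a full system this objective would continue training the mention scorer of a pretrained coreference model on target-domain mention annotations, while the clustering components carry over from the source domain.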
Related papers
- Progressive Conservative Adaptation for Evolving Target Domains [76.9274842289221]
Conventional domain adaptation typically transfers knowledge from a source domain to a stationary target domain.
Restoring and adapting to such target data results in escalating computational and resource consumption over time.
We propose a simple yet effective approach, termed progressive conservative adaptation (PCAda).
arXiv Detail & Related papers (2024-02-07T04:11:25Z)
- Holistic Transfer: Towards Non-Disruptive Fine-Tuning with Partial Target Data [32.91362206231936]
We propose a learning problem involving adapting a pre-trained source model to the target domain for classifying all classes that appeared in the source data.
This problem is practical, as it is unrealistic for the target end-users to collect data for all classes prior to adaptation.
We present several effective solutions that maintain the accuracy of the missing classes and enhance the overall performance.
arXiv Detail & Related papers (2023-11-02T17:35:16Z)
- Learning Classifiers of Prototypes and Reciprocal Points for Universal Domain Adaptation [79.62038105814658]
Universal Domain Adaptation aims to transfer knowledge between datasets by handling two shifts: domain shift and category shift.
The main challenge is correctly distinguishing unknown target samples while adapting the distribution of known-class knowledge from source to target.
Most existing methods approach this problem by first training a target-adapted classifier for the known classes and then relying on a single threshold to distinguish unknown target samples.
arXiv Detail & Related papers (2022-12-16T09:01:57Z)
- AcroFOD: An Adaptive Method for Cross-domain Few-shot Object Detection [59.10314662986463]
Cross-domain few-shot object detection aims to adapt object detectors to the target domain with only a few annotated target samples.
The proposed method achieves state-of-the-art performance on multiple benchmarks.
arXiv Detail & Related papers (2022-09-22T10:23:40Z)
- Improving Span Representation for Domain-adapted Coreference Resolution [19.826381727568222]
We propose the use of concept knowledge to more efficiently adapt coreference models to a new domain.
We show that incorporating this knowledge into end-to-end coreference models results in better performance on the most challenging, domain-specific spans.
arXiv Detail & Related papers (2021-09-20T19:41:31Z)
- Adaptive Active Learning for Coreference Resolution [37.261220564076964]
Recent developments in incremental coreference resolution allow for a novel approach to active learning in this setting.
By lowering the data barrier for coreference, this approach lets resolvers rapidly adapt to a series of previously unconsidered domains.
arXiv Detail & Related papers (2021-04-15T17:21:51Z)
- On Universal Black-Box Domain Adaptation [53.7611757926922]
We study what is arguably the least restrictive setting of domain adaptation from the standpoint of practical deployment.
Only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
arXiv Detail & Related papers (2021-04-10T02:21:09Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain than in the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework and thus can provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
- Active Learning for Coreference Resolution using Discrete Annotation [76.36423696634584]
We improve upon pairwise annotation for active learning in coreference resolution.
We ask annotators to identify a mention's antecedent when a presented mention pair is deemed not coreferent (sketched below, after the list).
In experiments with existing benchmark coreference datasets, we show that the signal from this additional question leads to significant performance gains per human-annotation hour.
arXiv Detail & Related papers (2020-04-28T17:17:11Z)
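To make the discrete-annotation scheme from the last entry concrete, here is a minimal sketch of one query round. The `Mention` dataclass and the two annotator callbacks (`is_coreferent`, `elicit_antecedent`) are hypothetical stand-ins for the human annotation interface, not an API from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass(frozen=True)
class Mention:
    start: int
    end: int
    text: str

def discrete_annotation_round(
    uncertain_pairs: list,
    is_coreferent: Callable[[Mention, Mention], bool],
    elicit_antecedent: Callable[[Mention], Optional[Mention]],
) -> list:
    """One round of discrete annotation: present model-uncertain mention
    pairs; when a pair is judged not coreferent, follow up by asking the
    annotator for the mention's true antecedent, so every query yields a
    usable link. Both callbacks stand in for the human interface."""
    links = []
    for mention, candidate in uncertain_pairs:
        if is_coreferent(mention, candidate):
            links.append((mention, candidate))
        else:
            # Follow-up question; None means the mention is discourse-new
            # (it starts a new coreference chain).
            links.append((mention, elicit_antecedent(mention)))
    return links

# Toy usage with scripted "annotator" callbacks.
m1, m2, m3 = Mention(0, 1, "Alice"), Mention(5, 6, "she"), Mention(9, 10, "Bob")
links = discrete_annotation_round(
    [(m2, m3)],
    is_coreferent=lambda a, b: False,   # annotator rejects the pair...
    elicit_antecedent=lambda m: m1,     # ...and supplies the true antecedent
)
print(links)  # [(Mention(5, 6, 'she'), Mention(0, 1, 'Alice'))]
```

The design point is that a rejected pair still produces a positive training link, which is why the extra question improves performance per human-annotation hour.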
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.