Free the Plural: Unrestricted Split-Antecedent Anaphora Resolution
- URL: http://arxiv.org/abs/2011.00245v1
- Date: Sat, 31 Oct 2020 11:21:39 GMT
- Title: Free the Plural: Unrestricted Split-Antecedent Anaphora Resolution
- Authors: Juntao Yu, Nafise Sadat Moosavi, Silviu Paun and Massimo Poesio
- Abstract summary: We introduce the first model for unrestricted resolution of split-antecedent anaphors.
We show that we can substantially improve its performance by addressing the sparsity issue.
Evaluation on the gold annotated ARRAU corpus shows that the out best model uses a combination of three auxiliary corpora.
- Score: 23.843305521306227
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Now that the performance of coreference resolvers on the simpler forms of
anaphoric reference has greatly improved, more attention is devoted to more
complex aspects of anaphora. One limitation of virtually all coreference
resolution models is the focus on single-antecedent anaphors. Plural anaphors
with multiple antecedents, so-called split-antecedent anaphors (as in "John met
Mary. They went to the movies"), have not been widely studied, because they are
not annotated in ONTONOTES and are relatively infrequent in other corpora. In
this paper, we introduce the first model for unrestricted resolution of
split-antecedent anaphors. We start with a strong baseline enhanced by BERT
embeddings, and show that we can substantially improve its performance by
addressing the sparsity issue. To do this, we experiment with auxiliary corpora
where split-antecedent anaphors were annotated by the crowd, and with transfer
learning models using element-of bridging references and single-antecedent
coreference as auxiliary tasks. Evaluation on the gold annotated ARRAU corpus
shows that our best model, which uses a combination of three auxiliary corpora,
achieved F1 scores of 70% and 43.6% when evaluated in a lenient and a strict
setting, respectively, i.e., gains of 11 and 21 percentage points over our
baseline.
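The reported gains imply the baseline's scores, which the abstract does not state directly. A minimal arithmetic check (the baseline figures below are derived, not quoted from the paper):

```python
# Best-model F1 scores and gains over the baseline, as reported in the abstract.
lenient_f1, strict_f1 = 70.0, 43.6
lenient_gain, strict_gain = 11.0, 21.0

# Implied baseline scores (derived by subtraction, not stated in the abstract).
baseline_lenient = round(lenient_f1 - lenient_gain, 1)  # 59.0
baseline_strict = round(strict_f1 - strict_gain, 1)     # 22.6

print(baseline_lenient, baseline_strict)
```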
Related papers
- ASTE Transformer Modelling Dependencies in Aspect-Sentiment Triplet Extraction [2.07180164747172]
Aspect-Sentiment Triplet Extraction (ASTE) is a recently proposed task that consists in extracting (aspect phrase, opinion phrase, sentiment polarity) triples from a given sentence.
Recent state-of-the-art methods approach this task by first extracting all possible spans from a given sentence.
arXiv Detail & Related papers (2024-09-23T16:49:47Z) - SPLICE: A Singleton-Enhanced PipeLIne for Coreference REsolution [11.062090350704617]
Singleton mentions, i.e., entities mentioned only once in a text, are important to how humans understand discourse from a theoretical perspective.
Previous attempts to incorporate their detection in end-to-end neural coreference resolution for English have been hampered by the lack of singleton mention spans in the OntoNotes benchmark.
This paper addresses this limitation by combining predicted mentions from existing nested NER systems and features derived from OntoNotes syntax trees.
arXiv Detail & Related papers (2024-03-25T22:46:16Z) - Sentiment Analysis through LLM Negotiations [58.67939611291001]
A standard paradigm for sentiment analysis is to rely on a single LLM and make the decision in a single round.
This paper introduces a multi-LLM negotiation framework for sentiment analysis.
arXiv Detail & Related papers (2023-11-03T12:35:29Z) - NapSS: Paragraph-level Medical Text Simplification via Narrative
Prompting and Sentence-matching Summarization [46.772517928718216]
We propose a summarize-then-simplify two-stage strategy, which we call NapSS.
NapSS identifies the relevant content to simplify while ensuring that the original narrative flow is preserved.
Our model performs significantly better than the seq2seq baseline on an English medical corpus.
arXiv Detail & Related papers (2023-02-11T02:20:25Z) - Enriching Relation Extraction with OpenIE [70.52564277675056]
Relation extraction (RE) is a sub-discipline of information extraction (IE).
In this work, we explore how recent approaches for open information extraction (OpenIE) may help to improve the task of RE.
Our experiments over two annotated corpora, KnowledgeNet and FewRel, demonstrate the improved accuracy of our enriched models.
arXiv Detail & Related papers (2022-12-19T11:26:23Z) - Entity Disambiguation with Entity Definitions [50.01142092276296]
Local models have recently attained astounding performance in Entity Disambiguation (ED).
Previous works limited their studies to using, as the textual representation of each candidate, only its Wikipedia title.
In this paper, we address this limitation and investigate to what extent more expressive textual representations can mitigate it.
We report a new state of the art on 2 out of 6 benchmarks we consider and strongly improve the generalization capability over unseen patterns.
arXiv Detail & Related papers (2022-10-11T17:46:28Z) - Scoring Coreference Chains with Split-Antecedent Anaphors [23.843305521306227]
We propose a solution to the technical problem of generalizing existing metrics for identity anaphora so that they can also be used to score cases of split-antecedents.
This is the first such proposal in the literature on anaphora or coreference, and has been successfully used to score both split-antecedent plural references and discourse deixis.
arXiv Detail & Related papers (2022-05-24T19:07:36Z) - Does BERT really agree? Fine-grained Analysis of Lexical Dependence on
a Syntactic Task [70.29624135819884]
We study the extent to which BERT is able to perform lexically-independent subject-verb number agreement (NA) on targeted syntactic templates.
Our results on nonce sentences suggest that the model generalizes well for simple templates, but fails to perform lexically-independent syntactic generalization when as little as one attractor is present.
arXiv Detail & Related papers (2022-04-14T11:33:15Z) - Stay Together: A System for Single and Split-antecedent Anaphora
Resolution [19.98823717287972]
Split-antecedent anaphora is rarer and more complex to resolve than single-antecedent anaphora.
We introduce a system that resolves both single and split-antecedent anaphors, and evaluate it in a more realistic setting.
arXiv Detail & Related papers (2021-04-12T10:01:08Z) - When Hearst Is not Enough: Improving Hypernymy Detection from Corpus
with Distributional Models [59.46552488974247]
This paper addresses whether an is-a relationship exists between words (x, y) with the help of large textual corpora.
Recent studies suggest that pattern-based approaches are superior when large-scale Hearst pairs are extracted and fed in, relieving the sparsity of unseen (x, y) pairs.
For the first time, this paper quantifies the non-negligible existence of those specific cases. We also demonstrate that distributional methods are ideal to make up for pattern-based ones in such cases.
arXiv Detail & Related papers (2020-10-10T08:34:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.