A Read-and-Select Framework for Zero-shot Entity Linking
- URL: http://arxiv.org/abs/2310.12450v2
- Date: Sun, 29 Oct 2023 14:58:54 GMT
- Title: A Read-and-Select Framework for Zero-shot Entity Linking
- Authors: Zhenran Xu, Yulin Chen, Baotian Hu, Min Zhang
- Abstract summary: We propose a read-and-select (ReS) framework by modeling the main components of entity disambiguation.
Our method achieves state-of-the-art performance on the established zero-shot entity linking dataset ZESHEL, with a 2.55% micro-average accuracy gain.
- Score: 33.15662306409253
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Zero-shot entity linking (EL) aims at aligning entity mentions to unseen
entities, thereby testing a model's generalization ability. Previous methods largely
focus on the candidate retrieval stage and ignore the essential candidate
ranking stage, which disambiguates among entities and makes the final linking
prediction. In this paper, we propose a read-and-select (ReS) framework by
modeling the main components of entity disambiguation, i.e., mention-entity
matching and cross-entity comparison. First, for each candidate, the reading
module leverages mention context to output mention-aware entity
representations, enabling mention-entity matching. Then, in the selecting
module, we frame the choice of candidates as a sequence labeling problem, and
all candidate representations are fused together to enable cross-entity
comparison. Our method achieves state-of-the-art performance on the
established zero-shot EL dataset ZESHEL with a 2.55% micro-average accuracy
gain, without the laborious multi-phase pre-training used in most previous
work, showing the effectiveness of both mention-entity and cross-entity
interaction.
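To make the two-module design concrete, below is a minimal sketch of the read-and-select pattern described in the abstract, assuming a Hugging Face-style BERT encoder; the class name, layer sizes, and input format are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ReadAndSelect(nn.Module):
    """Reading: jointly encode each (mention context, candidate) pair.
    Selecting: fuse all candidate vectors and label one as the linked entity."""

    def __init__(self, encoder, hidden=768, num_fuse_layers=2):
        super().__init__()
        self.encoder = encoder  # BERT-style model exposing .last_hidden_state
        fuse = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.fuser = nn.TransformerEncoder(fuse, num_layers=num_fuse_layers)
        self.classifier = nn.Linear(hidden, 1)  # one logit per candidate

    def forward(self, input_ids, attention_mask):
        # input_ids: [num_candidates, T], each row "[CLS] mention context [SEP] candidate"
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cand_vecs = out.last_hidden_state[:, 0]     # mention-aware entity representations
        fused = self.fuser(cand_vecs.unsqueeze(0))  # cross-entity comparison
        return self.classifier(fused).squeeze(-1)   # [1, num_candidates] logits
```

In this sketch, framing selection as sequence labeling means each candidate's logit is computed only after attending to every other candidate, which is exactly the cross-entity interaction the abstract emphasizes; training would reduce to cross-entropy over the candidate logits.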
Related papers
- OneNet: A Fine-Tuning Free Framework for Few-Shot Entity Linking via Large Language Model Prompting [49.655711022673046]
OneNet is an innovative framework that utilizes the few-shot learning capabilities of Large Language Models (LLMs) without the need for fine-tuning.
OneNet is structured around three key components prompted by LLMs:
(1) an entity reduction processor that simplifies inputs by summarizing and filtering out irrelevant entities;
(2) a dual-perspective entity linker that combines contextual cues and prior knowledge for precise entity linking;
(3) an entity consensus judger that employs a unique consistency algorithm to alleviate hallucination in entity linking reasoning.
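As a rough illustration of how three such prompted stages could be wired together, the sketch below assumes a hypothetical `llm(prompt) -> str` callable; all prompt wording and helper names are assumptions, not OneNet's actual prompts.

```python
def reduce_entities(llm, mention, context, candidates):
    """Stage 1: summarize the context and drop clearly irrelevant candidates."""
    summary = llm(f"Summarize the context around '{mention}':\n{context}")
    keep = []
    for c in candidates:
        ans = llm(f"Given '{summary}', could '{mention}' refer to "
                  f"{c['name']} ({c['description']})? Answer yes or no.")
        if ans.strip().lower().startswith("yes"):
            keep.append(c)
    return keep or candidates  # never return an empty candidate set

def link_dual_perspective(llm, mention, context, candidates):
    """Stage 2: one answer from context, one from prior knowledge."""
    names = ", ".join(c["name"] for c in candidates)
    by_context = llm(f"Context:\n{context}\nWhich of [{names}] does "
                     f"'{mention}' refer to? Reply with one name.").strip()
    by_prior = llm(f"Ignoring context, which of [{names}] is the most "
                   f"common referent of '{mention}'? Reply with one name.").strip()
    return by_context, by_prior

def judge_consensus(llm, mention, context, by_context, by_prior):
    """Stage 3: reconcile the two answers to curb hallucination."""
    if by_context == by_prior:
        return by_context
    return llm(f"For '{mention}' in:\n{context}\nWhich answer is better, "
               f"'{by_context}' or '{by_prior}'? Reply with that name only.").strip()
```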
arXiv Detail & Related papers (2024-10-10T02:45:23Z)
- Entity Disambiguation via Fusion Entity Decoding [68.77265315142296]
We propose an encoder-decoder model to disambiguate entities with more detailed entity descriptions.
We observe +1.5% improvements in end-to-end entity linking in the GERBIL benchmark compared with EntQA.
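The summary gives no detail beyond "encoder-decoder with entity descriptions", so the following is only a generic sketch of generative entity disambiguation with an off-the-shelf seq2seq model; the input format and the untuned `t5-small` checkpoint are stand-ins, and a real system would be fine-tuned on this format before the decoded name is reliable.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

context = "He played for [START] Ajax [END] before moving to Barcelona."
candidates = {
    "AFC Ajax": "Dutch football club based in Amsterdam.",
    "Ajax (mythology)": "Greek hero of the Trojan War.",
}
# Encoder input interleaves the marked mention with candidate descriptions;
# the decoder then generates the chosen entity's name as plain text.
source = context + " candidates: " + " ".join(
    f"{name}: {desc}" for name, desc in candidates.items()
)
ids = tok(source, return_tensors="pt", truncation=True).input_ids
out = model.generate(ids, max_new_tokens=16)
print(tok.decode(out[0], skip_special_tokens=True))  # ideally "AFC Ajax" after fine-tuning
```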
arXiv Detail & Related papers (2024-04-02T04:27:54Z)
- Entity Alignment with Unlabeled Dangling Cases [49.86384156476041]
We propose a novel GNN-based dangling detection and entity alignment framework.
The two tasks share the same GNN, and entities detected as dangling are removed before alignment.
The framework features a dedicated entity- and relation-level attention mechanism for selective neighborhood aggregation in representation learning.
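A minimal sketch of the shared-encoder pattern this describes: one GNN embeds both knowledge graphs, a binary head flags dangling entities, and predicted danglings are excluded before nearest-neighbor alignment. The single mean-aggregation layer is an illustrative stand-in for the paper's entity and relation attention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedGNN(nn.Module):
    def __init__(self, num_entities, dim=64):
        super().__init__()
        self.emb = nn.Embedding(num_entities, dim)
        self.proj = nn.Linear(dim, dim)
        self.dangling_head = nn.Linear(dim, 1)  # binary dangling detector

    def forward(self, adj):
        # adj: [N, N] row-normalized adjacency; one mean-aggregation hop
        h = torch.relu(self.proj(adj @ self.emb.weight))
        return h, self.dangling_head(h).squeeze(-1)  # embeddings, dangling logits

def align(h_src, h_tgt, dangling_logits, threshold=0.0):
    keep = dangling_logits < threshold  # drop predicted dangling entities
    sims = F.normalize(h_src[keep], dim=-1) @ F.normalize(h_tgt, dim=-1).T
    return keep.nonzero().squeeze(-1), sims.argmax(dim=-1)  # kept sources, matches
```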
arXiv Detail & Related papers (2024-03-16T17:21:58Z)
- Coherent Entity Disambiguation via Modeling Topic and Categorical Dependency [87.16283281290053]
Previous entity disambiguation (ED) methods adopt a discriminative paradigm, where prediction is made based on matching scores between mention context and candidate entities.
We propose CoherentED, an ED system equipped with novel designs aimed at enhancing the coherence of entity predictions.
We achieve new state-of-the-art results on popular ED benchmarks, with an average improvement of 1.3 F1 points.
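For reference, the discriminative paradigm the paper builds on reduces to scoring mention-candidate matches; the small category-coherence bonus below is only a hypothetical illustration of preferring topically consistent predictions within a document, not CoherentED's actual mechanism.

```python
import torch

def rank_candidates(mention_vec, cand_vecs, cand_categories, doc_categories, alpha=0.1):
    # mention_vec: [H]; cand_vecs: [K, H]; cand_categories: list of K label sets
    match = cand_vecs @ mention_vec  # the discriminative matching scores
    coherence = torch.tensor([len(c & doc_categories) for c in cand_categories],
                             dtype=match.dtype)  # hypothetical coherence bonus
    return int((match + alpha * coherence).argmax())
```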
arXiv Detail & Related papers (2023-11-06T16:40:13Z)
- IDEAL: Influence-Driven Selective Annotations Empower In-Context Learners in Large Language Models [66.32043210237768]
This paper introduces an influence-driven selective annotation method.
It aims to minimize annotation costs while improving the quality of in-context examples.
Experiments confirm the superiority of the proposed method on various benchmarks.
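The abstract does not spell out how influence is computed, so this sketch substitutes a greedy max-coverage objective over embedding similarity as a simplified stand-in: annotate the examples that best "cover" the rest of the unlabeled pool.

```python
import numpy as np

def select_to_annotate(emb, budget):
    # emb: [N, H] L2-normalized example embeddings; returns indices to annotate
    sim = emb @ emb.T             # pairwise cosine similarity
    covered = np.zeros(len(emb))  # best similarity to any selected example so far
    chosen = []
    for _ in range(budget):
        # total coverage if each candidate j were added next
        gains = np.maximum(sim, covered).sum(axis=1)
        gains[chosen] = -np.inf   # do not pick the same example twice
        j = int(gains.argmax())
        chosen.append(j)
        covered = np.maximum(covered, sim[j])
    return chosen
```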
arXiv Detail & Related papers (2023-10-16T22:53:54Z)
- Proxy-based Zero-Shot Entity Linking by Effective Candidate Retrieval [3.1498833540989413]
We show that pairing a proxy-based metric learning loss with an adversarial regularizer provides an efficient alternative to hard negative sampling in the candidate retrieval stage.
In particular, we show competitive performance on the recall@1 metric, thereby providing the option to leave out the expensive candidate ranking step.
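A minimal sketch of what a proxy-based metric learning loss looks like (Proxy-NCA style): each entity owns a learnable proxy vector, and a single softmax over all proxies pulls each mention toward its gold entity and away from everything else, so no hard-negative mining is needed. The paper's adversarial regularizer is omitted here.

```python
import torch
import torch.nn.functional as F

class ProxyLoss(torch.nn.Module):
    def __init__(self, num_entities, dim, temperature=0.05):
        super().__init__()
        self.proxies = torch.nn.Parameter(torch.randn(num_entities, dim))
        self.t = temperature

    def forward(self, mention_embs, entity_ids):
        # mention_embs: [B, H]; entity_ids: [B] gold entity index per mention
        sims = F.normalize(mention_embs, dim=-1) @ F.normalize(self.proxies, dim=-1).T
        return F.cross_entropy(sims / self.t, entity_ids)  # softmax over all proxies
```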
arXiv Detail & Related papers (2023-01-30T22:43:21Z)
- Focus on what matters: Applying Discourse Coherence Theory to Cross Document Coreference [22.497877069528087]
Event and entity coreference resolution across documents vastly increases the number of candidate mentions, making it intractable to do the full $n^2$ pairwise comparisons.
Existing approaches simplify by considering coreference only within document clusters, but this fails to handle inter-cluster coreference.
We draw on an insight from discourse coherence theory: potential coreferences are constrained by the reader's discourse focus.
Our approach achieves state-of-the-art results for both events and entities on the ECB+, Gun Violence, Football Coreference, and Cross-Domain Cross-Document Coreference corpora.
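To see why constraining candidates matters, compare the full $n^2$ pairing with a focus-style restriction; the top-k nearest-neighbor filter below is a simplified stand-in for the paper's discourse-focus model, and already cuts the pair count to $O(nk)$.

```python
import numpy as np

def candidate_pairs(emb, k=10):
    # emb: [n, H] L2-normalized mention embeddings; returns O(n*k) pairs, not O(n^2)
    sims = emb @ emb.T
    np.fill_diagonal(sims, -np.inf)          # never pair a mention with itself
    nbrs = np.argsort(-sims, axis=1)[:, :k]  # k most focus-compatible mentions each
    return {(min(i, int(j)), max(i, int(j)))
            for i in range(len(emb)) for j in nbrs[i]}
```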
arXiv Detail & Related papers (2021-10-11T15:41:47Z)
- Towards Consistent Document-level Entity Linking: Joint Models for Entity Linking and Coreference Resolution [15.265013409559227]
We consider the task of document-level entity linking (EL) and propose to join it with coreference resolution (coref).
arXiv Detail & Related papers (2021-08-30T21:46:12Z)
- Entity Linking via Dual and Cross-Attention Encoders [16.23946458604865]
We propose a dual-encoder entity retrieval system that learns mention and entity representations in the same space.
We then rerank the entities by using a cross-attention encoder over the target mention and each of the candidate entities.
We achieve state-of-the-art results on the TACKBP-2010 dataset, with 92.05% accuracy.
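The retrieve-then-rerank pattern in this summary is easy to sketch: a dual encoder embeds mention and entities independently for fast dot-product retrieval, then a cross-attention pass jointly encodes each (mention, candidate) pair. The model choice, pooling, and placeholder scoring head below are assumptions, not the paper's setup.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    # [CLS] pooling for the dual-encoder (retrieval) stage
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return enc(**batch).last_hidden_state[:, 0]

mention = "He joined [START] Ajax [END] in 1995."
entities = ["AFC Ajax, a Dutch football club.", "Ajax, a hero of Greek myth."]

# Stage 1: dual encoder -- independent embeddings, dot-product retrieval
scores = embed([mention]) @ embed(entities).T
topk = scores.squeeze(0).topk(k=2).indices.tolist()

# Stage 2: cross-attention -- encode mention and candidate together; a trained
# scoring head would replace the [CLS]-norm placeholder used here
def cross_score(m, e):
    batch = tok(m, e, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return enc(**batch).last_hidden_state[:, 0].norm().item()

best = max(topk, key=lambda i: cross_score(mention, entities[i]))
```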
arXiv Detail & Related papers (2020-04-07T17:28:28Z)