Bidirectional End-to-End Learning of Retriever-Reader Paradigm for Entity Linking
- URL: http://arxiv.org/abs/2306.12245v4
- Date: Wed, 20 Mar 2024 03:51:23 GMT
- Title: Bidirectional End-to-End Learning of Retriever-Reader Paradigm for Entity Linking
- Authors: Yinghui Li, Yong Jiang, Yangning Li, Xingyu Lu, Pengjun Xie, Ying Shen, Hai-Tao Zheng
- Abstract summary: We propose BEER$^2$, a Bidirectional End-to-End training framework for Retriever and Reader.
Through our designed bidirectional end-to-end training, BEER$^2$ guides the retriever and the reader to learn from each other, make progress together, and ultimately improve EL performance.
- Score: 57.44361768117688
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Entity Linking (EL) is a fundamental task for Information Extraction and Knowledge Graphs. The general form of EL (i.e., end-to-end EL) aims to first find mentions in the given input document and then link the mentions to corresponding entities in a specific knowledge base. Recently, the paradigm of retriever-reader promotes the progress of end-to-end EL, benefiting from the advantages of dense entity retrieval and machine reading comprehension. However, the existing study only trains the retriever and the reader separately in a pipeline manner, which ignores the benefit that the interaction between the retriever and the reader can bring to the task. To advance the retriever-reader paradigm to perform more perfectly on end-to-end EL, we propose BEER$^2$, a Bidirectional End-to-End training framework for Retriever and Reader. Through our designed bidirectional end-to-end training, BEER$^2$ guides the retriever and the reader to learn from each other, make progress together, and ultimately improve EL performance. Extensive experiments on benchmarks of multiple domains demonstrate the effectiveness of our proposed BEER$^2$.
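The retriever-reader paradigm described above can be sketched as a two-stage pipeline: a retriever ranks knowledge-base entities by similarity to the mention context, and a reader re-scores the retrieved candidates to pick the final link. The sketch below is a minimal illustration only, assuming a toy bag-of-words "embedding" in place of the dense encoders the paper trains, a two-entry hypothetical knowledge base, and no mention detection or bidirectional training; all names are invented for this example.

```python
# Minimal sketch of the retriever-reader pipeline for entity linking.
# Toy stand-ins: bag-of-words vectors instead of learned dense encoders,
# and a tiny hard-coded knowledge base. Illustrative only.
from collections import Counter
import math

KB = {
    "Apple Inc.": "Apple Inc. is an American technology company.",
    "Apple (fruit)": "The apple is an edible fruit produced by a tree.",
}

def embed(text):
    """Toy 'dense' embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(mention_context, k=2):
    """Retriever stage: rank all KB entities against the mention context."""
    q = embed(mention_context)
    ranked = sorted(KB, key=lambda e: cosine(q, embed(KB[e])), reverse=True)
    return ranked[:k]

def read(mention_context, candidates):
    """Reader stage: re-score only the retrieved candidates and link one."""
    q = embed(mention_context)
    return max(candidates, key=lambda e: cosine(q, embed(KB[e])))

context = "Apple released a new technology product this week"
candidates = retrieve(context)
linked = read(context, candidates)
```

In the paper's framework, the two stages are not trained separately as in prior pipelines: training signals flow in both directions so the retriever's candidate ranking and the reader's final linking decision improve each other, which this static sketch does not capture.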
Related papers
- ReLiK: Retrieve and LinK, Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget [43.35593460866504]
We propose a Retriever-Reader architecture for Entity Linking (EL) and Relation Extraction (RE).
We put forward an innovative input representation that incorporates the candidate entities or relations alongside the text.
Our formulation of EL and RE achieves state-of-the-art performance in both in-domain and out-of-domain benchmarks.
arXiv Detail & Related papers (2024-07-31T18:25:49Z) - Query Rewriting for Retrieval-Augmented Large Language Models [139.242907155883]
Large Language Models (LLMs) play powerful, black-box readers in the retrieve-then-read pipeline.
This work introduces a new framework, Rewrite-Retrieve-Read instead of the previous retrieve-then-read for the retrieval-augmented LLMs.
arXiv Detail & Related papers (2023-05-23T17:27:50Z) - Retrieval as Attention: End-to-end Learning of Retrieval and Reading within a Single Transformer [80.50327229467993]
We show that a single model trained end-to-end can achieve both competitive retrieval and QA performance.
We show that end-to-end adaptation significantly boosts its performance on out-of-domain datasets in both supervised and unsupervised settings.
arXiv Detail & Related papers (2022-12-05T04:51:21Z) - Generate rather than Retrieve: Large Language Models are Strong Context Generators [74.87021992611672]
We present a novel perspective for solving knowledge-intensive tasks by replacing document retrievers with large language model generators.
We call our method generate-then-read (GenRead), which first prompts a large language model to generate contextual documents based on a given question, and then reads the generated documents to produce the final answer.
arXiv Detail & Related papers (2022-09-21T01:30:59Z) - End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering [36.80395759543162]
We present an end-to-end differentiable training method for retrieval-augmented open-domain question answering systems.
We model retrieval decisions as latent variables over sets of relevant documents.
Our proposed method outperforms all existing approaches of comparable size by 2-3% exact match points.
arXiv Detail & Related papers (2021-06-09T19:25:37Z) - Is Retriever Merely an Approximator of Reader? [27.306407064073177]
We show that the reader and the retriever are complementary to each other even in terms of accuracy only.
We propose to distill the reader into the retriever so that the retriever absorbs the strength of the reader while keeping its own benefit.
arXiv Detail & Related papers (2020-10-21T13:40:15Z) - Open-Domain Question Answering with Pre-Constructed Question Spaces [70.13619499853756]
Open-domain question answering aims at solving the task of locating the answers to user-generated questions in massive collections of documents.
There are two families of solutions available: retriever-readers, and knowledge-graph-based approaches.
We propose a novel algorithm with a reader-retriever structure that differs from both families.
arXiv Detail & Related papers (2020-06-02T04:31:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.