End-to-End Trainable Retrieval-Augmented Generation for Relation Extraction
- URL: http://arxiv.org/abs/2406.03790v2
- Date: Thu, 10 Oct 2024 07:36:23 GMT
- Title: End-to-End Trainable Retrieval-Augmented Generation for Relation Extraction
- Authors: Kohei Makino, Makoto Miwa, Yutaka Sasaki
- Abstract summary: We propose a novel method, End-to-end Trainable Retrieval-Augmented Generation (ETRAG).
ETRAG allows end-to-end optimization of the entire model, including the retriever, for the relation extraction objective.
We evaluate the relation extraction performance of ETRAG on the TACRED dataset, which is a standard benchmark for relation extraction.
- Abstract: This paper addresses a crucial challenge in retrieval-augmented generation-based relation extractors: end-to-end training is not applicable to conventional retrieval-augmented generation because instance retrieval is non-differentiable. This prevents the instance retriever from being optimized for the relation extraction task; conventionally, it must be trained with an objective different from the relation extraction objective. To address this issue, we propose a novel End-to-end Trainable Retrieval-Augmented Generation (ETRAG), which allows end-to-end optimization of the entire model, including the retriever, for the relation extraction objective by utilizing a differentiable selection of the $k$ nearest instances. We evaluate the relation extraction performance of ETRAG on the TACRED dataset, a standard benchmark for relation extraction. ETRAG demonstrates consistent improvements over the baseline model as retrieved instances are added. Furthermore, analysis of the instances retrieved by the end-to-end trained retriever confirms that they share relation labels or entities with the query and are specialized for the target task. Our findings provide a promising foundation for future research on retrieval-augmented generation and the broader applications of text generation in Natural Language Processing.
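The key ingredient above is making the $k$-nearest-instance selection differentiable so that the relation extraction loss can also update the retriever. The paper's exact formulation is not reproduced in this summary; the following is a minimal, hypothetical PyTorch sketch of one common relaxation (successive temperature-scaled softmax in place of a hard top-$k$), where the function name, shapes, and the masking heuristic are our own assumptions rather than the authors' implementation.
```python
import torch
import torch.nn.functional as F

def soft_knn_select(query_emb, cand_embs, k, temperature=0.1):
    """Differentiable stand-in for hard k-nearest-instance selection.

    Instead of a non-differentiable top-k lookup, instances are picked one at
    a time with a temperature-scaled softmax over the candidates, so gradients
    flow back into both the query and candidate encoders.

    query_emb : (d,)   query representation from the retriever encoder
    cand_embs : (n, d) candidate instance representations
    returns   : (k, d) soft "retrieved" representations
    """
    scores = cand_embs @ query_emb                 # (n,) similarity scores
    mask = torch.zeros_like(scores)                # penalizes already-picked candidates
    picked = []
    for _ in range(k):
        probs = F.softmax((scores + mask) / temperature, dim=0)  # (n,)
        picked.append(probs @ cand_embs)           # convex combination, (d,)
        mask = mask - probs * 1e4                  # discourage re-picking the same region
    return torch.stack(picked)                     # (k, d)

# Toy usage: the soft-selected instances would be fed to the generator, and the
# relation-extraction loss would backpropagate through them into the retriever.
q = torch.randn(256, requires_grad=True)
c = torch.randn(1000, 256, requires_grad=True)
retrieved = soft_knn_select(q, c, k=4)
retrieved.sum().backward()                         # gradients reach q and c
```
Because every step is differentiable, backpropagating the downstream relation extraction loss updates the retriever encoders as well, which is what end-to-end training requires.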
Related papers
- Learning to Retrieve Iteratively for In-Context Learning [56.40100968649039]
Iterative retrieval is a novel framework that empowers retrievers to make iterative decisions through policy optimization.
We instantiate an iterative retriever for composing in-context learning exemplars and apply it to various semantic parsing tasks.
By adding only 4M additional parameters for state encoding, we convert an off-the-shelf dense retriever into a stateful iterative retriever.
arXiv Detail & Related papers (2024-06-20T21:07:55Z) - Dense X Retrieval: What Retrieval Granularity Should We Use? [56.90827473115201]
An often-overlooked design choice is the retrieval unit in which the corpus is indexed, e.g., document, passage, or sentence.
We introduce a novel retrieval unit, proposition, for dense retrieval.
Experiments reveal that indexing a corpus by fine-grained units such as propositions significantly outperforms passage-level units in retrieval tasks.
arXiv Detail & Related papers (2023-12-11T18:57:35Z) - Causal Feature Selection via Transfer Entropy [59.999594949050596]
Causal discovery aims to identify causal relationships between features with observational data.
We introduce a new causal feature selection approach that relies on the forward and backward feature selection procedures.
We provide theoretical guarantees on the regression and classification errors for both the exact and the finite-sample cases.
arXiv Detail & Related papers (2023-10-17T08:04:45Z) - PromptRE: Weakly-Supervised Document-Level Relation Extraction via Prompting-Based Data Programming [30.597623178206874]
We propose PromptRE, a novel weakly-supervised document-level relation extraction method.
PromptRE incorporates the label distribution and entity types as prior knowledge to improve the performance.
Experimental results on ReDocRED, a benchmark dataset for document-level relation extraction, demonstrate the superiority of PromptRE over baseline approaches.
arXiv Detail & Related papers (2023-10-13T17:23:17Z) - Recommender Systems with Generative Retrieval [58.454606442670034]
We propose a novel generative retrieval approach, where the retrieval model autoregressively decodes the identifiers of the target candidates.
To that end, we create semantically meaningful tuples of codewords to serve as a Semantic ID for each item.
We show that recommender systems trained with the proposed paradigm significantly outperform the current SOTA models on various datasets.
arXiv Detail & Related papers (2023-05-08T21:48:17Z) - EDeR: A Dataset for Exploring Dependency Relations Between Events [12.215649447070664]
We introduce the human-annotated Event Dependency Relation dataset (EDeR).
We show that recognizing this relation leads to more accurate event extraction.
We demonstrate that predicting the three-way classification into required argument, optional argument, or non-argument is a more challenging task.
arXiv Detail & Related papers (2023-04-04T08:07:07Z) - On-the-fly Text Retrieval for End-to-End ASR Adaptation [9.304386210911822]
We propose augmenting a transducer-based ASR model with a retrieval language model, which retrieves from an external text corpus plausible completions for a partial ASR hypothesis.
Our experiments show that the proposed model significantly improves the performance of a transducer baseline on a pair of question-answering datasets.
arXiv Detail & Related papers (2023-03-20T08:54:40Z) - AugTriever: Unsupervised Dense Retrieval and Domain Adaptation by Scalable Data Augmentation [44.93777271276723]
We propose two approaches that enable annotation-free and scalable training by creating pseudo query-document pairs.
The query extraction method involves selecting salient spans from the original document to generate pseudo queries (see the toy sketch after this list).
The transferred query generation method utilizes generation models trained for other NLP tasks, such as summarization, to produce pseudo queries.
arXiv Detail & Related papers (2022-12-17T10:43:25Z) - PCRED: Zero-shot Relation Triplet Extraction with Potential Candidate Relation Selection and Entity Boundary Detection [11.274924966891842]
Zero-shot relation triplet extraction (ZeroRTE) aims to extract relation triplets from unstructured texts.
The previous state-of-the-art method handles this challenging task by leveraging pretrained language models to generate additional training samples.
We tackle this task from a new perspective and propose a novel method named PCRED for ZeroRTE with Potential Candidate Relation selection and Entity boundary Detection.
arXiv Detail & Related papers (2022-11-26T04:27:31Z) - DORE: Document Ordered Relation Extraction based on Generative Framework [56.537386636819626]
This paper investigates the root cause of the underwhelming performance of the existing generative DocRE models.
We propose to generate a symbolic and ordered sequence from the relation matrix, which is deterministic and easier for the model to learn.
Experimental results on four datasets show that our proposed method can improve the performance of the generative DocRE models.
arXiv Detail & Related papers (2022-10-28T11:18:10Z) - Improving Multi-Turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting [84.9716460244444]
We consider utilizing the underlying correlation in the data resource itself to derive different kinds of supervision signals.
We conduct extensive experiments on two public datasets and obtain significant improvements on both.
arXiv Detail & Related papers (2020-02-18T06:29:01Z)
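For the AugTriever entry above, here is a toy illustration of the query-extraction idea: a salient span of a document is taken as a pseudo query, producing an annotation-free query-document pair. The IDF-based saliency heuristic, function name, and example corpus are our own assumptions and not the paper's actual procedure.
```python
import math
import re
from collections import Counter

def extract_pseudo_query(doc, corpus, span_len=8):
    """Toy query extraction: take the document span with the highest total IDF
    as a pseudo query, yielding an annotation-free (query, document) pair."""
    tokenize = lambda text: re.findall(r"\w+", text.lower())

    # Document frequencies over the (unlabeled) corpus.
    df = Counter()
    for d in corpus:
        df.update(set(tokenize(d)))
    idf = lambda w: math.log(len(corpus) / (1 + df[w]))

    tokens = tokenize(doc)
    best_span, best_score = tokens[:span_len], float("-inf")
    for i in range(max(1, len(tokens) - span_len + 1)):
        span = tokens[i:i + span_len]
        score = sum(idf(w) for w in span)
        if score > best_score:
            best_span, best_score = span, score
    return " ".join(best_span), doc

corpus = [
    "Relation extraction identifies semantic relations between entity pairs in text.",
    "Dense retrieval encodes queries and documents into a shared embedding space.",
]
pseudo_query, positive_doc = extract_pseudo_query(corpus[0], corpus)
```
Pairs produced this way could then train a dense retriever contrastively without human annotation, which is the annotation-free, scalable training the summary refers to.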