Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt
Tuning
- URL: http://arxiv.org/abs/2205.02355v2
- Date: Tue, 19 Sep 2023 12:21:53 GMT
- Title: Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt
Tuning
- Authors: Xiang Chen, Lei Li, Ningyu Zhang, Chuanqi Tan, Fei Huang, Luo Si,
Huajun Chen
- Abstract summary: We propose a new semiparametric paradigm of retrieval-enhanced prompt tuning for relation extraction.
Our model not only infers relations through knowledge stored in its weights during training but also assists decision-making by querying examples in an open-book datastore.
Our method achieves state-of-the-art results in both standard supervised and few-shot settings.
- Score: 109.7767515627765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pre-trained language models have contributed significantly to relation
extraction by demonstrating remarkable few-shot learning abilities. However,
prompt tuning methods for relation extraction may still fail to generalize to
those rare or hard patterns. Note that the previous parametric learning
paradigm can be viewed as memorization, treating the training data as a book and
inference as a closed-book test. Those long-tailed or hard patterns can hardly
be memorized in parameters given few-shot instances. To this end, we regard RE
as an open-book examination and propose a new semiparametric paradigm of
retrieval-enhanced prompt tuning for relation extraction. We construct an
open-book datastore for retrieval regarding prompt-based instance
representations and corresponding relation labels as memorized key-value pairs.
During inference, the model can infer relations by linearly interpolating the
base output of PLM with the non-parametric nearest neighbor distribution over
the datastore. In this way, our model not only infers relations through knowledge
stored in its weights during training but also assists decision-making by
retrieving and querying examples in the open-book datastore.
Extensive experiments on benchmark datasets show that our method achieves
state-of-the-art results in both standard supervised and few-shot settings. Code is
available at https://github.com/zjunlp/PromptKG/tree/main/research/RetrievalRE.
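To make the datastore construction and the interpolation step concrete, the following is a minimal sketch under stated assumptions: the helper encode_with_prompt (returning a prompt-based instance representation together with the PLM's distribution over relation labels), the L2 distance, and the hyperparameters k, temperature, and lam are illustrative choices, not the released RetrievalRE implementation.

    import numpy as np

    def build_datastore(train_instances, encode_with_prompt):
        """Memorize (prompt-based representation, relation id) pairs as key-value entries."""
        keys, values = [], []
        for text, relation_id in train_instances:        # relation_id is an integer label
            rep, _ = encode_with_prompt(text)             # prompt-based instance representation
            keys.append(rep)
            values.append(relation_id)
        return np.stack(keys), np.array(values)

    def knn_distribution(query, keys, values, num_relations, k=16, temperature=1.0):
        """Non-parametric distribution over relations from the k nearest neighbors."""
        dists = np.linalg.norm(keys - query, axis=1)      # distance from the query to every key
        nearest = np.argsort(dists)[:k]                   # indices of the k closest keys
        weights = np.exp(-dists[nearest] / temperature)   # closer neighbors receive larger weight
        probs = np.zeros(num_relations)
        for idx, w in zip(nearest, weights):
            probs[values[idx]] += w                       # accumulate weight per relation label
        return probs / probs.sum()

    def predict(text, keys, values, encode_with_prompt, num_relations, lam=0.5):
        """Linearly interpolate the PLM output with the kNN distribution over the datastore."""
        query, p_plm = encode_with_prompt(text)           # p_plm: PLM's base relation distribution
        p_knn = knn_distribution(query, keys, values, num_relations)
        return lam * p_knn + (1.0 - lam) * p_plm

At test time one would build the datastore once from the training set and then call predict on each instance; lam controls how much the non-parametric neighbor distribution contributes relative to the parametric PLM output.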
Related papers
- PromptORE -- A Novel Approach Towards Fully Unsupervised Relation
Extraction [0.0]
Unsupervised Relation Extraction (RE) aims to identify relations between entities in text, without having access to labeled data during training.
We propose PromptORE, a "Prompt-based Open Relation Extraction" model.
We adapt the novel prompt-tuning paradigm to work in an unsupervised setting, and use it to embed sentences expressing a relation.
We show that PromptORE consistently outperforms state-of-the-art models with a relative gain of more than 40% in B³, V-measure and ARI.
arXiv Detail & Related papers (2023-03-24T12:55:35Z) - Decoupling Knowledge from Memorization: Retrieval-augmented Prompt
Learning [113.58691755215663]
We develop RetroPrompt to help a model strike a balance between generalization and memorization.
In contrast with vanilla prompt learning, RetroPrompt constructs an open-book knowledge-store from training instances.
Extensive experiments demonstrate that RetroPrompt can obtain better performance in both few-shot and zero-shot settings.
arXiv Detail & Related papers (2022-05-29T16:07:30Z) - Does Recommend-Revise Produce Reliable Annotations? An Analysis on
Missing Instances in DocRED [60.39125850987604]
We show that the recommend-revise scheme results in false negative samples and an obvious bias towards popular entities and relations.
The relabeled dataset is released to serve as a more reliable test set of document RE models.
arXiv Detail & Related papers (2022-04-17T11:29:01Z) - BatchFormer: Learning to Explore Sample Relationships for Robust
Representation Learning [93.38239238988719]
We propose to enable deep neural networks with the ability to learn the sample relationships from each mini-batch.
BatchFormer is applied to the batch dimension of each mini-batch to implicitly explore sample relationships during training.
We perform extensive experiments on over ten datasets and the proposed method achieves significant improvements on different data scarcity applications.
arXiv Detail & Related papers (2022-03-03T05:31:33Z) - An Empirical Study on Few-shot Knowledge Probing for Pretrained Language
Models [54.74525882974022]
We show that few-shot examples can strongly boost the probing performance for both 1-hop and 2-hop relations.
In particular, we find that a simple yet effective approach of finetuning the bias vectors in the model outperforms existing prompt-engineering methods (see the bias-only finetuning sketch after this list).
arXiv Detail & Related papers (2021-09-06T23:29:36Z) - Deep Indexed Active Learning for Matching Heterogeneous Entity
Representations [20.15233789156307]
We propose DIAL, a scalable active learning approach that jointly learns embeddings to maximize recall for blocking and accuracy for matching blocked pairs.
Experiments on five benchmark datasets and a multilingual record matching dataset show the effectiveness of our approach in terms of precision, recall and running time.
arXiv Detail & Related papers (2021-04-08T18:00:19Z) - Bootstrapping Relation Extractors using Syntactic Search by Examples [47.11932446745022]
We propose a process for bootstrapping training datasets which can be performed quickly by non-NLP-experts.
We take advantage of search engines over syntactic-graphs which expose a friendly by-example syntax.
We show that the resulting models are competitive with models trained on manually annotated data and on data obtained from distant supervision.
arXiv Detail & Related papers (2021-02-09T18:17:59Z) - Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
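As referenced in the few-shot knowledge probing entry above, the following is a minimal PyTorch sketch of bias-only finetuning (freezing all weights and updating only bias vectors); the backbone model and optimizer settings here are illustrative assumptions, not that paper's exact setup.

    import torch
    from transformers import AutoModelForMaskedLM

    # Illustrative backbone; the probing paper's exact model and data pipeline may differ.
    model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

    # Freeze every parameter except those whose name marks them as a bias vector.
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith(".bias")

    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.AdamW(trainable, lr=1e-4)
    # Training then proceeds as usual; only the bias vectors receive gradient updates,
    # which keeps the number of tuned parameters very small in the few-shot regime.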