Unsupervised Relation Extraction from Language Models using Constrained
Cloze Completion
- URL: http://arxiv.org/abs/2010.06804v1
- Date: Wed, 14 Oct 2020 04:21:57 GMT
- Title: Unsupervised Relation Extraction from Language Models using Constrained
Cloze Completion
- Authors: Ankur Goswami, Akshata Bhat, Hadar Ohana, Theodoros Rekatsinas
- Abstract summary: We show that state-of-the-art self-supervised language models can be readily used to extract relations from a corpus without the need to train a fine-tuned extractive head.
We introduce RE-Flex, a simple framework that performs constrained cloze completion over pretrained language models for unsupervised relation extraction.
- Score: 7.9850810440877975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We show that state-of-the-art self-supervised language models can be readily
used to extract relations from a corpus without the need to train a fine-tuned
extractive head. We introduce RE-Flex, a simple framework that performs
constrained cloze completion over pretrained language models for unsupervised
relation extraction. RE-Flex uses contextual matching to ensure that language
model predictions match supporting evidence from the input corpus that is
relevant to a target relation. We perform an extensive
experimental study over multiple relation extraction benchmarks and demonstrate
that RE-Flex outperforms competing unsupervised relation extraction methods
based on pretrained language models, beating the next-best method by up to
27.8 $F_1$ points. Our results show that constrained inference queries against a
language model can enable accurate unsupervised relation extraction.
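The core mechanism is easy to illustrate: pose the target relation as a cloze query to a masked language model, then constrain the answer to tokens that actually appear in the supporting context, a crude stand-in for contextual matching. The snippet below is a minimal sketch under these assumptions; the model choice, the template, the helper name, and the single-token restriction are illustrative and are not details of the RE-Flex implementation.

```python
# A minimal sketch of constrained cloze completion for relation extraction,
# assuming a BERT-style masked language model. All names and the template
# are illustrative, not taken from the RE-Flex paper or its code.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def constrained_cloze(context: str, template: str) -> str:
    """Fill the [MASK] in `template`, allowing only tokens that occur in
    `context` (a crude stand-in for contextual matching)."""
    # Tokens appearing in the supporting context are the only candidates.
    context_ids = set(tokenizer(context, add_special_tokens=False)["input_ids"])

    # Condition the cloze query on the context by concatenating the two.
    inputs = tokenizer(f"{context} {template}", return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos[0]]

    # Mask out every vocabulary entry that does not appear in the context.
    allowed = torch.full_like(logits, float("-inf"))
    idx = torch.tensor(sorted(context_ids))
    allowed[idx] = logits[idx]
    return tokenizer.decode([int(allowed.argmax())]).strip()

context = "Marie Curie was born in Warsaw and later moved to Paris."
template = f"Marie Curie was born in {tokenizer.mask_token}."
print(constrained_cloze(context, template))  # expected: "warsaw"
```

Unconstrained decoding might answer with a plausible but unsupported city; restricting candidates to the context is what ties the prediction to evidence in the corpus.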
Related papers
- Improving Recall of Large Language Models: A Model Collaboration Approach for Relational Triple Extraction [44.716502690026026]
Relation triple extraction, which outputs a set of triples from long sentences, plays a vital role in knowledge acquisition.
Large language models can accurately extract triples from simple sentences through few-shot learning or fine-tuning when given appropriate instructions.
In this paper, we design an evaluation-filtering framework that integrates large language models with small models for relational triple extraction tasks.
arXiv Detail & Related papers (2024-04-15T09:03:05Z)
- Chain of Thought with Explicit Evidence Reasoning for Few-shot Relation Extraction [15.553367375330843]
We propose a novel approach for few-shot relation extraction using large language models.
CoT-ER first induces large language models to generate evidence using task-specific and concept-level knowledge; a prompt sketch in this spirit appears after this list.
arXiv Detail & Related papers (2023-11-10T08:12:00Z)
- RAVEN: In-Context Learning with Retrieval-Augmented Encoder-Decoder Language Models [57.12888828853409]
RAVEN is a model that combines retrieval-augmented masked language modeling and prefix language modeling.
Fusion-in-Context Learning enables the model to leverage more in-context examples without requiring additional training.
Our work underscores the potential of retrieval-augmented encoder-decoder language models for in-context learning.
arXiv Detail & Related papers (2023-08-15T17:59:18Z)
- How to Unleash the Power of Large Language Models for Few-shot Relation Extraction? [28.413620806193165]
In this paper, we investigate principal methodologies, in-context learning and data generation, for few-shot relation extraction via GPT-3.5.
We observe that in-context learning can achieve performance on par with previous prompt learning approaches, and data generation with the large language model can boost previous solutions to obtain new state-of-the-art few-shot results.
arXiv Detail & Related papers (2023-05-02T15:55:41Z)
- mFACE: Multilingual Summarization with Factual Consistency Evaluation [79.60172087719356]
Abstractive summarization has enjoyed renewed interest in recent years, thanks to pre-trained language models and the availability of large-scale datasets.
Despite promising results, current models still suffer from generating factually inconsistent summaries.
We leverage factual consistency evaluation models to improve multilingual summarization.
arXiv Detail & Related papers (2022-12-20T19:52:41Z)
- PCRED: Zero-shot Relation Triplet Extraction with Potential Candidate Relation Selection and Entity Boundary Detection [11.274924966891842]
Zero-shot relation triplet extraction (ZeroRTE) aims to extract relation triplets from unstructured texts.
The previous state-of-the-art method handles this challenging task by leveraging pretrained language models to generate data as additional training samples.
We tackle this task from a new perspective and propose a novel method named PCRED for ZeroRTE with Potential Candidate Relation selection and Entity boundary Detection.
arXiv Detail & Related papers (2022-11-26T04:27:31Z)
- HiURE: Hierarchical Exemplar Contrastive Learning for Unsupervised Relation Extraction [60.80849503639896]
Unsupervised relation extraction aims to extract the relationship between entities from natural language sentences without prior information on relational scope or distribution.
We propose a novel contrastive learning framework named HiURE, which has the capability to derive hierarchical signals from relational feature space using cross hierarchy attention.
Experimental results on two public datasets demonstrate the effectiveness and robustness of HiURE on unsupervised relation extraction compared with state-of-the-art models.
arXiv Detail & Related papers (2022-05-04T17:56:48Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- D-REX: Dialogue Relation Extraction with Explanations [65.3862263565638]
This work focuses on extracting explanations that indicate that a relation exists while using only partially labeled data.
We propose our model-agnostic framework, D-REX, a policy-guided semi-supervised algorithm that explains and ranks relations.
We find that about 90% of the time, human annotators prefer D-REX's explanations over a strong BERT-based joint relation extraction and explanation model.
arXiv Detail & Related papers (2021-09-10T22:30:48Z)
- SelfORE: Self-supervised Relational Feature Learning for Open Relation Extraction [60.08464995629325]
Open-domain relation extraction is the task of extracting open-domain relation facts from natural language sentences.
We propose a self-supervised framework named SelfORE, which exploits weak, self-supervised signals.
Experimental results on three datasets show the effectiveness and robustness of SelfORE.
arXiv Detail & Related papers (2020-04-06T07:23:17Z)
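As referenced in the CoT-ER entry above, the sketch below shows what a chain-of-thought prompt for few-shot relation extraction can look like. The label set, demonstration, and wording are illustrative assumptions, not details taken from the CoT-ER paper.

```python
# A minimal sketch of chain-of-thought prompting for few-shot relation
# extraction. The relations, the demonstration, and the prompt wording are
# illustrative; they are not taken from the CoT-ER paper.
RELATIONS = ["founded_by", "born_in", "employee_of"]

DEMONSTRATION = """Sentence: Steve Jobs started Apple in a garage in 1976.
Entities: (Apple, Steve Jobs)
Reasoning: Apple is a company and Steve Jobs is a person. "started" indicates
that Steve Jobs created the company, so the relation is founded_by.
Answer: founded_by"""

def build_cot_prompt(sentence: str, head: str, tail: str) -> str:
    """Assemble a few-shot prompt that asks the model to reason over
    explicit evidence before committing to a relation label."""
    return (
        f"Classify the relation between two entities. "
        f"Choose one of: {', '.join(RELATIONS)}.\n\n"
        f"{DEMONSTRATION}\n\n"
        f"Sentence: {sentence}\n"
        f"Entities: ({head}, {tail})\n"
        f"Reasoning:"
    )

print(build_cot_prompt("Marie Curie was born in Warsaw.", "Marie Curie", "Warsaw"))
```

Ending the prompt at "Reasoning:" invites the model to produce the evidence chain first; the final label is then read off the "Answer:" line it generates.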
This list is automatically generated from the titles and abstracts of the papers on this site.