PCRED: Zero-shot Relation Triplet Extraction with Potential Candidate
Relation Selection and Entity Boundary Detection
- URL: http://arxiv.org/abs/2211.14477v1
- Date: Sat, 26 Nov 2022 04:27:31 GMT
- Title: PCRED: Zero-shot Relation Triplet Extraction with Potential Candidate
Relation Selection and Entity Boundary Detection
- Authors: Yuquan Lan, Dongxu Li, Hui Zhao, Gang Zhao
- Abstract summary: Zero-shot relation triplet extraction (ZeroRTE) aims to extract relation triplets from unstructured texts.
The previous state-of-the-art method handles this challenging task by leveraging pretrained language models to generate additional training samples.
We tackle this task from a new perspective and propose a novel method named PCRED for ZeroRTE with Potential Candidate Relation selection and Entity boundary Detection.
- Score: 11.274924966891842
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Zero-shot relation triplet extraction (ZeroRTE) aims to extract relation
triplets from unstructured texts, while the relation sets at the training and
testing stages are disjoint. The previous state-of-the-art method handles this
challenging task by leveraging pretrained language models to generate data as
additional training samples, which increases the training cost and severely
constrains the model performance. We tackle this task from a new perspective
and propose a novel method named PCRED for ZeroRTE with Potential Candidate
Relation selection and Entity boundary Detection. The model adopts a
relation-first paradigm, which firstly recognizes unseen relations through
candidate relation selection. By this approach, the semantics of relations are
naturally infused in the context. Entities are extracted based on the context
and the semantics of relations subsequently. We evaluate our model on two
ZeroRTE datasets. The experimental results show that our method consistently
outperforms previous works. Moreover, our model does not rely on any additional
data, which makes it both simple and effective. Our code is
available at https://anonymous.4open.science/r/PCRED.
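As a rough illustration of the relation-first paradigm described above, the sketch below shows one way such a two-stage pipeline could be wired together from off-the-shelf components. It is not the authors' implementation: it swaps in an NLI-based zero-shot classifier as a stand-in for candidate relation selection and an extractive QA model as a stand-in for entity boundary detection, and the model names, question templates, and 0.5 threshold are assumptions made for the example.

from transformers import pipeline

# Stand-in for candidate relation selection: an NLI-based zero-shot classifier
# scores each unseen relation label against the input sentence.
relation_selector = pipeline("zero-shot-classification",
                             model="facebook/bart-large-mnli")

# Stand-in for entity boundary detection: an extractive QA model predicts
# entity spans, with the selected relation's semantics injected into the question.
span_extractor = pipeline("question-answering",
                          model="deepset/roberta-base-squad2")

def extract_triplets(sentence, candidate_relations, threshold=0.5):
    """Relation-first extraction: select plausible relations, then locate entity spans."""
    selection = relation_selector(sentence,
                                  candidate_labels=candidate_relations,
                                  multi_label=True)
    triplets = []
    for relation, score in zip(selection["labels"], selection["scores"]):
        if score < threshold:  # keep only confident candidate relations
            continue
        head = span_extractor(question=f"What is the subject of '{relation}'?",
                              context=sentence)["answer"]
        tail = span_extractor(question=f"What is the object of '{relation}'?",
                              context=sentence)["answer"]
        triplets.append((head, relation, tail))
    return triplets

# Example: the candidate relation labels are unseen by any task-specific training.
print(extract_triplets("Albert Einstein was born in Ulm, Germany.",
                       ["place of birth", "employer", "spouse"]))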
Related papers
- Entity or Relation Embeddings? An Analysis of Encoding Strategies for Relation Extraction [19.019881161010474]
Relation extraction is essentially a text classification problem, which can be tackled by fine-tuning a pre-trained language model (LM).
Existing approaches therefore solve the problem in an indirect way: they fine-tune an LM to learn embeddings of the head and tail entities, and then predict the relationship from these entity embeddings.
Our hypothesis in this paper is that relation extraction models can be improved by capturing relationships in a more direct way.
arXiv Detail & Related papers (2023-12-18T09:58:19Z)
- Chain of Thought with Explicit Evidence Reasoning for Few-shot Relation Extraction [15.553367375330843]
We propose a novel approach for few-shot relation extraction using large language models.
CoT-ER first induces large language models to generate evidence using task-specific and concept-level knowledge.
arXiv Detail & Related papers (2023-11-10T08:12:00Z)
- Zero-shot Triplet Extraction by Template Infilling [13.295751492744081]
Triplet extraction aims to extract pairs of entities and their corresponding relations from unstructured text.
We show that by reducing triplet extraction to a template infilling task over a pre-trained language model (LM), we can equip the extraction model with zero-shot learning capabilities.
We propose a novel framework, ZETT, that aligns the task objective to the pre-training objective of generative transformers to generalize to unseen relations.
arXiv Detail & Related papers (2022-12-21T00:57:24Z)
- RelationPrompt: Leveraging Prompts to Generate Synthetic Data for Zero-Shot Relation Triplet Extraction [65.4337085607711]
We introduce the task setting of Zero-Shot Relation Triplet Extraction (ZeroRTE).
Given an input sentence, each extracted triplet consists of the head entity, relation label, and tail entity where the relation label is not seen at the training stage.
We propose to synthesize relation examples by prompting language models to generate structured texts.
arXiv Detail & Related papers (2022-03-17T05:55:14Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- D-REX: Dialogue Relation Extraction with Explanations [65.3862263565638]
This work focuses on extracting explanations that indicate that a relation exists while using only partially labeled data.
We propose our model-agnostic framework, D-REX, a policy-guided semi-supervised algorithm that explains and ranks relations.
We find that about 90% of the time, human annotators prefer D-REX's explanations over a strong BERT-based joint relation extraction and explanation model.
arXiv Detail & Related papers (2021-09-10T22:30:48Z)
- Learning Relation Prototype from Unlabeled Texts for Long-tail Relation Extraction [84.64435075778988]
We propose a general approach to learn relation prototypes from unlabeled texts.
We learn relation prototypes as an implicit factor between entities.
We conduct experiments on two publicly available datasets: New York Times and Google Distant Supervision.
arXiv Detail & Related papers (2020-11-27T06:21:12Z)
- TDRE: A Tensor Decomposition Based Approach for Relation Extraction [6.726803950083593]
Extracting entity pairs along with relation types from unstructured texts is a fundamental subtask of information extraction.
In this paper, we first model the final triplet extraction result as a three-order tensor of word-to-word pairs enriched with each relation type.
The proposed method outperforms existing strong baselines.
arXiv Detail & Related papers (2020-10-15T05:29:34Z)
- Cross-Supervised Joint-Event-Extraction with Heterogeneous Information Networks [61.950353376870154]
Joint-event-extraction is a sequence-to-sequence labeling task with a tag set composed of tags of triggers and entities.
We propose a Cross-Supervised Mechanism (CSM) to alternately supervise the extraction of triggers or entities.
Our approach outperforms the state-of-the-art methods in both entity and trigger extraction.
arXiv Detail & Related papers (2020-10-13T11:51:17Z)
- Relation of the Relations: A New Paradigm of the Relation Extraction Problem [52.21210549224131]
We propose a new paradigm of Relation Extraction (RE) that considers as a whole the predictions of all relations in the same context.
We develop a data-driven approach that does not require hand-crafted rules but learns by itself the relation of relations (RoR) using Graph Neural Networks and a relation matrix transformer.
Experiments show that our model outperforms the state-of-the-art approaches by +1.12% on the ACE05 dataset and +2.55% on SemEval 2018 Task 7.2.
arXiv Detail & Related papers (2020-06-05T22:25:27Z)