PRGC: Potential Relation and Global Correspondence Based Joint
Relational Triple Extraction
- URL: http://arxiv.org/abs/2106.09895v1
- Date: Fri, 18 Jun 2021 03:38:07 GMT
- Title: PRGC: Potential Relation and Global Correspondence Based Joint
Relational Triple Extraction
- Authors: Hengyi Zheng, Rui Wen, Xi Chen, Yifan Yang, Yunyan Zhang, Ziheng
Zhang, Ningyu Zhang, Bin Qin, Ming Xu, Yefeng Zheng
- Abstract summary: We propose a joint relational triple extraction framework based on Potential Relation and Global Correspondence (PRGC).
PRGC achieves state-of-the-art performance on public benchmarks with higher efficiency and delivers consistent performance gain on complex scenarios of overlapping triples.
- Score: 23.998135821388203
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Joint extraction of entities and relations from unstructured texts is a
crucial task in information extraction. Recent methods achieve considerable
performance but still suffer from some inherent limitations, such as redundancy
of relation prediction, poor generalization of span-based extraction and
inefficiency. In this paper, we decompose this task into three subtasks,
Relation Judgement, Entity Extraction and Subject-object Alignment from a novel
perspective and then propose a joint relational triple extraction framework
based on Potential Relation and Global Correspondence (PRGC). Specifically, we
design a component to predict potential relations, which constrains the
following entity extraction to the predicted relation subset rather than all
relations; then a relation-specific sequence tagging component is applied to
handle the overlapping problem between subjects and objects; finally, a global
correspondence component is designed to align the subject and object into a
triple with low complexity. Extensive experiments show that PRGC achieves
state-of-the-art performance on public benchmarks with higher efficiency and
delivers consistent performance gain on complex scenarios of overlapping
triples.
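
The abstract maps the three subtasks onto three model components. Below is a minimal, hypothetical PyTorch sketch of how those components could fit together; the toy embedding encoder (in place of a pre-trained LM), layer sizes, mean pooling, and the 0.5 relation threshold are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class PRGCSketch(nn.Module):
    """Toy version of the three PRGC components: relation judgement,
    relation-specific sequence tagging, and global correspondence."""

    def __init__(self, vocab_size=1000, hidden=128, num_relations=24, num_tags=3):
        super().__init__()
        self.encoder = nn.Embedding(vocab_size, hidden)      # stand-in for a pre-trained encoder
        # (1) Relation Judgement: sentence-level multi-label classifier over all relations
        self.rel_judge = nn.Linear(hidden, num_relations)
        self.rel_emb = nn.Embedding(num_relations, hidden)   # relation-specific signal for tagging
        # (2) Entity Extraction: B/I/O taggers for subjects and objects under a given relation
        self.subj_tagger = nn.Linear(hidden * 2, num_tags)
        self.obj_tagger = nn.Linear(hidden * 2, num_tags)
        # (3) Subject-object Alignment: scores every (subject start, object start) token pair
        self.align = nn.Bilinear(hidden, hidden, 1)

    def forward(self, token_ids, rel_threshold=0.5):
        h = self.encoder(token_ids)                           # (B, L, H) token representations
        sent = h.mean(dim=1)                                  # naive sentence pooling
        rel_probs = torch.sigmoid(self.rel_judge(sent))       # (B, R) potential-relation scores
        potential = rel_probs > rel_threshold                 # tagging is restricted to this subset

        batch_size, seq_len, hidden = h.shape
        subj_logits, obj_logits = [], []
        for b in range(batch_size):
            for r in potential[b].nonzero(as_tuple=True)[0]:
                rel_vec = self.rel_emb(r).expand(seq_len, -1)      # (L, H)
                fused = torch.cat([h[b], rel_vec], dim=-1)         # (L, 2H)
                subj_logits.append(self.subj_tagger(fused))        # (L, num_tags)
                obj_logits.append(self.obj_tagger(fused))

        # Global correspondence matrix: one alignment score per token pair, shared by all relations
        h_i = h.unsqueeze(2).expand(-1, -1, seq_len, -1).reshape(-1, hidden)
        h_j = h.unsqueeze(1).expand(-1, seq_len, -1, -1).reshape(-1, hidden)
        corr = torch.sigmoid(self.align(h_i, h_j)).view(batch_size, seq_len, seq_len)
        return rel_probs, subj_logits, obj_logits, corr


if __name__ == "__main__":
    model = PRGCSketch()
    tokens = torch.randint(0, 1000, (2, 16))                  # toy batch: 2 sentences, 16 tokens
    rel_probs, subj_logits, obj_logits, corr = model(tokens)
    print(rel_probs.shape, corr.shape)                        # (2, 24) and (2, 16, 16)
```

At inference, triples would be assembled by pairing each tagged subject with each tagged object under a predicted potential relation and keeping the pairs whose correspondence score exceeds a threshold; restricting tagging to the predicted relation subset and reusing one token-pair matrix for all relations is what the abstract refers to as low-complexity alignment.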
Related papers
- Entity or Relation Embeddings? An Analysis of Encoding Strategies for Relation Extraction [19.019881161010474]
Relation extraction is essentially a text classification problem, which can be tackled by fine-tuning a pre-trained language model (LM).
Existing approaches therefore solve the problem in an indirect way: they fine-tune an LM to learn embeddings of the head and tail entities, and then predict the relationship from these entity embeddings.
Our hypothesis in this paper is that relation extraction models can be improved by capturing relationships in a more direct way.
arXiv Detail & Related papers (2023-12-18T09:58:19Z) - Relation Extraction Model Based on Semantic Enhancement Mechanism [19.700119359495663]
The CasAug model proposed in this paper is based on the CasRel framework combined with a semantic enhancement mechanism.
The experimental results show that, compared with the baseline model, the proposed CasAug model improves relation extraction performance.
arXiv Detail & Related papers (2023-11-05T04:40:39Z) - BitCoin: Bidirectional Tagging and Supervised Contrastive Learning based
Joint Relational Triple Extraction Framework [16.930809038479666]
We propose BitCoin, an innovative Bidirectional tagging and supervised Contrastive learning based joint relational triple extraction framework.
Specifically, we design a supervised contrastive learning method that considers multiple positives per anchor rather than restricting it to just one positive.
Our framework implements taggers in two directions, enabling triple extraction from subject to object and from object to subject.
arXiv Detail & Related papers (2023-09-21T07:55:54Z) - Learning Complete Topology-Aware Correlations Between Relations for Inductive Link Prediction [121.65152276851619]
We show that semantic correlations between relations are inherently edge-level and entity-independent.
We propose a novel subgraph-based method, namely TACO, to model Topology-Aware COrrelations between relations.
To further exploit the potential of RCN, we propose a Complete Common Neighbor induced subgraph.
arXiv Detail & Related papers (2023-09-20T08:11:58Z) - CARE: Co-Attention Network for Joint Entity and Relation Extraction [0.0]
We propose a Co-Attention network for joint entity and relation extraction.
Our approach adopts a parallel encoding strategy to learn separate representations for each subtask.
At the core of our approach is the co-attention module that captures two-way interaction between the two subtasks.
arXiv Detail & Related papers (2023-08-24T03:40:54Z) - ReSel: N-ary Relation Extraction from Scientific Text and Tables by
Learning to Retrieve and Select [53.071352033539526]
We study the problem of extracting N-ary relations from scientific articles.
Our proposed method ReSel decomposes this task into a two-stage procedure.
Our experiments on three scientific information extraction datasets show that ReSel outperforms state-of-the-art baselines significantly.
arXiv Detail & Related papers (2022-10-26T02:28:02Z) - HiURE: Hierarchical Exemplar Contrastive Learning for Unsupervised
Relation Extraction [60.80849503639896]
Unsupervised relation extraction aims to extract the relationship between entities from natural language sentences without prior information on relational scope or distribution.
We propose a novel contrastive learning framework named HiURE, which has the capability to derive hierarchical signals from relational feature space using cross hierarchy attention.
Experimental results on two public datasets demonstrate the advanced effectiveness and robustness of HiURE on unsupervised relation extraction when compared with state-of-the-art models.
arXiv Detail & Related papers (2022-05-04T17:56:48Z) - RelationPrompt: Leveraging Prompts to Generate Synthetic Data for
Zero-Shot Relation Triplet Extraction [65.4337085607711]
We introduce the task setting of Zero-Shot Relation Triplet Extraction (ZeroRTE).
Given an input sentence, each extracted triplet consists of the head entity, relation label, and tail entity where the relation label is not seen at the training stage.
We propose to synthesize relation examples by prompting language models to generate structured texts.
arXiv Detail & Related papers (2022-03-17T05:55:14Z) - SAIS: Supervising and Augmenting Intermediate Steps for Document-Level
Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z) - Bridging Text and Knowledge with Multi-Prototype Embedding for Few-Shot
Relational Triple Extraction [40.00702385889112]
We propose a novel multi-prototype embedding network model to jointly extract the composition of relational triples.
We design a hybrid learning mechanism that bridges text and knowledge concerning both entities and relations.
Experimental results demonstrate that the proposed method can improve the performance of the few-shot triple extraction.
arXiv Detail & Related papers (2020-10-30T04:18:39Z) - Cross-Supervised Joint-Event-Extraction with Heterogeneous Information
Networks [61.950353376870154]
Joint-event-extraction is a sequence-to-sequence labeling task with a tag set composed of tags of triggers and entities.
We propose a Cross-Supervised Mechanism (CSM) to alternately supervise the extraction of triggers or entities.
Our approach outperforms the state-of-the-art methods in both entity and trigger extraction.
arXiv Detail & Related papers (2020-10-13T11:51:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.