IPED: An Implicit Perspective for Relational Triple Extraction based on
Diffusion Model
- URL: http://arxiv.org/abs/2403.00808v1
- Date: Sat, 24 Feb 2024 14:18:11 GMT
- Title: IPED: An Implicit Perspective for Relational Triple Extraction based on
Diffusion Model
- Authors: Jianli Zhao, Changhao Xu, Bin Jiang
- Abstract summary: We propose an Implicit Perspective for relational triple Extraction based on Diffusion model (IPED).
Our solution adopts an implicit strategy using block coverage to complete the tables, avoiding the limitations of explicit tagging methods.
- Score: 7.894136732348917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Relational triple extraction is a fundamental task in the field of
information extraction, and a promising framework based on table filling has
recently gained attention as a potential baseline for entity relation
extraction. However, inherent shortcomings such as redundant information and
incomplete triple recognition remain problematic. To address these challenges,
we propose an Implicit Perspective for relational triple Extraction based on
Diffusion model (IPED), an innovative approach for extracting relational
triples. Our classifier-free solution adopts an implicit strategy using block
coverage to complete the tables, avoiding the limitations of explicit tagging
methods. Additionally, we introduce a generative model structure, the
block-denoising diffusion model, to collaborate with our implicit perspective
and effectively circumvent redundant information disruptions. Experimental
results on two popular datasets demonstrate that IPED achieves state-of-the-art
performance with superior inference speed and low computational
complexity. To support future research, we have made our source code publicly
available online.
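As background for the table-filling framing the abstract builds on, here is a minimal sketch of decoding triples from a filled token-pair table. The tokens, relation labels, and table contents are hypothetical illustrations; this is not IPED's block-coverage scheme, only the baseline idea it departs from.

```python
# Illustrative sketch (not IPED's method): in table filling for relational
# triple extraction, a model fills a token-pair table, and each non-empty
# cell (i, j) asserts a relation between token i (subject) and token j
# (object). Decoding reads triples back out of the filled table.

tokens = ["Paris", "is", "the", "capital", "of", "France"]

# Hypothetical filled table: {(subject_index, object_index): relation}
table = {
    (0, 5): "capital_of",
}

def decode_triples(tokens, table):
    """Read (subject, relation, object) triples out of a filled table."""
    return [(tokens[i], rel, tokens[j])
            for (i, j), rel in sorted(table.items())]

print(decode_triples(tokens, table))
# [('Paris', 'capital_of', 'France')]
```

Explicit tagging schemes assign such per-cell labels directly; the abstract's point is that IPED instead covers the table implicitly with blocks to avoid the shortcomings of cell-level tags.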
Related papers
- BitCoin: Bidirectional Tagging and Supervised Contrastive Learning based
Joint Relational Triple Extraction Framework [16.930809038479666]
We propose BitCoin, an innovative Bidirectional tagging and supervised Contrastive learning based joint relational triple extraction framework.
Specifically, we design a supervised contrastive learning method that considers multiple positives per anchor rather than restricting it to just one positive.
Our framework implements taggers in two directions, enabling triple extraction from subject to object and from object to subject.
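The multi-positive contrastive idea mentioned above can be sketched with the standard supervised contrastive (SupCon) loss, which averages over all same-label positives per anchor. This is a toy illustration with made-up 2-D embeddings, not BitCoin's exact loss.

```python
import math

def supcon_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive loss with multiple positives per anchor
    (SupCon form; a sketch, not BitCoin's exact objective)."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    n, total = len(embeddings), 0.0
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue
        # Denominator: the anchor's similarity to every other sample.
        denom = sum(math.exp(dot(embeddings[i], embeddings[a]) / tau)
                    for a in range(n) if a != i)
        # Average the log-likelihood over ALL positives, not just one.
        total -= sum(math.log(math.exp(dot(embeddings[i], embeddings[p]) / tau) / denom)
                     for p in positives) / len(positives)
    return total / n

labels = [0, 0, 1, 1]  # two classes, two samples each
aligned = [(1.0, 0.0), (1.0, 0.0), (0.0, 1.0), (0.0, 1.0)]
mixed = [(1.0, 0.0), (0.0, 1.0), (1.0, 0.0), (0.0, 1.0)]
# Embeddings clustered by label should yield the lower loss.
print(supcon_loss(aligned, labels) < supcon_loss(mixed, labels))
```

Allowing several positives per anchor is what distinguishes this from the one-positive contrastive setups the summary contrasts against.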
arXiv Detail & Related papers (2023-09-21T07:55:54Z)
- CARE: Co-Attention Network for Joint Entity and Relation Extraction [0.0]
We propose a Co-Attention network for joint entity and relation extraction.
Our approach includes adopting a parallel encoding strategy to learn separate representations for each subtask.
At the core of our approach is the co-attention module that captures two-way interaction between the two subtasks.
arXiv Detail & Related papers (2023-08-24T03:40:54Z)
- A Dataset for Hyper-Relational Extraction and a Cube-Filling Approach [59.89749342550104]
We propose the task of hyper-relational extraction to extract more specific and complete facts from text.
Existing models cannot perform hyper-relational extraction, as it requires a model to consider the interaction among three entities.
We propose CubeRE, a cube-filling model inspired by table-filling approaches that explicitly considers the interaction between relation triplets and qualifiers.
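To make the cube-filling idea concrete, here is a toy sketch that extends the 2-D table of pair-based methods to a 3-D cube indexed by subject, object, and qualifier value. The tokens, labels, and cube contents are hypothetical; this illustrates the data structure, not CubeRE's actual model.

```python
# Illustrative sketch of cube filling for hyper-relational extraction:
# a cell (s, o, q) relates subject token s, object token o, and
# qualifier-value token q; its label packs the main relation and the
# qualifier relation. All names below are made up for illustration.

tokens = ["Einstein", "won", "the", "Nobel", "Prize", "in", "1921"]

# Hypothetical filled cube: {(subj, obj, qual_value): (relation, qualifier)}
cube = {
    (0, 4, 6): ("award_received", "point_in_time"),
}

def decode_hyper_facts(tokens, cube):
    """Read (subject, relation, object, qualifier, value) facts from the cube."""
    return [(tokens[s], rel, tokens[o], qual, tokens[q])
            for (s, o, q), (rel, qual) in sorted(cube.items())]

print(decode_hyper_facts(tokens, cube))
```

A plain table can only couple two entities per cell; adding the third axis is what lets a single cell capture the triple together with its qualifier.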
arXiv Detail & Related papers (2022-11-18T03:51:28Z)
- Towards Realistic Low-resource Relation Extraction: A Benchmark with Empirical Baseline Study [51.33182775762785]
This paper presents an empirical study to build relation extraction systems in low-resource settings.
We investigate three schemes to evaluate the performance in low-resource settings: (i) different types of prompt-based methods with few-shot labeled data; (ii) diverse balancing methods to address the long-tailed distribution issue; and (iii) data augmentation technologies and self-training to generate more labeled in-domain data.
arXiv Detail & Related papers (2022-10-19T15:46:37Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- D-REX: Dialogue Relation Extraction with Explanations [65.3862263565638]
This work focuses on extracting explanations that indicate that a relation exists while using only partially labeled data.
We propose our model-agnostic framework, D-REX, a policy-guided semi-supervised algorithm that explains and ranks relations.
We find that about 90% of the time, human annotators prefer D-REX's explanations over a strong BERT-based joint relation extraction and explanation model.
arXiv Detail & Related papers (2021-09-10T22:30:48Z)
- Adjacency List Oriented Relational Fact Extraction via Adaptive Multi-task Learning [24.77542721790553]
We show that all of the fact extraction models can be organized according to a graph-oriented analytical perspective.
An efficient model, aDjacency lIst oRientational faCT (Direct), is proposed based on this analytical framework.
arXiv Detail & Related papers (2021-06-03T02:57:08Z)
- Bridging Text and Knowledge with Multi-Prototype Embedding for Few-Shot Relational Triple Extraction [40.00702385889112]
We propose a novel multi-prototype embedding network model to jointly extract the composition of relational triples.
We design a hybrid learning mechanism that bridges text and knowledge concerning both entities and relations.
Experimental results demonstrate that the proposed method can improve the performance of few-shot triple extraction.
arXiv Detail & Related papers (2020-10-30T04:18:39Z)
- Contrastive Triple Extraction with Generative Transformer [72.21467482853232]
We introduce a novel model, contrastive triple extraction with a generative transformer.
Specifically, we introduce a single shared transformer module for encoder-decoder-based generation.
To generate faithful results, we propose a novel triplet contrastive training objective.
arXiv Detail & Related papers (2020-09-14T05:29:24Z)
- HittER: Hierarchical Transformers for Knowledge Graph Embeddings [85.93509934018499]
We propose HittER to learn representations of entities and relations in a complex knowledge graph.
Experimental results show that HittER achieves new state-of-the-art results on multiple link prediction datasets.
We additionally propose a simple approach to integrate HittER into BERT and demonstrate its effectiveness on two Freebase factoid answering datasets.
arXiv Detail & Related papers (2020-08-28T18:58:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.