Relational Triple Extraction: One Step is Enough
- URL: http://arxiv.org/abs/2205.05270v1
- Date: Wed, 11 May 2022 05:09:14 GMT
- Title: Relational Triple Extraction: One Step is Enough
- Authors: Yu-Ming Shang, Heyan Huang, Xin Sun, Wei Wei, Xian-Ling Mao
- Abstract summary: We introduce a fresh perspective to revisit the triple extraction task, and propose a simple but effective model, named DirectRel.
Specifically, the proposed model first generates candidate entities by enumerating token sequences in a sentence, and then transforms the triple extraction task into a linking problem on a "head $\rightarrow$ tail" bipartite graph.
- Score: 41.90858952418927
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extracting relational triples from unstructured text is an essential task in
natural language processing and knowledge graph construction. Existing
approaches usually contain two fundamental steps: (1) finding the boundary
positions of head and tail entities; (2) concatenating specific tokens to form
triples. However, nearly all previous methods suffer from error accumulation: any
boundary-recognition error made for an entity in step (1) propagates into the
final combined triples. To address this problem, in
this paper, we introduce a fresh perspective to revisit the triple extraction
task, and propose a simple but effective model, named DirectRel. Specifically,
the proposed model first generates candidate entities by enumerating token
sequences in a sentence, and then transforms the triple extraction task into a
linking problem on a "head $\rightarrow$ tail" bipartite graph. By doing so,
all triples can be directly extracted in only one step. Extensive experimental
results on two widely used datasets demonstrate that the proposed model
performs better than the state-of-the-art baselines.
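To make the one-step formulation concrete, here is a minimal sketch in the spirit of the abstract: enumerate candidate entity spans, then score every head $\rightarrow$ tail edge of the bipartite graph for each relation. The span-length limit, the `link_score` function, and the threshold are illustrative assumptions, not the paper's actual architecture.

```python
# A minimal sketch (not the authors' code) of one-step extraction:
# enumerate candidate entity spans, then keep every (head, relation, tail)
# bipartite edge whose score clears a threshold.

def enumerate_spans(tokens, max_len=3):
    """Enumerate all token sub-sequences up to max_len as candidate entities."""
    spans = []
    for start in range(len(tokens)):
        for end in range(start + 1, min(start + max_len, len(tokens)) + 1):
            spans.append((start, end, " ".join(tokens[start:end])))
    return spans

def extract_triples(tokens, relations, link_score, threshold=0.5):
    """One step: score head -> tail links directly, no separate boundary step."""
    candidates = enumerate_spans(tokens)
    triples = []
    for head in candidates:
        for tail in candidates:
            if head == tail:
                continue
            for rel in relations:
                if link_score(head, rel, tail) > threshold:
                    triples.append((head[2], rel, tail[2]))
    return triples

# Toy scorer standing in for the trained pairwise model.
def toy_score(head, rel, tail):
    return 1.0 if (head[2], rel, tail[2]) == ("Paris", "capital_of", "France") else 0.0

print(extract_triples("Paris is the capital of France".split(),
                      ["capital_of"], toy_score))
# [('Paris', 'capital_of', 'France')]
```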
Related papers
- Prompt Based Tri-Channel Graph Convolution Neural Network for Aspect Sentiment Triplet Extraction [63.0205418944714]
Aspect Sentiment Triplet Extraction (ASTE) is an emerging task to extract a given sentence's triplets, which consist of aspects, opinions, and sentiments.
Recent studies tend to address this task with a table-filling paradigm, wherein word relations are encoded in a two-dimensional table.
We propose a novel model for the ASTE task, called Prompt-based Tri-Channel Graph Convolution Neural Network (PT-GCN), which converts the relation table into a graph to explore more comprehensive relational information.
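As a rough illustration (not PT-GCN itself), the table-filling paradigm and its graph reading can be sketched as follows; the tokens and tag ids are invented for the example.

```python
# A minimal sketch of table filling: word-pair relations live in an n x n
# table, and any filled cell can equally be read as an edge of a word-level
# graph, which is what lets a graph convolution operate on the same data.
import numpy as np

tokens = ["battery", "life", "is", "great"]
n = len(tokens)

# Hypothetical tag ids: 0 = no relation, 1 = aspect-opinion pair.
table = np.zeros((n, n), dtype=int)
table[0, 3] = 1   # ("battery", "great") marked as an aspect-opinion pair
table[1, 3] = 1   # ("life", "great")

# Reading the table as a graph: each nonzero cell (i, j) is a directed edge.
edges = [(tokens[i], tokens[j]) for i, j in zip(*np.nonzero(table))]
print(edges)  # [('battery', 'great'), ('life', 'great')]
```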
arXiv Detail & Related papers (2023-12-18T12:46:09Z)
- BitCoin: Bidirectional Tagging and Supervised Contrastive Learning based Joint Relational Triple Extraction Framework [16.930809038479666]
We propose BitCoin, an innovative Bidirectional tagging and supervised Contrastive learning based joint relational triple extraction framework.
Specifically, we design a supervised contrastive learning method that considers multiple positives per anchor rather than restricting each anchor to a single positive.
Our framework implements taggers in two directions, enabling triple extraction from subject to object and from object to subject.
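A hedged sketch of the multi-positive supervised contrastive idea mentioned above; the loss form follows the standard supervised contrastive formulation and is not taken from the BitCoin release.

```python
# A minimal sketch of a supervised contrastive loss that admits several
# positives per anchor: every same-label sample in the batch is a positive.
import torch
import torch.nn.functional as F

def sup_con_loss(embeddings, labels, temperature=0.1):
    z = F.normalize(embeddings, dim=1)            # unit-norm embeddings
    sim = z @ z.T / temperature                   # pairwise similarities
    self_mask = torch.eye(len(labels), dtype=torch.bool)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # Log-softmax over every non-self pair in the batch.
    logits = sim.masked_fill(self_mask, float("-inf"))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average the log-probability over each anchor's (possibly many) positives.
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    return loss.mean()

loss = sup_con_loss(torch.randn(8, 16), torch.tensor([0, 0, 1, 1, 1, 2, 2, 2]))
print(loss.item())
```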
arXiv Detail & Related papers (2023-09-21T07:55:54Z)
- Query-based Instance Discrimination Network for Relational Triple Extraction [39.35417927570248]
Joint entity and relation extraction has been a core task in the field of information extraction.
Recent approaches usually consider the extraction of relational triples from a stereoscopic perspective.
We propose a novel query-based approach to construct instance-level representations for relational triples.
arXiv Detail & Related papers (2022-11-03T13:34:56Z)
- RelationPrompt: Leveraging Prompts to Generate Synthetic Data for Zero-Shot Relation Triplet Extraction [65.4337085607711]
We introduce the task setting of Zero-Shot Relation Triplet Extraction (ZeroRTE).
Given an input sentence, each extracted triplet consists of a head entity, a relation label, and a tail entity, where the relation label is unseen at training time.
We propose to synthesize relation examples by prompting language models to generate structured texts.
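The prompting-and-parsing loop can be sketched roughly as follows; the prompt wording, the `Head:`/`Tail:` markers, and the placeholder output are our assumptions rather than RelationPrompt's exact template.

```python
# A minimal sketch of synthesizing relation examples: prompt a language
# model with an unseen relation label, then parse the structured text it
# returns into a (sentence, head, tail) training example.

def build_prompt(relation):
    return (f"Relation: {relation}\n"
            f"Generate a sentence and mark its triplet as "
            f"Head: ... Tail: ...\nSentence:")

def parse_generation(text):
    """Parse 'Sentence ... Head: X Tail: Y' style structured output."""
    sentence, _, rest = text.partition("Head:")
    head, _, tail = rest.partition("Tail:")
    return sentence.strip(), head.strip(), tail.strip()

print(build_prompt("place_of_birth"))

# Placeholder standing in for an actual language-model generation.
fake_output = ("Marie Curie was born in Warsaw. "
               "Head: Marie Curie Tail: Warsaw")
print(parse_generation(fake_output))
# ('Marie Curie was born in Warsaw.', 'Marie Curie', 'Warsaw')
```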
arXiv Detail & Related papers (2022-03-17T05:55:14Z)
- OneRel: Joint Entity and Relation Extraction with One Module in One Step [42.576188878294886]
Joint entity and relation extraction is an essential task in natural language processing and knowledge graph construction.
We propose a novel joint entity and relation extraction model, named OneRel, which casts joint extraction as a fine-grained triple classification problem.
arXiv Detail & Related papers (2022-03-10T15:09:59Z)
- TDRE: A Tensor Decomposition Based Approach for Relation Extraction [6.726803950083593]
Extracting entity pairs along with relation types from unstructured texts is a fundamental subtask of information extraction.
In this paper, we first model the final triplet extraction result as a third-order tensor of word-to-word pairs enriched with each relation type.
The proposed method outperforms existing strong baselines.
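A minimal sketch of the tensor view: triple scores form a third-order (head word, tail word, relation) tensor, shown here with an assumed CP-style low-rank factorization standing in for the paper's learned decomposition.

```python
# A minimal sketch: triple scores as a third-order tensor over
# (head word, tail word, relation), built from low-rank factors
# rather than stored densely.
import numpy as np

n_words, n_rels, rank = 6, 4, 8
rng = np.random.default_rng(0)
# Learned factors would come from the encoder; random values stand in here.
H = rng.normal(size=(n_words, rank))   # head-word factors
T = rng.normal(size=(n_words, rank))   # tail-word factors
R = rng.normal(size=(n_rels, rank))    # relation factors

# CP-style reconstruction: score[i, j, r] = sum_k H[i,k] * T[j,k] * R[r,k]
scores = np.einsum("ik,jk,rk->ijr", H, T, R)
print(scores.shape)  # (6, 6, 4): one score per (head, tail, relation)
```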
arXiv Detail & Related papers (2020-10-15T05:29:34Z)
- Position-Aware Tagging for Aspect Sentiment Triplet Extraction [37.76744150888183]
Aspect Sentiment Triplet Extraction (ASTE) is the task of extracting the triplets of target entities, their associated sentiment, and opinion spans explaining the reason for the sentiment.
Our observation is that the three elements within a triplet are highly related to each other, and this motivates us to build a joint model to extract such triplets.
We propose the first end-to-end model with a novel position-aware tagging scheme that is capable of jointly extracting the triplets.
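A small sketch of what a position-aware tag sequence might look like and how it decodes to triplets; the tag syntax and offset convention here are illustrative, not the paper's exact scheme.

```python
# A minimal sketch of position-aware tagging: the B tag on a target token
# also carries the sentiment and the offsets of the opinion span explaining
# it, so a single tag sequence decodes into complete triplets.
import re

tokens = ["The", "battery", "life", "is", "great"]
tags = ["O", "B(POS,+3,+3)", "I", "O", "O"]  # opinion "great" sits 3 tokens right

def decode(tokens, tags):
    """Recover (target, sentiment, opinion) triplets from the tag sequence."""
    triplets = []
    for i, tag in enumerate(tags):
        m = re.match(r"B\((\w+),([+-]\d+),([+-]\d+)\)", tag)
        if not m:
            continue
        sentiment, lo, hi = m.group(1), int(m.group(2)), int(m.group(3))
        end = i + 1
        while end < len(tags) and tags[end] == "I":
            end += 1                      # extend over the I-tagged target span
        target = " ".join(tokens[i:end])
        opinion = " ".join(tokens[i + lo : i + hi + 1])
        triplets.append((target, sentiment, opinion))
    return triplets

print(decode(tokens, tags))  # [('battery life', 'POS', 'great')]
```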
arXiv Detail & Related papers (2020-10-06T10:40:34Z)
- Contrastive Triple Extraction with Generative Transformer [72.21467482853232]
We introduce a novel model, contrastive triple extraction with a generative transformer.
Specifically, we introduce a single shared transformer module for encoder-decoder-based generation.
To generate faithful results, we propose a novel triplet contrastive training objective.
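One plausible reading of such an objective, sketched with an assumed sequence-level margin loss (the paper's exact formulation may differ): the decoder should score the gold triple sequence above a corrupted one.

```python
# A minimal sketch (loss form assumed, not the paper's exact objective) of a
# triplet contrastive objective for generation: push the log-likelihood of
# the gold triple sequence above that of a corrupted one by a margin.
import torch
import torch.nn.functional as F

def contrastive_margin_loss(gold_logprob, corrupted_logprob, margin=1.0):
    # Hinge on sequence-level log-likelihoods of gold vs. corrupted triples.
    return F.relu(margin - (gold_logprob - corrupted_logprob)).mean()

gold = torch.tensor([-2.0, -1.5])        # log p(gold triple | sentence)
corrupt = torch.tensor([-2.2, -0.9])     # log p(corrupted triple | sentence)
print(contrastive_margin_loss(gold, corrupt))
```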
arXiv Detail & Related papers (2020-09-14T05:29:24Z)
- Pre-training for Abstractive Document Summarization by Reinstating Source Text [105.77348528847337]
This paper presents three pre-training objectives which allow us to pre-train a Seq2Seq based abstractive summarization model on unlabeled text.
Experiments on two benchmark summarization datasets show that all three objectives can improve performance upon baselines.
arXiv Detail & Related papers (2020-04-04T05:06:26Z)