BitCoin: Bidirectional Tagging and Supervised Contrastive Learning based
Joint Relational Triple Extraction Framework
- URL: http://arxiv.org/abs/2309.11853v1
- Date: Thu, 21 Sep 2023 07:55:54 GMT
- Title: BitCoin: Bidirectional Tagging and Supervised Contrastive Learning based
Joint Relational Triple Extraction Framework
- Authors: Luyao He, Zhongbao Zhang, Sen Su, Yuxin Chen
- Abstract summary: We propose BitCoin, an innovative Bidirectional tagging and supervised Contrastive learning based joint relational triple extraction framework.
Specifically, we design a supervised contrastive learning method that considers multiple positives per anchor rather than restricting it to just one positive.
Our framework implements taggers in two directions, enabling triple extraction from subject to object and from object to subject.
- Score: 16.930809038479666
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Relation triple extraction (RTE) is an essential task in information
extraction and knowledge graph construction. Despite recent advancements,
existing methods still exhibit certain limitations. They employ only
generic pre-trained models and do not account for the specific
characteristics of RTE tasks. Moreover, existing tagging-based approaches typically decompose the RTE
task into two subtasks, first identifying subjects and then
identifying objects and relations. They focus solely on extracting relational
triples from subject to object, overlooking the fact that if subject
extraction fails, all triples associated with that subject are lost.
To address these issues, we propose BitCoin, an innovative Bidirectional
tagging and supervised Contrastive learning based joint relational triple
extraction framework. Specifically, we design a supervised contrastive learning
method that considers multiple positives per anchor rather than restricting it
to just one positive. Furthermore, a penalty term is introduced to prevent
excessive similarity between the subject and object. Our framework implements
taggers in two directions, enabling triple extraction from subject to object
and object to subject. Experimental results show that BitCoin achieves
state-of-the-art results on the benchmark datasets and significantly improves
the F1 score on Normal, SEO, EPO, and multiple relation extraction tasks.
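The multi-positive contrastive objective described in the abstract can be sketched as follows. This is a minimal illustration in the style of standard supervised contrastive learning (SupCon), not BitCoin's exact loss: the function name and temperature value are illustrative, and the paper's penalty term discouraging subject-object similarity is omitted since its form is not given in this summary.

```python
import numpy as np

def supcon_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive loss with multiple positives per anchor.

    For each anchor, every other sample sharing its label is a positive;
    all remaining samples act as negatives in the softmax denominator.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / tau          # pairwise cosine similarities, scaled
    n = len(labels)
    total, anchors = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue               # anchors without positives are skipped
        denom = sum(np.exp(sim[i, a]) for a in range(n) if a != i)
        # average the log-likelihood over ALL positives, not just one
        total -= sum(np.log(np.exp(sim[i, p]) / denom)
                     for p in positives) / len(positives)
        anchors += 1
    return total / max(anchors, 1)
```

Pulling same-label pairs together lowers this loss: identical embeddings for matching labels give a near-zero value, while mismatched labels give a large one.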
Related papers
- A Generalization Theory of Cross-Modality Distillation with Contrastive Learning [49.35244441141323]
Cross-modality distillation arises as an important topic for data modalities containing limited knowledge.
We formulate a general framework of cross-modality contrastive distillation (CMCD), built upon contrastive learning.
Our algorithm outperforms existing algorithms consistently by a margin of 2-3% across diverse modalities and tasks.
arXiv Detail & Related papers (2024-05-06T11:05:13Z)
- Prompt Based Tri-Channel Graph Convolution Neural Network for Aspect Sentiment Triplet Extraction [63.0205418944714]
Aspect Sentiment Triplet Extraction (ASTE) is an emerging task to extract a given sentence's triplets, which consist of aspects, opinions, and sentiments.
Recent studies tend to address this task with a table-filling paradigm, wherein word relations are encoded in a two-dimensional table.
We propose a novel model for the ASTE task, called Prompt-based Tri-Channel Graph Convolution Neural Network (PT-GCN), which converts the relation table into a graph to explore more comprehensive relational information.
arXiv Detail & Related papers (2023-12-18T12:46:09Z)
- CARE: Co-Attention Network for Joint Entity and Relation Extraction [0.0]
We propose a Co-Attention network for joint entity and relation extraction.
Our approach includes adopting a parallel encoding strategy to learn separate representations for each subtask.
At the core of our approach is the co-attention module that captures two-way interaction between the two subtasks.
arXiv Detail & Related papers (2023-08-24T03:40:54Z)
- ReSel: N-ary Relation Extraction from Scientific Text and Tables by Learning to Retrieve and Select [53.071352033539526]
We study the problem of extracting N-ary relations from scientific articles.
Our proposed method ReSel decomposes this task into a two-stage procedure.
Our experiments on three scientific information extraction datasets show that ReSel outperforms state-of-the-art baselines significantly.
arXiv Detail & Related papers (2022-10-26T02:28:02Z)
- OneRel: Joint Entity and Relation Extraction with One Module in One Step [42.576188878294886]
Joint entity and relation extraction is an essential task in natural language processing and knowledge graph construction.
We propose a novel joint entity and relation extraction model, named OneRel, which casts joint extraction as a fine-grained triple classification problem.
arXiv Detail & Related papers (2022-03-10T15:09:59Z)
- A Simple but Effective Bidirectional Extraction Framework for Relational Triple Extraction [0.9926500244448218]
Tagging-based relational triple extraction methods have recently attracted growing research attention.
Most of these methods adopt a unidirectional extraction framework that first extracts all subjects and then simultaneously extracts objects and relations based on the extracted subjects.
This framework has an obvious deficiency: it is too sensitive to the subject extraction results.
We propose a bidirectional extraction framework that extracts triples based on the entity pairs identified from two complementary directions.
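The complementary-direction idea can be illustrated with a toy merge step. The function name and the example triples below are hypothetical, assuming each direction's tagger emits normalized (subject, relation, object) triples:

```python
def merge_bidirectional(forward_triples, backward_triples):
    """Union the triples found by the subject-first and object-first taggers.

    A triple missed in one direction (e.g. because its subject was never
    detected) can still be recovered from the other direction.
    """
    return set(forward_triples) | set(backward_triples)

# Hypothetical outputs: the forward tagger missed the "Seine" subject.
fwd = {("Paris", "capital_of", "France")}
bwd = {("Paris", "capital_of", "France"),
       ("Seine", "flows_through", "Paris")}
merged = merge_bidirectional(fwd, bwd)
```

Taking the union rather than the intersection is what makes the scheme robust to a single direction's extraction failure.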
arXiv Detail & Related papers (2021-12-09T14:17:33Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- PRGC: Potential Relation and Global Correspondence Based Joint Relational Triple Extraction [23.998135821388203]
We propose a joint relational triple extraction framework based on Potential Relation and Global Correspondence (PRGC).
PRGC achieves state-of-the-art performance on public benchmarks with higher efficiency and delivers consistent performance gain on complex scenarios of overlapping triples.
arXiv Detail & Related papers (2021-06-18T03:38:07Z)
- Bridging Text and Knowledge with Multi-Prototype Embedding for Few-Shot Relational Triple Extraction [40.00702385889112]
We propose a novel multi-prototype embedding network model to jointly extract the composition of relational triples.
We design a hybrid learning mechanism that bridges text and knowledge concerning both entities and relations.
Experimental results demonstrate that the proposed method can improve the performance of few-shot triple extraction.
arXiv Detail & Related papers (2020-10-30T04:18:39Z)
- Cross-Supervised Joint-Event-Extraction with Heterogeneous Information Networks [61.950353376870154]
Joint-event-extraction is a sequence-to-sequence labeling task with a tag set composed of trigger and entity tags.
We propose a Cross-Supervised Mechanism (CSM) to alternately supervise the extraction of triggers or entities.
Our approach outperforms the state-of-the-art methods in both entity and trigger extraction.
arXiv Detail & Related papers (2020-10-13T11:51:17Z)
- An Iterative Multi-Knowledge Transfer Network for Aspect-Based Sentiment Analysis [73.7488524683061]
We propose a novel Iterative Multi-Knowledge Transfer Network (IMKTN) for end-to-end ABSA.
Our IMKTN transfers the task-specific knowledge from any two of the three subtasks to another one at the token level by utilizing a well-designed routing algorithm.
Experimental results on three benchmark datasets demonstrate the effectiveness and superiority of our approach.
arXiv Detail & Related papers (2020-04-04T13:49:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.