Bridging Text and Knowledge with Multi-Prototype Embedding for Few-Shot
Relational Triple Extraction
- URL: http://arxiv.org/abs/2010.16059v1
- Date: Fri, 30 Oct 2020 04:18:39 GMT
- Title: Bridging Text and Knowledge with Multi-Prototype Embedding for Few-Shot
Relational Triple Extraction
- Authors: Haiyang Yu, Ningyu Zhang, Shumin Deng, Hongbin Ye, Wei Zhang, Huajun
Chen
- Abstract summary: We propose a novel multi-prototype embedding network model to jointly extract the composition of relational triples.
We design a hybrid learning mechanism that bridges text and knowledge concerning both entities and relations.
Experimental results demonstrate that the proposed method improves the performance of few-shot triple extraction.
- Score: 40.00702385889112
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current supervised relational triple extraction approaches require huge
amounts of labeled data and thus suffer from poor performance in few-shot
settings. However, people can grasp new knowledge from only a few instances.
Motivated by this, we take the first step toward studying few-shot relational triple
extraction, which has not been well understood. Unlike previous single-task
few-shot problems, relational triple extraction is more challenging as the
entities and relations have implicit correlations. In this paper, we propose a
novel multi-prototype embedding network model to jointly extract the
composition of relational triples, namely, entity pairs and corresponding
relations. To be specific, we design a hybrid prototypical learning mechanism
that bridges text and knowledge concerning both entities and relations. Thus,
implicit correlations between entities and relations are injected.
Additionally, we propose a prototype-aware regularization to learn more
representative prototypes. Experimental results demonstrate that the proposed
method improves the performance of few-shot triple extraction.
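
As a rough illustration of the prototype-based few-shot classification this line of work builds on (a minimal sketch, not the paper's multi-prototype network; the fusion function and all names below are assumptions):

    import torch

    def class_prototypes(support_emb, support_labels, n_classes):
        # Mean-pool support embeddings per class (Prototypical Networks style).
        return torch.stack([support_emb[support_labels == c].mean(dim=0)
                            for c in range(n_classes)])

    def hybrid_prototypes(text_protos, kg_emb, alpha=0.5):
        # Hypothetical fusion of text prototypes with knowledge-graph
        # embeddings; the paper's bridging mechanism may differ. A convex
        # combination is one simple instance.
        return alpha * text_protos + (1.0 - alpha) * kg_emb

    def classify_queries(query_emb, protos):
        # Score queries by negative squared Euclidean distance to prototypes.
        d2 = torch.cdist(query_emb, protos) ** 2   # [n_query, n_classes]
        return (-d2).softmax(dim=-1)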
Related papers
- BitCoin: Bidirectional Tagging and Supervised Contrastive Learning based Joint Relational Triple Extraction Framework [16.930809038479666]
We propose BitCoin, an innovative Bidirectional tagging and supervised Contrastive learning based joint relational triple extraction framework.
Specifically, we design a supervised contrastive learning method that considers multiple positives per anchor rather than restricting it to just one positive.
Our framework implements taggers in two directions, enabling triple extraction both from subject to object and from object to subject.
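
The multiple-positives idea corresponds to the standard supervised contrastive loss of Khosla et al. (2020); a minimal sketch, with illustrative shapes and temperature rather than BitCoin's exact formulation:

    import torch
    import torch.nn.functional as F

    def supcon_loss(emb, labels, temperature=0.1):
        # emb: [n, d] representations; labels: [n] integer class ids.
        emb = F.normalize(emb, dim=-1)
        sim = emb @ emb.t() / temperature
        self_mask = torch.eye(len(labels), dtype=torch.bool)
        # Every same-label example (except the anchor itself) is a positive.
        pos = (labels[:, None] == labels[None, :]) & ~self_mask
        logits = sim.masked_fill(self_mask, float('-inf'))
        log_prob = logits - logits.logsumexp(dim=1, keepdim=True)
        # Average the log-probability over all positives of each anchor.
        n_pos = pos.sum(dim=1).clamp(min=1)
        return -(log_prob.masked_fill(~pos, 0.0)).sum(dim=1).div(n_pos).mean()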
arXiv Detail & Related papers (2023-09-21T07:55:54Z)
- Query-based Instance Discrimination Network for Relational Triple Extraction [39.35417927570248]
Joint entity and relation extraction has been a core task in the field of information extraction.
Recent approaches usually consider the extraction of relational triples from a stereoscopic perspective.
We propose a novel query-based approach to construct instance-level representations for relational triples.
arXiv Detail & Related papers (2022-11-03T13:34:56Z)
- RelationPrompt: Leveraging Prompts to Generate Synthetic Data for Zero-Shot Relation Triplet Extraction [65.4337085607711]
We introduce the task setting of Zero-Shot Relation Triplet Extraction (ZeroRTE).
Given an input sentence, each extracted triplet consists of a head entity, a relation label, and a tail entity, where the relation label is not seen at the training stage.
We propose to synthesize relation examples by prompting language models to generate structured texts.
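
A sketch of this synthesis pattern, with a hypothetical prompt template and parsing convention rather than RelationPrompt's actual ones:

    # Hypothetical prompt: condition a language model on an unseen relation
    # label and ask it to emit a sentence together with a structured triplet.
    TEMPLATE = (
        "Relation: {rel}\n"
        "Write one sentence expressing this relation, then give the triplet as\n"
        "Head: ..., Tail: ...\n"
        "Sentence:"
    )

    def build_prompt(rel: str) -> str:
        return TEMPLATE.format(rel=rel)

    # The LM completion is parsed back into (head, rel, tail) and used as
    # synthetic training data for relations unseen at training time.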
arXiv Detail & Related papers (2022-03-17T05:55:14Z)
- OneRel: Joint Entity and Relation Extraction with One Module in One Step [42.576188878294886]
Joint entity and relation extraction is an essential task in natural language processing and knowledge graph construction.
We propose a novel joint entity and relation extraction model, named OneRel, which casts joint extraction as a fine-grained triple classification problem.
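
A generic way to realize fine-grained triple classification is to score every (head token, relation, tail token) cell in one pass; a sketch, since OneRel's relation-specific tagging scheme is more elaborate:

    import torch

    def triple_scores(token_emb, rel_emb):
        # token_emb: [n, d] token states; rel_emb: [r, d] relation embeddings.
        h = token_emb[:, None, None, :]   # head token -> [n, 1, 1, d]
        r = rel_emb[None, :, None, :]     # relation   -> [1, r, 1, d]
        t = token_emb[None, None, :, :]   # tail token -> [1, 1, n, d]
        # Trilinear score for every (head, relation, tail) cell: [n, r, n].
        return (h * r * t).sum(dim=-1)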
arXiv Detail & Related papers (2022-03-10T15:09:59Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- PRGC: Potential Relation and Global Correspondence Based Joint Relational Triple Extraction [23.998135821388203]
We propose a joint relational triple extraction framework based on Potential Relation and Global Correspondence (PRGC).
PRGC achieves state-of-the-art performance on public benchmarks with higher efficiency and delivers consistent performance gain on complex scenarios of overlapping triples.
arXiv Detail & Related papers (2021-06-18T03:38:07Z)
- Learning Relation Prototype from Unlabeled Texts for Long-tail Relation Extraction [84.64435075778988]
We propose a general approach to learn relation prototypes from unlabeled texts.
We learn relation prototypes as an implicit factor between entities.
We conduct experiments on two publicly available datasets: New York Times and Google Distant Supervision.
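
One simple reading of a relation as an implicit factor between entities is a TransE-style offset between entity embeddings; a hedged sketch, as the paper's actual construction may differ:

    import torch

    def relation_prototype(head_emb, tail_emb):
        # head_emb, tail_emb: [n_pairs, d] embeddings of entity pairs mined
        # from unlabeled text; the prototype is their mean offset.
        return (tail_emb - head_emb).mean(dim=0)

    def score_pair(h, t, proto):
        # Smaller offset distance => pair more likely to express the relation.
        return -torch.linalg.norm((t - h) - proto)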
arXiv Detail & Related papers (2020-11-27T06:21:12Z)
- Learning to Decouple Relations: Few-Shot Relation Classification with Entity-Guided Attention and Confusion-Aware Training [49.9995628166064]
We propose CTEG, a model equipped with two mechanisms to learn to decouple easily-confused relations.
On the one hand, an EGA mechanism is introduced to guide the attention to filter out information causing confusion.
On the other hand, a Confusion-Aware Training (CAT) method is proposed to explicitly learn to distinguish relations.
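
A minimal sketch of what an entity-guided attention gate can look like; the gating form and names are assumptions, not CTEG's exact EGA mechanism:

    import torch

    def entity_guided_attention(hidden, entity_emb):
        # hidden: [seq_len, d] token states; entity_emb: [d] for one entity.
        # Tokens more aligned with the entity receive higher attention weight,
        # filtering out confusion-inducing context.
        weights = (hidden @ entity_emb).softmax(dim=0)   # [seq_len]
        return weights @ hidden                          # [d] pooled context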
arXiv Detail & Related papers (2020-10-21T11:07:53Z)
- Contrastive Triple Extraction with Generative Transformer [72.21467482853232]
We introduce a novel model, contrastive triple extraction with a generative transformer.
Specifically, we introduce a single shared transformer module for encoder-decoder-based generation.
To generate faithful results, we propose a novel triplet contrastive training objective.
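
A generic margin-based form of such a triplet contrastive objective, assuming scalar faithfulness scores for generated triples; not necessarily the paper's exact loss:

    import torch.nn.functional as F

    def triplet_contrastive_loss(pos_score, neg_score, margin=1.0):
        # Push scores of faithful (ground-truth) triples above corrupted
        # ones by at least `margin`.
        return F.relu(margin - pos_score + neg_score).mean()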
arXiv Detail & Related papers (2020-09-14T05:29:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.