From Discrimination to Generation: Knowledge Graph Completion with
Generative Transformer
- URL: http://arxiv.org/abs/2202.02113v2
- Date: Tue, 8 Feb 2022 14:07:30 GMT
- Title: From Discrimination to Generation: Knowledge Graph Completion with
Generative Transformer
- Authors: Xin Xie, Ningyu Zhang, Zhoubo Li, Shumin Deng, Hui Chen, Feiyu Xiong,
Mosha Chen, Huajun Chen
- Abstract summary: We present GenKGC, an approach that converts knowledge graph completion into a sequence-to-sequence generation task with a pre-trained language model.
We introduce relation-guided demonstration and entity-aware hierarchical decoding for better representation learning and fast inference.
We also release a new large-scale Chinese knowledge graph dataset, AliopenKG500, for research purposes.
- Score: 41.69537736842654
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graph completion aims to extend a knowledge graph (KG) with
missing triples. In this paper, we present GenKGC, an approach that converts
knowledge graph completion into a sequence-to-sequence generation task with a
pre-trained language model. We further introduce relation-guided demonstration
and entity-aware hierarchical decoding for better representation learning and
faster inference. Experimental results on three datasets show that our approach
achieves performance better than or comparable to the baselines, with faster
inference than previous methods based on pre-trained language models. We also
release a new large-scale Chinese knowledge graph dataset, AliopenKG500, for
research purposes. Code and datasets are available at
https://github.com/zjunlp/PromptKGC/tree/main/GenKGC.
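The core idea above, casting knowledge graph completion as text generation, can be illustrated with a minimal sketch; this is not the authors' released code, and the checkpoint name, the triple verbalization format, and the example triple are illustrative assumptions.

```python
# Minimal sketch of seq2seq knowledge graph completion, assuming a BART-style
# encoder-decoder from Hugging Face transformers (NOT the authors' implementation).
from transformers import BartForConditionalGeneration, BartTokenizer

# Assumed checkpoint; in practice the pre-trained LM would first be fine-tuned
# on verbalized (head, relation) -> tail pairs before generation is meaningful.
model_name = "facebook/bart-base"
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# Verbalize the incomplete triple (head, relation, ?) as the source sequence.
# The separator format and the example triple are illustrative assumptions.
head, relation = "Steve Jobs", "founder of"
source = f"{head} | {relation} |"

inputs = tokenizer(source, return_tensors="pt")

# Plain beam search over the decoder vocabulary. GenKGC additionally constrains
# this step (entity-aware hierarchical decoding) so that decoding is restricted
# toward valid entity names, which speeds up inference.
outputs = model.generate(**inputs, num_beams=5, max_length=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

With a fine-tuned model, the generated strings can be matched against the KG's entity vocabulary to rank candidate tail entities for the missing triple.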
Related papers
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z) - Towards Better Dynamic Graph Learning: New Architecture and Unified
Library [29.625205125350313]
DyGFormer is a Transformer-based architecture for dynamic graph learning.
DyGLib is a unified library with standard training pipelines and coding interfaces.
arXiv Detail & Related papers (2023-03-23T05:27:32Z) - Knowledge Graph Generation From Text [18.989264255589806]
We propose a novel end-to-end Knowledge Graph (KG) generation system from textual inputs.
The graph nodes are generated first using a pretrained language model, followed by a simple edge construction head.
We evaluated the model on the recent WebNLG 2020 Challenge dataset, matching state-of-the-art performance on the text-to-RDF generation task.
arXiv Detail & Related papers (2022-11-18T21:27:13Z) - Knowledge Graph Refinement based on Triplet BERT-Networks [0.0]
This paper adopts a transformer-based triplet network that creates an embedding space clustering the information about an entity or relation in the knowledge graph.
It creates textual sequences from facts and fine-tunes a triplet network of pre-trained transformer-based language models.
We show that GilBERT achieves results better than or comparable to the state of the art on these two refinement tasks.
arXiv Detail & Related papers (2022-11-18T19:01:21Z) - DORE: Document Ordered Relation Extraction based on Generative Framework [56.537386636819626]
This paper investigates the root cause of the underwhelming performance of the existing generative DocRE models.
We propose to generate a symbolic and ordered sequence from the relation matrix, which is deterministic and easier for the model to learn.
Experimental results on four datasets show that our proposed method can improve the performance of the generative DocRE models.
arXiv Detail & Related papers (2022-10-28T11:18:10Z) - TripleRE: Knowledge Graph Embeddings via Tripled Relation Vectors [2.8838114325185717]
This paper proposes a novel knowledge graph embedding method named TripleRE with two versions.
The first version of TripleRE divides the relation vector into three parts.
The second version takes advantage of the concept of residuals and achieves better performance.
arXiv Detail & Related papers (2022-09-17T07:42:37Z) - Few-shot Knowledge Graph-to-Text Generation with Pretrained Language
Models [42.38563175680914]
This paper studies how to automatically generate a natural language text that describes the facts in a knowledge graph (KG).
Considering the few-shot setting, we leverage the excellent capacities of pretrained language models (PLMs) in language understanding and generation.
arXiv Detail & Related papers (2021-06-03T06:48:00Z) - Heuristic Semi-Supervised Learning for Graph Generation Inspired by
Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all setups tested, our method boosts the average score of base models by a large margin of 4.7 points and consistently outperforms the state of the art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z) - ENT-DESC: Entity Description Generation by Exploring Knowledge Graph [53.03778194567752]
In practice, the input knowledge can be more than is needed, since the output description may cover only the most significant facts.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z) - Exploiting Structured Knowledge in Text via Graph-Guided Representation
Learning [73.0598186896953]
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)