From Discrimination to Generation: Knowledge Graph Completion with
Generative Transformer
- URL: http://arxiv.org/abs/2202.02113v2
- Date: Tue, 8 Feb 2022 14:07:30 GMT
- Title: From Discrimination to Generation: Knowledge Graph Completion with
Generative Transformer
- Authors: Xin Xie, Ningyu Zhang, Zhoubo Li, Shumin Deng, Hui Chen, Feiyu Xiong,
Mosha Chen, Huajun Chen
- Abstract summary: We present GenKGC, an approach that converts knowledge graph completion into a sequence-to-sequence generation task with a pre-trained language model.
We introduce relation-guided demonstration and entity-aware hierarchical decoding for better representation learning and fast inference.
We also release AliopenKG500, a new large-scale Chinese knowledge graph dataset, for research purposes.
- Score: 41.69537736842654
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graph completion aims to address the problem of extending a
KG with missing triples. In this paper, we present GenKGC, an approach that
converts knowledge graph completion into a sequence-to-sequence generation task
with a pre-trained language model. We further introduce relation-guided
demonstration and entity-aware hierarchical decoding for better representation
learning and faster inference. Experimental results on three datasets show that
our approach obtains performance better than or comparable to baselines, with
faster inference than previous methods based on pre-trained language models. We
also release AliopenKG500, a new large-scale Chinese knowledge graph dataset,
for research purposes. Code and datasets are available at
https://github.com/zjunlp/PromptKGC/tree/main/GenKGC.
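The sequence-to-sequence framing itself is simple to illustrate: a (head, relation, ?) query is serialized to text, and an encoder-decoder language model generates candidate tail entities. The sketch below shows this framing only; the checkpoint, prompt serialization, and decoding settings are assumptions, a usable model would first be fine-tuned on serialized training triples, and the repository above holds the authors' actual implementation.

```python
# Minimal sketch of KG completion as seq2seq generation.
# Assumptions: BART-style backbone, "[SEP]"-joined query format,
# beam search for candidate tails. Not the released GenKGC code.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def complete_triple(head: str, relation: str, k: int = 5):
    """Generate k candidate tail entities for an incomplete triple (h, r, ?)."""
    query = f"{head} [SEP] {relation}"  # assumed serialization format
    inputs = tokenizer(query, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        num_beams=k,
        num_return_sequences=k,
        max_length=32,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

# Without task-specific fine-tuning the outputs are meaningless;
# the call only demonstrates the generation-based interface.
print(complete_triple("Barack Obama", "place of birth"))
```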
Related papers
- UnitCoder: Scalable Iterative Code Synthesis with Unit Test Guidance [65.01483640267885]
Large Language Models (LLMs) have demonstrated remarkable capabilities in various tasks, yet code generation remains a major challenge.
We introduce UnitCoder, a systematic pipeline leveraging model-generated unit tests to guide and validate the code generation process.
Our work presents a scalable approach that leverages model-generated unit tests to guide the synthesis of high-quality code data from pre-training corpora.
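The validation loop this kind of pipeline rests on fits in a few lines: execute each candidate implementation against its unit tests and keep only the candidates that pass. The sketch below is a hedged illustration with hypothetical helper names; UnitCoder's actual pipeline additionally mines code from pre-training corpora and refines failing candidates iteratively.

```python
# Hedged sketch of unit-test-guided filtering of generated code.
# A real pipeline would sandbox execution; exec() here is for brevity.

def passes_tests(candidate_src: str, test_src: str) -> bool:
    """Run a candidate and its unit tests in a shared namespace."""
    namespace: dict = {}
    try:
        exec(candidate_src, namespace)  # define the candidate function
        exec(test_src, namespace)       # assertions raise on failure
        return True
    except Exception:
        return False

candidates = [
    "def add(a, b):\n    return a - b",  # buggy sample
    "def add(a, b):\n    return a + b",  # correct sample
]
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"

validated = [c for c in candidates if passes_tests(c, tests)]
print(f"{len(validated)} of {len(candidates)} candidates passed")  # 1 of 2
```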
arXiv Detail & Related papers (2025-02-17T05:37:02Z)
- Efficient Relational Context Perception for Knowledge Graph Completion [25.903926643251076]
Knowledge Graphs (KGs) provide a structured representation of knowledge but often suffer from incompleteness.
Previous knowledge graph embedding models are limited in their ability to capture expressive features.
We propose a Triple Receptance Perception architecture that models sequential information, enabling the learning of dynamic context.
arXiv Detail & Related papers (2024-12-31T11:25:58Z)
- Towards Better Dynamic Graph Learning: New Architecture and Unified Library [29.625205125350313]
DyGFormer is a Transformer-based architecture for dynamic graph learning.
DyGLib is a unified library with standard training pipelines and coding interfaces.
arXiv Detail & Related papers (2023-03-23T05:27:32Z)
- Knowledge Graph Generation From Text [18.989264255589806]
We propose a novel end-to-end Knowledge Graph (KG) generation system from textual inputs.
The graph nodes are generated first using a pretrained language model, followed by a simple edge construction head.
We evaluated the model on the recent WebNLG 2020 Challenge dataset, matching state-of-the-art performance on the text-to-RDF generation task.
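A rough sketch of that two-stage design, nodes generated first and edges predicted by a lightweight head over node pairs, is given below. The embedding dimension, relation inventory, and the stubbed node generator are illustrative assumptions, not the paper's verified architecture.

```python
# Sketch: stage 1 produces node embeddings (stubbed with random
# tensors standing in for a pretrained generator's outputs);
# stage 2 scores every ordered node pair with an "edge head".
import torch
import torch.nn as nn

NUM_RELATIONS = 4  # assumed inventory, including a "no edge" class

class EdgeHead(nn.Module):
    """Classify the relation (or absence of one) between two nodes."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, NUM_RELATIONS)
        )

    def forward(self, head_emb, tail_emb):
        return self.mlp(torch.cat([head_emb, tail_emb], dim=-1))

nodes = torch.randn(3, 64)  # 3 "generated" nodes, 64-dim embeddings
edge_head = EdgeHead()
for i in range(len(nodes)):
    for j in range(len(nodes)):
        if i != j:
            relation_id = edge_head(nodes[i], nodes[j]).argmax().item()
            print(i, j, relation_id)
```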
arXiv Detail & Related papers (2022-11-18T21:27:13Z)
- DORE: Document Ordered Relation Extraction based on Generative Framework [56.537386636819626]
This paper investigates the root cause of the underwhelming performance of the existing generative DocRE models.
We propose to generate a symbolic and ordered sequence from the relation matrix, which is deterministic and easier for the model to learn.
Experimental results on four datasets show that our proposed method can improve the performance of the generative DocRE models.
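The linearization idea is easy to make concrete: scanning the relation matrix in a fixed (here row-major) order yields exactly one symbolic target sequence per document, which is what makes the target deterministic. The entity and relation tokens below are hypothetical placeholders, not DORE's actual vocabulary.

```python
# Sketch: deterministic linearization of a relation matrix into
# an ordered symbolic sequence for a generative model to learn.

def linearize(relation_matrix, entities, relations):
    """Row-major scan produces one fixed target sequence."""
    tokens = []
    for i, row in enumerate(relation_matrix):
        for j, rel_id in enumerate(row):
            if rel_id is not None:
                tokens += [entities[i], relations[rel_id], entities[j]]
    return " ".join(tokens)

entities = ["[E0]", "[E1]", "[E2]"]
relations = ["founded_by", "located_in"]
matrix = [
    [None, 0, None],     # (E0, founded_by, E1)
    [None, None, 1],     # (E1, located_in, E2)
    [None, None, None],
]
print(linearize(matrix, entities, relations))
# -> "[E0] founded_by [E1] [E1] located_in [E2]"
```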
arXiv Detail & Related papers (2022-10-28T11:18:10Z)
- TripleRE: Knowledge Graph Embeddings via Tripled Relation Vectors [2.8838114325185717]
This paper proposes a novel knowledge graph embedding method named TripleRE with two versions.
The first version of TripleRE divides the relation vector into three parts.
The second version builds on the concept of residuals and achieves better performance.
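One plausible reading of the three-part split is that the relation contributes a head projection, a tail projection, and a translation term, scored by a distance in embedding space. The sketch below follows that reading; it is an assumption inferred from the summary, not TripleRE's verified equations.

```python
# Hedged sketch of a "relation vector in three parts" score:
# element-wise head/tail projections plus a translation component.
# Lower scores would indicate more plausible (h, r, t) triples.
import numpy as np

def three_part_score(h, t, r_head, r_tail, r_mid):
    return np.linalg.norm(h * r_head - t * r_tail + r_mid, ord=1)

rng = np.random.default_rng(0)
dim = 8
h, t = rng.normal(size=dim), rng.normal(size=dim)
r_head, r_tail, r_mid = (rng.normal(size=dim) for _ in range(3))
print(three_part_score(h, t, r_head, r_tail, r_mid))
```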
arXiv Detail & Related papers (2022-09-17T07:42:37Z)
- Few-shot Knowledge Graph-to-Text Generation with Pretrained Language Models [42.38563175680914]
This paper studies how to automatically generate a natural language text that describes the facts in a knowledge graph (KG).
Considering the few-shot setting, we leverage the strong capabilities of pretrained language models (PLMs) in language understanding and generation.
arXiv Detail & Related papers (2021-06-03T06:48:00Z)
- Heuristic Semi-Supervised Learning for Graph Generation Inspired by Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all setups tested, our method boosts the average score of base models by a large margin of 4.7 points and consistently outperforms the state of the art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z)
- ENT-DESC: Entity Description Generation by Exploring Knowledge Graph [53.03778194567752]
In practice, the input knowledge is often more than is needed, since the output description may cover only the most significant facts.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
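A simplified sketch of an entity-level masking scheme: whole spans that link to KG entities are masked instead of random subwords, so recovering them forces the model to use entity knowledge. The toy entity set and plain string matching below stand in for a real entity linker over the pre-training corpus.

```python
# Sketch: mask entity spans (one [MASK] per word) rather than
# random tokens. KG_ENTITIES is a toy stand-in for linked entities.

KG_ENTITIES = {"Marie Curie", "Nobel Prize"}

def entity_mask(sentence: str) -> str:
    masked = sentence
    for entity in KG_ENTITIES:
        if entity in masked:
            masked = masked.replace(
                entity, " ".join(["[MASK]"] * len(entity.split()))
            )
    return masked

print(entity_mask("Marie Curie won the Nobel Prize in 1903."))
# -> "[MASK] [MASK] won the [MASK] [MASK] in 1903."
```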
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.