Structure Pretraining and Prompt Tuning for Knowledge Graph Transfer
- URL: http://arxiv.org/abs/2303.03922v1
- Date: Fri, 3 Mar 2023 02:58:17 GMT
- Title: Structure Pretraining and Prompt Tuning for Knowledge Graph Transfer
- Authors: Wen Zhang, Yushan Zhu, Mingyang Chen, Yuxia Geng, Yufeng Huang, Yajing
Xu, Wenting Song, Huajun Chen
- Abstract summary: We propose a knowledge graph pretraining model KGTransformer.
We pretrain KGTransformer with three self-supervised tasks with sampled sub-graphs as input.
We evaluate KGTransformer on three tasks, triple classification, zero-shot image classification, and question answering.
- Score: 22.8376402253312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graphs (KG) are essential background knowledge providers in many
tasks. When designing models for KG-related tasks, one of the key steps is to
devise the Knowledge Representation and Fusion (KRF) module that learns the
representation of elements from KGs and fuses them with task representations.
However, because the KGs involved and the perspectives to be considered during
fusion differ across tasks, KRF modules are repeatedly designed in an ad hoc
manner for each task. In this paper, we propose a novel knowledge graph pretraining
model KGTransformer that could serve as a uniform KRF module in diverse
KG-related tasks. We pretrain KGTransformer with three self-supervised tasks
with sampled sub-graphs as input. For utilization, we propose a general
prompt-tuning mechanism that regards task data as a triple prompt, allowing
flexible interactions between task KGs and task data. We evaluate the pretrained
KGTransformer on three tasks: triple classification, zero-shot image
classification, and question answering. KGTransformer consistently achieves
better results than specifically designed task models. Through experiments, we
demonstrate that the pretrained KGTransformer can be used off the shelf as a
general and effective KRF module across KG-related tasks. The code and datasets
are available at https://github.com/zjukg/KGTransformer.
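As a rough illustration of the "task data as a triple prompt" idea described in the abstract, the sketch below is our own simplification, not the authors' code (see the linked repository for the real implementation); the function names, special tokens, and sampler are assumptions. It wraps task input as a pseudo-triple and concatenates it with triples from a sampled sub-graph, so one flat Transformer sequence carries both the task KG and the task data.
```python
# Hypothetical sketch of sub-graph sampling plus a triple prompt; not KGTransformer itself.
import random

def sample_subgraph(triples, anchor, max_triples=32):
    """Sample triples that mention the anchor entity (toy sampler, assumption)."""
    related = [t for t in triples if anchor in (t[0], t[2])]
    return random.sample(related, min(max_triples, len(related)))

def build_prompted_sequence(task_input_tokens, subgraph, prompt_relation="[TASK]"):
    """Serialize sub-graph triples plus a task 'triple prompt' into one token sequence."""
    tokens = ["[CLS]"]
    for h, r, t in subgraph:                      # knowledge part of the sequence
        tokens += [h, r, t, "[SEP]"]
    # Task data is treated as the tail of a pseudo-triple so a shared Transformer
    # can attend between task tokens and KG tokens within the same sequence.
    tokens += ["[TASK_HEAD]", prompt_relation] + list(task_input_tokens) + ["[SEP]"]
    return tokens

# Toy usage with a hypothetical task KG and task input.
kg = [("zebra", "has_part", "stripes"), ("zebra", "is_a", "equine"),
      ("equine", "is_a", "mammal")]
seq = build_prompted_sequence(["photo", "of", "an", "animal"],
                              sample_subgraph(kg, "zebra"))
print(seq)  # one flat sequence that a pretrained Transformer encoder could consume
```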
Related papers
- Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains [66.55612528039894]
Knowledge Graphs (KGs) can serve as reliable knowledge sources for question answering (QA).
We present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs.
Experiments across various KGQA tasks with different background KGs demonstrate that DoG achieves superior and robust performance.
arXiv Detail & Related papers (2024-10-24T04:01:40Z) - Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [87.67177556994525]
We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs)
GoG performs reasoning through a Thinking-Searching-Generating framework, which treats LLM as both Agent and KG in IKGQA.
arXiv Detail & Related papers (2024-04-23T04:47:22Z) - Task-Oriented GNNs Training on Large Knowledge Graphs for Accurate and Efficient Modeling [5.460112864687281]
This paper proposes KG-TOSA, an approach to automate the TOSG extraction for task-oriented HGNN training on a large Knowledge Graph (KG)
KG-TOSA helps state-of-the-art HGNN methods reduce training time and memory usage by up to 70% while improving the model performance, e.g., accuracy and inference time.
arXiv Detail & Related papers (2024-03-09T01:17:26Z) - Collaborative Knowledge Graph Fusion by Exploiting the Open Corpus [59.20235923987045]
It is challenging to enrich a Knowledge Graph with newly harvested triples while maintaining the quality of the knowledge representation.
This paper proposes a system to refine a KG using information harvested from an additional corpus.
arXiv Detail & Related papers (2022-06-15T12:16:10Z) - MEKER: Memory Efficient Knowledge Embedding Representation for Link
Prediction and Question Answering [65.62309538202771]
Knowledge Graphs (KGs) are symbolically structured stores of facts.
KG embedding contains concise data used in NLP tasks requiring implicit information about the real world.
We propose a memory-efficient KG embedding model, which yields SOTA-comparable performance on link prediction tasks and KG-based Question Answering.
arXiv Detail & Related papers (2022-04-22T10:47:03Z) - Sequence-to-Sequence Knowledge Graph Completion and Question Answering [8.207403859762044]
We show that an off-the-shelf encoder-decoder Transformer model can serve as a scalable and versatile KGE model.
We achieve this by posing KG link prediction as a sequence-to-sequence task and exchanging the triple-scoring approach taken by prior KGE methods for autoregressive decoding; a minimal sketch of this formulation appears after this list.
arXiv Detail & Related papers (2022-03-19T13:01:49Z) - Identify, Align, and Integrate: Matching Knowledge Graphs to Commonsense
Reasoning Tasks [81.03233931066009]
It is critical to select a knowledge graph (KG) that is well-aligned with the given task's objective.
We show an approach to assess how well a candidate KG can correctly identify and accurately fill in gaps of reasoning for a task.
We show this KG-to-task match in 3 phases: knowledge-task identification, knowledge-task alignment, and knowledge-task integration.
arXiv Detail & Related papers (2021-04-20T18:23:45Z) - Toward Subgraph-Guided Knowledge Graph Question Generation with Graph
Neural Networks [53.58077686470096]
Knowledge graph (KG) question generation (QG) aims to generate natural language questions from KGs and target answers.
In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers.
arXiv Detail & Related papers (2020-04-13T15:43:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.