Prompting Disentangled Embeddings for Knowledge Graph Completion with
Pre-trained Language Model
- URL: http://arxiv.org/abs/2312.01837v1
- Date: Mon, 4 Dec 2023 12:20:25 GMT
- Title: Prompting Disentangled Embeddings for Knowledge Graph Completion with
Pre-trained Language Model
- Authors: Yuxia Geng, Jiaoyan Chen, Yuhang Zeng, Zhuo Chen, Wen Zhang, Jeff Z.
Pan, Yuxiang Wang, Xiaoliang Xu
- Abstract summary: Both graph structures and textual information play a critical role in Knowledge Graph Completion (KGC).
We propose a new KGC method named PDKGC with two prompts -- a hard task prompt and a disentangled structure prompt.
With the two prompts, PDKGC builds a textual predictor and a structural predictor, respectively, and their combination leads to more comprehensive entity prediction.
- Score: 38.00241874974804
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Both graph structures and textual information play a critical role in
Knowledge Graph Completion (KGC). With the success of Pre-trained Language
Models (PLMs) such as BERT, they have been applied for text encoding for KGC.
However, current methods mostly fine-tune the PLMs, which incurs high
training costs and scales poorly to larger PLMs. In contrast, we propose
to utilize prompts and perform KGC on a frozen PLM with only the prompts
trained. Accordingly, we propose a new KGC method named PDKGC with two prompts
-- a hard task prompt which is to adapt the KGC task to the PLM pre-training
task of token prediction, and a disentangled structure prompt which learns
disentangled graph representation so as to enable the PLM to combine more
relevant structure knowledge with the text information. With the two prompts,
PDKGC builds a textual predictor and a structural predictor, respectively, and
their combination leads to more comprehensive entity prediction. Thorough
evaluation on two widely used KGC datasets shows that PDKGC often
outperforms the baselines, including the state-of-the-art ones, and that its
components are all effective. Our code and data are available at
https://github.com/genggengcss/PDKGC.
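To make the two-prompt setup concrete, the following is a minimal, illustrative sketch (not the authors' released implementation, which is in the repository linked above): a frozen BERT receives a hard task prompt that casts tail-entity prediction as masked-token prediction, plus trainable soft-prompt vectors standing in for the disentangled structure prompt; a textual predictor reads the [MASK] logits while a structural predictor scores entities from the prompt-conditioned hidden states. The model choice (bert-base-uncased), the toy scoring heads, and all names such as TwoPromptKGC and struct_prompt are assumptions made for illustration only.

```python
# Minimal sketch of the two-prompt idea described in the abstract (hypothetical;
# see https://github.com/genggengcss/PDKGC for the authors' actual code).
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForMaskedLM

class TwoPromptKGC(nn.Module):
    def __init__(self, plm_name="bert-base-uncased", n_entities=100,
                 n_prompt_tokens=8, dim=768):
        super().__init__()
        self.tok = AutoTokenizer.from_pretrained(plm_name)
        self.plm = AutoModelForMaskedLM.from_pretrained(plm_name)
        for p in self.plm.parameters():      # the PLM stays frozen; only prompts train
            p.requires_grad = False
        # stand-in for the disentangled structure prompt: trainable soft-prompt
        # vectors that, in the real method, carry disentangled graph-structure signals
        self.struct_prompt = nn.Parameter(torch.randn(n_prompt_tokens, dim) * 0.02)
        # structural predictor head: scores candidate entities by embedding similarity
        self.entity_emb = nn.Embedding(n_entities, dim)

    def forward(self, head_text, relation_text):
        # hard task prompt: cast tail prediction as the PLM's masked-token prediction
        enc = self.tok(f"{head_text} {relation_text} [MASK].", return_tensors="pt")
        word_emb = self.plm.get_input_embeddings()(enc["input_ids"])       # (1, L, d)
        inputs = torch.cat([self.struct_prompt.unsqueeze(0), word_emb], dim=1)
        attn = torch.cat([torch.ones(1, self.struct_prompt.size(0), dtype=torch.long),
                          enc["attention_mask"]], dim=1)
        out = self.plm(inputs_embeds=inputs, attention_mask=attn,
                       output_hidden_states=True)
        # textual predictor: vocabulary logits at the [MASK] position (toy shortcut;
        # a real system maps entities to verbalized tokens)
        mask_pos = (enc["input_ids"] == self.tok.mask_token_id).nonzero()[0, 1]
        textual_scores = out.logits[0, self.struct_prompt.size(0) + mask_pos]
        # structural predictor: pooled prompt-conditioned states vs. entity embeddings
        pooled = out.hidden_states[-1].mean(dim=1)                         # (1, d)
        structural_scores = pooled @ self.entity_emb.weight.T              # (1, n_ent)
        return textual_scores, structural_scores
```

At inference the two score vectors would be combined over a shared candidate set, e.g. by a weighted sum, before ranking; the exact combination scheme and the disentanglement mechanism are the paper's contributions, and this sketch only mirrors the overall structure.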
Related papers
- Multi-perspective Improvement of Knowledge Graph Completion with Large
Language Models [95.31941227776711]
We propose MPIKGC to compensate for the lack of contextualized knowledge and improve KGC by querying large language models (LLMs).
We conducted extensive evaluation of our framework based on four description-based KGC models and four datasets, for both link prediction and triplet classification tasks.
arXiv Detail & Related papers (2024-03-04T12:16:15Z)
- KICGPT: Large Language Model with Knowledge in Context for Knowledge
Graph Completion [27.405080941584533]
We propose KICGPT, a framework that integrates a large language model and a triple-based KGC retriever.
It alleviates the long-tail problem without incurring additional training overhead.
Empirical results on benchmark datasets demonstrate the effectiveness of KICGPT with smaller training overhead and no finetuning.
arXiv Detail & Related papers (2024-02-04T08:01:07Z)
- Contextualization Distillation from Large Language Model for Knowledge
Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- Does Pre-trained Language Model Actually Infer Unseen Links in Knowledge Graph Completion? [32.645448509968226]
Knowledge graphs (KGs) consist of links that describe relationships between entities.
Knowledge Graph Completion (KGC) is a task that infers unseen relationships between entities in a KG.
Traditional embedding-based KGC methods, such as RESCAL, infer missing links using only the knowledge from training data.
Recent Pre-trained Language Model (PLM)-based KGC utilizes knowledge obtained during pre-training.
arXiv Detail & Related papers (2023-11-15T16:56:49Z)
- Enhancing Text-based Knowledge Graph Completion with Zero-Shot Large Language Models: A Focus on Semantic Enhancement [8.472388165833292]
We introduce a framework termed constrained prompts for KGC (CP-KGC).
This framework designs prompts that adapt to different datasets to enhance semantic richness.
This study extends the performance limits of existing models and promotes further integration of KGC with large language models.
arXiv Detail & Related papers (2023-10-12T12:31:23Z)
- Dipping PLMs Sauce: Bridging Structure and Text for Effective Knowledge
Graph Completion via Conditional Soft Prompting [35.79478289974962]
This paper proposes CSProm-KG (Conditional Soft Prompts for KGC), which maintains a balance between structural information and textual knowledge.
We verify the effectiveness of CSProm-KG on three popular static KGC benchmarks (WN18RR, FB15K-237 and Wikidata5M) and two temporal KGC benchmarks (ICEWS14 and ICEWS05-15).
arXiv Detail & Related papers (2023-07-04T13:24:04Z)
- Deep Bidirectional Language-Knowledge Graph Pretraining [159.9645181522436]
DRAGON is a self-supervised approach to pretraining a deeply joint language-knowledge foundation model from text and KG at scale.
Our model takes pairs of text segments and relevant KG subgraphs as input and bidirectionally fuses information from both modalities.
arXiv Detail & Related papers (2022-10-17T18:02:52Z)
- Supporting Vision-Language Model Inference with Confounder-pruning Knowledge Prompt [71.77504700496004]
Vision-language models are pre-trained by aligning image-text pairs in a common space to deal with open-set visual concepts.
To boost the transferability of the pre-trained models, recent works adopt fixed or learnable prompts.
However, how and what prompts can improve inference performance remains unclear.
arXiv Detail & Related papers (2022-05-23T07:51:15Z)
- Knowledgeable Salient Span Mask for Enhancing Language Models as
Knowledge Base [51.55027623439027]
We develop two solutions to help the model learn more knowledge from unstructured text in a fully self-supervised manner.
To the best of our knowledge, we are the first to explore fully self-supervised learning of knowledge in continual pre-training.
arXiv Detail & Related papers (2022-04-17T12:33:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.