Dipping PLMs Sauce: Bridging Structure and Text for Effective Knowledge
Graph Completion via Conditional Soft Prompting
- URL: http://arxiv.org/abs/2307.01709v1
- Date: Tue, 4 Jul 2023 13:24:04 GMT
- Title: Dipping PLMs Sauce: Bridging Structure and Text for Effective Knowledge
Graph Completion via Conditional Soft Prompting
- Authors: Chen Chen, Yufei Wang, Aixin Sun, Bing Li and Kwok-Yan Lam
- Abstract summary: This paper proposes CSProm-KG (Conditional Soft Prompts for KGC), which maintains a balance between structural information and textual knowledge.
We verify the effectiveness of CSProm-KG on three popular static KGC benchmarks, WN18RR, FB15K-237 and Wikidata5M, and two temporal KGC benchmarks, ICEWS14 and ICEWS05-15.
- Score: 35.79478289974962
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Graph Completion (KGC) often requires both KG structural and
textual information to be effective. Pre-trained Language Models (PLMs) have
been used to learn the textual information, usually under the fine-tuning
paradigm for the KGC task. However, the fine-tuned PLMs often overwhelmingly
focus on the textual information and overlook structural knowledge. To tackle
this issue, this paper proposes CSProm-KG (Conditional Soft Prompts for KGC),
which maintains a balance between structural information and textual
knowledge. CSProm-KG tunes only the parameters of Conditional Soft Prompts,
which are generated from the entity and relation representations. We verify
the effectiveness of CSProm-KG on three popular static KGC benchmarks,
WN18RR, FB15K-237 and Wikidata5M, and two temporal KGC benchmarks, ICEWS14
and ICEWS05-15. CSProm-KG outperforms competitive baseline models and sets a
new state-of-the-art on these benchmarks. We conduct further analysis to show (i)
the effectiveness of our proposed components, (ii) the efficiency of CSProm-KG,
and (iii) the flexibility of CSProm-KG.
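The core mechanism in the abstract, generating soft prompts from entity and relation representations and tuning only those prompt parameters while the PLM stays frozen, can be illustrated with a short PyTorch sketch. Everything below is an assumption made for illustration (the class name ConditionalPromptGenerator, the prompt length, the WN18RR-sized vocabularies, and the query text format); it is not the authors' CSProm-KG implementation.
```python
# Minimal sketch of conditional soft prompting for KGC (illustrative, not CSProm-KG itself).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ConditionalPromptGenerator(nn.Module):
    """Generate soft prompt vectors conditioned on (entity, relation) embeddings."""
    def __init__(self, num_entities, num_relations, emb_dim, hidden_dim, prompt_len):
        super().__init__()
        self.ent_emb = nn.Embedding(num_entities, emb_dim)
        self.rel_emb = nn.Embedding(num_relations, emb_dim)
        self.proj = nn.Linear(2 * emb_dim, prompt_len * hidden_dim)
        self.prompt_len, self.hidden_dim = prompt_len, hidden_dim

    def forward(self, ent_ids, rel_ids):
        cond = torch.cat([self.ent_emb(ent_ids), self.rel_emb(rel_ids)], dim=-1)
        return self.proj(cond).view(-1, self.prompt_len, self.hidden_dim)

plm = AutoModel.from_pretrained("bert-base-uncased")
for p in plm.parameters():              # freeze the PLM; only the prompt generator trains
    p.requires_grad = False

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
gen = ConditionalPromptGenerator(num_entities=40943, num_relations=11,   # WN18RR-sized (assumed)
                                 emb_dim=256,
                                 hidden_dim=plm.config.hidden_size,
                                 prompt_len=10)

enc = tokenizer("hypernym of: dog", return_tensors="pt")                 # illustrative query text
word_emb = plm.embeddings.word_embeddings(enc["input_ids"])              # (1, seq, hidden)

soft_prompt = gen(torch.tensor([123]), torch.tensor([4]))                # (1, prompt_len, hidden)
inputs_embeds = torch.cat([soft_prompt, word_emb], dim=1)                # prepend prompts
attention_mask = torch.cat(
    [torch.ones(1, gen.prompt_len, dtype=enc["attention_mask"].dtype),
     enc["attention_mask"]], dim=1)

out = plm(inputs_embeds=inputs_embeds, attention_mask=attention_mask)
print(out.last_hidden_state.shape)       # (1, prompt_len + seq, hidden)
```
In this sketch, gradients only reach the embedding tables and the projection inside the prompt generator, which is the sense in which only the conditional soft prompts are tuned.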
Related papers
- Subgraph-Aware Training of Language Models for Knowledge Graph Completion Using Structure-Aware Contrastive Learning [4.741342276627672]
Fine-tuning pre-trained language models (PLMs) has recently shown potential to improve knowledge graph completion (KGC).
We propose a Subgraph-Aware Training framework for KGC (SATKGC) with two ideas: (i) subgraph-aware mini-batching to encourage hard negative sampling and to mitigate an imbalance in the frequency of entity occurrences during training, and (ii) new contrastive learning to focus more on harder in-batch negative triples and harder positive triples in terms of the structural properties of the knowledge graph.
arXiv Detail & Related papers (2024-07-17T16:25:37Z) - KG-FIT: Knowledge Graph Fine-Tuning Upon Open-World Knowledge [63.19837262782962]
Knowledge Graph Embedding (KGE) techniques are crucial in learning compact representations of entities and relations within a knowledge graph.
This study introduces KG-FIT, which builds a semantically coherent hierarchical structure of entity clusters.
Experiments on the benchmark datasets FB15K-237, YAGO3-10, and PrimeKG demonstrate the superiority of KG-FIT over state-of-the-art pre-trained language model-based methods.
arXiv Detail & Related papers (2024-05-26T03:04:26Z) - Multi-perspective Improvement of Knowledge Graph Completion with Large
Language Models [95.31941227776711]
We propose MPIKGC to compensate for the deficiency of contextualized knowledge and improve KGC by querying large language models (LLMs).
We conducted extensive evaluation of our framework based on four description-based KGC models and four datasets, for both link prediction and triplet classification tasks.
arXiv Detail & Related papers (2024-03-04T12:16:15Z) - Contextualization Distillation from Large Language Model for Knowledge
Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-in-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z) - Prompting Disentangled Embeddings for Knowledge Graph Completion with
Pre-trained Language Model [38.00241874974804]
Both graph structures and textual information play a critical role in Knowledge Graph Completion (KGC).
We propose a new KGC method named PDKGC with two prompts -- a hard task prompt and a disentangled structure prompt.
With the two prompts, PDKGC builds a textual predictor and a structural predictor, respectively, and their combination leads to more comprehensive entity prediction.
arXiv Detail & Related papers (2023-12-04T12:20:25Z) - KG-GPT: A General Framework for Reasoning on Knowledge Graphs Using
Large Language Models [18.20425100517317]
We propose KG-GPT, a framework leveraging large language models for tasks employing knowledge graphs.
KG-GPT comprises three steps: Sentence Segmentation, Graph Retrieval, and Inference, each aimed at partitioning sentences, retrieving relevant graph components, and deriving logical conclusions, respectively.
We evaluate KG-GPT using KG-based fact verification and KGQA benchmarks, with the model showing competitive and robust performance, even outperforming several fully-supervised models.
arXiv Detail & Related papers (2023-10-17T12:51:35Z) - Enhancing Text-based Knowledge Graph Completion with Zero-Shot Large Language Models: A Focus on Semantic Enhancement [8.472388165833292]
We introduce a framework termed Constrained Prompts for KGC (CP-KGC).
This framework designs prompts that adapt to different datasets to enhance semantic richness.
This study extends the performance limits of existing models and promotes further integration of KGC with large language models.
arXiv Detail & Related papers (2023-10-12T12:31:23Z) - Knowledge Is Flat: A Seq2Seq Generative Framework for Various Knowledge
Graph Completion [18.581223721903147]
KG-S2S is a Seq2Seq generative framework that can tackle different verbalizable graph structures; a minimal text-to-text verbalization sketch follows this list.
We show that KG-S2S outperforms many competitive baselines.
arXiv Detail & Related papers (2022-09-15T13:49:40Z) - UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding
with Text-to-Text Language Models [170.88745906220174]
We propose the UnifiedSKG framework, which unifies 21 structured knowledge grounding (SKG) tasks into a text-to-text format.
We show that UnifiedSKG achieves state-of-the-art performance on almost all of the 21 tasks.
We also use UnifiedSKG to conduct a series of experiments on structured knowledge encoding variants across SKG tasks.
arXiv Detail & Related papers (2022-01-16T04:36:18Z) - Inductive Learning on Commonsense Knowledge Graph Completion [89.72388313527296]
A commonsense knowledge graph (CKG) is a special type of knowledge graph (KG) where entities are composed of free-form text.
We propose to study the inductive learning setting for CKG completion, where unseen entities may be present at test time.
InductivE significantly outperforms state-of-the-art baselines in both standard and inductive settings on ATOMIC and ConceptNet benchmarks.
arXiv Detail & Related papers (2020-09-19T16:10:26Z)
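As a rough illustration of the text-to-text setup that generative KGC frameworks such as KG-S2S and UnifiedSKG build on, the sketch below verbalizes a link-prediction query into a source/target string pair for a Seq2Seq model. The template wording and the helper name verbalize_tail_query are assumptions made for illustration, not the formats used by either paper.
```python
from typing import Optional, Tuple

# Illustrative only: the template below is an assumption, not the exact
# verbalization used by KG-S2S or UnifiedSKG.
def verbalize_tail_query(head: str, relation: str,
                         tail: Optional[str] = None) -> Tuple[str, str]:
    """Turn a (head, relation, ?) query into a seq2seq (source, target) text pair."""
    source = f"predict tail | head: {head} | relation: {relation}"
    target = tail if tail is not None else ""
    return source, target

src, tgt = verbalize_tail_query("Barack Obama", "place of birth", "Honolulu")
print(src)   # predict tail | head: Barack Obama | relation: place of birth
print(tgt)   # Honolulu
```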
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.