Enhancing Large Language Model for Knowledge Graph Completion via Structure-Aware Alignment-Tuning
- URL: http://arxiv.org/abs/2509.01166v1
- Date: Mon, 01 Sep 2025 06:38:11 GMT
- Title: Enhancing Large Language Model for Knowledge Graph Completion via Structure-Aware Alignment-Tuning
- Authors: Yu Liu, Yanan Cao, Xixun Lin, Yanmin Shang, Shi Wang, Shirui Pan
- Abstract summary: Knowledge graph completion (KGC) aims to infer new knowledge and make predictions from knowledge graphs. Existing methods often ignore the inconsistent representation spaces between natural language and graph structures. We propose SAT, a novel framework that enhances LLMs for KGC via structure-aware alignment-tuning.
- Score: 52.78024385391959
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graph completion (KGC) aims to infer new knowledge and make predictions from knowledge graphs. Recently, large language models (LLMs) have exhibited remarkable reasoning capabilities. LLM-enhanced KGC methods primarily focus on designing task-specific instructions, achieving promising advancements. However, two critical challenges remain. First, existing methods often ignore the inconsistent representation spaces between natural language and graph structures. Second, most approaches design separate instructions for different KGC tasks, leading to duplicated work and time-consuming processes. To address these challenges, we propose SAT, a novel framework that enhances LLMs for KGC via structure-aware alignment-tuning. Specifically, we first introduce hierarchical knowledge alignment to align graph embeddings with the natural language space through multi-task contrastive learning. Then, we propose structural instruction tuning to guide LLMs in performing structure-aware reasoning over KGs, using a unified graph instruction combined with a lightweight knowledge adapter. Experimental results on two KGC tasks across four benchmark datasets demonstrate that SAT significantly outperforms state-of-the-art methods, especially in the link prediction task, with improvements ranging from 8.7% to 29.8%.
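The abstract names two concrete ingredients: a contrastive alignment stage that pulls graph embeddings toward the language space, and a lightweight knowledge adapter that feeds aligned structure into the LLM. The sketch below illustrates both under stated assumptions; the module names, dimensions, and the plain symmetric InfoNCE objective are illustrative stand-ins, since the paper's exact multi-task losses are not given in the abstract.

```python
import torch
import torch.nn.functional as F

class KnowledgeAdapter(torch.nn.Module):
    """Hypothetical lightweight adapter: projects pretrained graph
    embeddings into the LLM's token-embedding space (dims assumed)."""
    def __init__(self, graph_dim=200, llm_dim=4096):
        super().__init__()
        self.proj = torch.nn.Sequential(
            torch.nn.Linear(graph_dim, llm_dim),
            torch.nn.GELU(),
            torch.nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, graph_emb):              # (batch, graph_dim)
        return self.proj(graph_emb)            # (batch, llm_dim)

def alignment_loss(graph_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE pulling each entity's graph embedding toward
    the embedding of its textual description (in-batch negatives)."""
    g = F.normalize(graph_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = g @ t.T / temperature             # (batch, batch) similarities
    labels = torch.arange(g.size(0), device=g.device)
    return (F.cross_entropy(logits, labels)
            + F.cross_entropy(logits.T, labels)) / 2
```

In this reading, the adapter's soft outputs would be spliced into the unified graph instruction so the same prompt template can serve different KGC tasks; the abstract does not spell out that splicing mechanism, so treat it as an interpretation.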
Related papers
- <SOG_k>: One LLM Token for Explicit Graph Structural Understanding [57.017902343605364]
We propose to incorporate one special token, <SOG_k>, to fully represent the Structure Of Graph within a unified token space. <SOG_k> empowers LLMs to understand, generate, and reason in a concise and accurate manner.
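A minimal sketch of the single-token idea, assuming the usual soft-prompt mechanics: a graph encoder's node states are pooled and projected into one embedding that is prepended to the LLM input as <SOG_k>. The mean pooling and the dimensions are assumptions, not the paper's design.

```python
import torch

class StructureOfGraphToken(torch.nn.Module):
    """Hypothetical: compress a whole graph into one soft token that is
    prepended to the LLM's input embeddings as <SOG_k>."""
    def __init__(self, node_dim=128, llm_dim=4096):
        super().__init__()
        self.proj = torch.nn.Linear(node_dim, llm_dim)

    def forward(self, node_embs, token_embs):
        # node_embs: (num_nodes, node_dim); token_embs: (seq_len, llm_dim)
        sog = self.proj(node_embs.mean(dim=0, keepdim=True))  # (1, llm_dim)
        return torch.cat([sog, token_embs], dim=0)  # <SOG_k> comes first
```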
arXiv Detail & Related papers (2026-02-02T07:55:09Z)
- Enrich-on-Graph: Query-Graph Alignment for Complex Reasoning with LLM Enriching [61.824094419641575]
Large Language Models (LLMs) struggle with hallucinations and factual errors in knowledge-intensive scenarios like knowledge graph question answering (KGQA). We attribute this to the semantic gap between structured knowledge graphs (KGs) and unstructured queries, caused by inherent differences in their focuses and structures. Existing methods usually employ resource-intensive, non-scalable reasoning on vanilla KGs and overlook this gap. We propose a flexible framework, Enrich-on-Graph (EoG), which leverages LLMs' prior knowledge to enrich KGs and bridge the semantic gap between graphs and queries.
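To make the enrichment step concrete, here is a hedged sketch of how an LLM could be prompted to add bridging triples before reasoning; the prompt wording, the `llm` callable, and the line-based parsing are all illustrative assumptions rather than EoG's actual interface.

```python
def enrich_on_graph(query, kg_triples, llm):
    """Hypothetical EoG-style enrichment: ask an LLM (any str -> str
    callable) for extra triples that bridge the query's wording and
    the KG's schema, then merge them into the graph."""
    context = "\n".join(f"({h}, {r}, {t})" for h, r, t in kg_triples)
    prompt = (
        "Knowledge-graph triples:\n"
        f"{context}\n"
        f"Question: {query}\n"
        "List additional (head, relation, tail) triples, one per line, "
        "that would make this question easier to answer over the graph."
    )
    proposed = []
    for line in llm(prompt).strip().splitlines():
        parts = tuple(p.strip(" ()") for p in line.split(","))
        if len(parts) == 3:                 # keep only well-formed triples
            proposed.append(parts)
    return kg_triples + proposed
```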
arXiv Detail & Related papers (2025-09-25T06:48:52Z)
- Guided Navigation in Knowledge-Dense Environments: Structured Semantic Exploration with Guidance Graphs [21.84798899012135]
We propose a novel framework that introduces an intermediate Guidance Graph to bridge unstructured queries and structured knowledge retrieval. The Guidance Graph defines the retrieval space by abstracting the target knowledge's structure while preserving broader semantic context. Our method achieves superior efficiency and outperforms state-of-the-art methods, especially on complex tasks.
arXiv Detail & Related papers (2025-08-06T08:47:57Z)
- Quantizing Text-attributed Graphs for Semantic-Structural Integration [6.721504414917793]
Text-attributed graphs (TAGs) have emerged as a powerful representation for modeling complex relationships across diverse domains. With the rise of large language models (LLMs), there is growing interest in leveraging their capabilities for graph learning. We propose STAG, a novel self-supervised framework that directly quantizes graph structural information into discrete tokens using a frozen codebook.
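The quantization step can be pictured as a nearest-neighbour lookup against the frozen codebook; the sketch below uses a plain L2 rule and assumed shapes, standing in for whatever distance and training signal STAG actually uses.

```python
import torch

def quantize_graph_tokens(node_embs, codebook):
    """Map each node embedding to its nearest frozen-codebook entry,
    yielding discrete 'graph tokens'. (L2 distance is an assumption.)"""
    # node_embs: (num_nodes, d); codebook: (codebook_size, d), frozen
    dists = torch.cdist(node_embs, codebook)   # (num_nodes, codebook_size)
    token_ids = dists.argmin(dim=-1)           # one discrete id per node
    return token_ids, codebook[token_ids]      # ids + quantized vectors
```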
arXiv Detail & Related papers (2025-07-20T09:18:02Z)
- Filter-then-Generate: Large Language Models with Structure-Text Adapter for Knowledge Graph Completion [20.973071287301067]
Large Language Models (LLMs) possess massive inherent knowledge and superior semantic comprehension capabilities. However, empirical evidence suggests that LLMs consistently perform worse than conventional knowledge graph completion approaches. We propose a novel instruction-tuning-based method, namely FtG, to address these challenges.
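A minimal filter-then-generate sketch, assuming a conventional KGC scorer prunes candidates and the LLM picks among the survivors; the prompt format, the candidate count, and the numeric answer parsing are illustrative, not FtG's actual interface.

```python
def filter_then_generate(head, relation, entities, kgc_score, llm, k=20):
    """Hypothetical pipeline: a conventional KGC model filters the
    entity set down to k candidates, then the LLM only has to choose."""
    ranked = sorted(entities, key=lambda e: kgc_score(head, relation, e),
                    reverse=True)[:k]           # filter step
    choices = "\n".join(f"{i}. {e}" for i, e in enumerate(ranked))
    prompt = (f"Complete the triple ({head}, {relation}, ?).\n"
              f"Candidates:\n{choices}\n"
              "Answer with the number of the most plausible tail entity.")
    return ranked[int(llm(prompt).strip())]     # generate (choose) step
```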
arXiv Detail & Related papers (2024-12-12T09:22:04Z)
- Subgraph-Aware Training of Language Models for Knowledge Graph Completion Using Structure-Aware Contrastive Learning [4.741342276627672]
Fine-tuning pre-trained language models (PLMs) has recently shown potential to improve knowledge graph completion (KGC). We propose a Subgraph-Aware Training framework for KGC (SATKGC) with two ideas: (i) subgraph-aware mini-batching to encourage hard negative sampling and to mitigate an imbalance in the frequency of entity occurrences during training, and (ii) new contrastive learning to focus more on harder in-batch negative triples and harder positive triples in terms of the structural properties of the knowledge graph.
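Idea (i) can be sketched as sampling a connected region of the KG and taking its triples as one mini-batch, so in-batch negatives share neighbourhoods and are therefore harder; the breadth-first expansion below is an illustrative stand-in for the paper's actual sampling strategy.

```python
import random

def subgraph_minibatch(adj, triples_of, batch_size, seed_entity):
    """Hypothetical subgraph-aware mini-batching: expand a connected
    region from a seed entity and batch its triples together, so
    in-batch negatives are structurally close (hence hard) and
    frequent entities are not oversampled across batches."""
    visited, frontier = {seed_entity}, [seed_entity]
    while frontier and len(visited) < batch_size:
        node = frontier.pop(0)                  # breadth-first expansion
        for nbr in adj.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(nbr)
    batch = [t for e in visited for t in triples_of.get(e, [])]
    random.shuffle(batch)
    return batch[:batch_size]
```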
arXiv Detail & Related papers (2024-07-17T16:25:37Z)
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
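The distillation step boils down to prompting a teacher LLM to expand each bare triple into descriptive text that the downstream KGC model trains on; this sketch assumes a generic `llm` callable and invents the prompt wording, which the abstract does not specify.

```python
def contextualize_triple(head, relation, tail, llm):
    """Hypothetical contextualization prompt: turn a compact triple
    into a context-rich passage for downstream KGC training."""
    prompt = (f"Triple: ({head}, {relation}, {tail}).\n"
              "Write a short factual paragraph that explains this "
              "relationship and supplies relevant background context.")
    return llm(prompt)     # the passage becomes extra training signal
```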
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- Prompting Disentangled Embeddings for Knowledge Graph Completion with Pre-trained Language Model [36.433231939256395]
Both graph structures and textual information play a critical role in Knowledge Graph Completion (KGC). We propose a new KGC method named PDKGC with two prompts -- a hard task prompt and a disentangled structure prompt. With the two prompts, PDKGC builds a textual predictor and a structural predictor, respectively, and their combination leads to more comprehensive entity prediction.
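The final combination step can be read as merging two distributions over candidate entities; the convex mixture below, with its softmax normalization and fixed weight, is an assumed formulation, since the abstract only says the predictors are combined.

```python
import torch

def combine_predictors(text_logits, struct_logits, alpha=0.5):
    """Hypothetical fusion of PDKGC's two heads: the textual predictor
    (hard task prompt) and the structural predictor (disentangled
    structure prompt) each score all entities; a convex combination
    merges them. alpha is an assumed hyperparameter."""
    p_text = torch.softmax(text_logits, dim=-1)      # (batch, n_entities)
    p_struct = torch.softmax(struct_logits, dim=-1)  # (batch, n_entities)
    return alpha * p_text + (1 - alpha) * p_struct
```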
arXiv Detail & Related papers (2023-12-04T12:20:25Z)
- From Language Modeling to Instruction Following: Understanding the Behavior Shift in LLMs after Instruction Tuning [63.63840740526497]
We investigate how instruction tuning adjusts pre-trained models with a focus on intrinsic changes.
The impact of instruction tuning is then studied by comparing the explanations derived from the pre-trained and instruction-tuned models.
Our findings reveal three significant impacts of instruction tuning.
arXiv Detail & Related papers (2023-09-30T21:16:05Z)
- Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations [70.41385310930846]
We present Structure-CLIP, an end-to-end framework that enhances multi-modal structured representations.
We use scene graphs to guide the construction of semantic negative examples, which results in an increased emphasis on learning structured representations.
A Knowledge-Enhanced Encoder (KEE) is proposed to leverage scene graph knowledge (SGK) as input to further enhance structured representations.
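Scene-graph-guided negatives can be illustrated by swapping the two entities of a relation so the caption keeps its words but loses its structure; the placeholder-based string swap below is an illustration only, not Structure-CLIP's actual construction.

```python
def structural_negative(caption, subject, obj):
    """Hypothetical semantic negative: swap subject and object so only
    the structure changes, e.g. 'a dog chases a cat' becomes
    'a cat chases a dog'."""
    return (caption.replace(subject, "\0")   # park subject in a sentinel
                   .replace(obj, subject)    # object takes subject's place
                   .replace("\0", obj))      # subject takes object's place
```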
arXiv Detail & Related papers (2023-05-06T03:57:05Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
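An entity masking scheme can be pictured as masking whole KG-linked entity mentions instead of random subwords, so the model must recover entities from context; the span format and the BERT-style 15% rate below are assumptions, not the paper's settings.

```python
import random

def mask_entities(tokens, entity_spans, mask_token="[MASK]", rate=0.15):
    """Hypothetical entity-level masking: each KG-linked mention
    (a half-open [start, end) token span) is masked as a unit."""
    tokens = list(tokens)
    for start, end in entity_spans:
        if random.random() < rate:           # mask whole mentions only
            for i in range(start, end):
                tokens[i] = mask_token
    return tokens
```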
arXiv Detail & Related papers (2020-04-29T14:22:42Z)