Beyond Textual Context: Structural Graph Encoding with Adaptive Space Alignment to alleviate the hallucination of LLMs
- URL: http://arxiv.org/abs/2509.22251v1
- Date: Fri, 26 Sep 2025 12:14:01 GMT
- Title: Beyond Textual Context: Structural Graph Encoding with Adaptive Space Alignment to alleviate the hallucination of LLMs
- Authors: Yifang Zhang, Pengfei Duan, Yiwen Yang, Shengwu Xiong
- Abstract summary: SSKG-LLM is an innovative model architecture that efficiently integrates both the structural and semantic information of KGs into the reasoning processes of Large Language Models. We conduct extensive experiments and provide a detailed analysis to explore how incorporating the structural information of KGs can enhance the factual reasoning abilities of LLMs.
- Score: 15.260879306368674
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Currently, the main approach for Large Language Models (LLMs) to tackle the hallucination issue is to incorporate Knowledge Graphs (KGs). However, LLMs typically treat KGs as plain text, extracting only semantic information and limiting their use of the crucial structural aspects of KGs. Another challenge is the gap between the embedding spaces of KG encoders and LLM text embeddings, which hinders the effective integration of structured knowledge. To overcome these obstacles, we put forward SSKG-LLM, an innovative model architecture designed to efficiently integrate both the Structural and Semantic information of KGs into the reasoning processes of LLMs. SSKG-LLM incorporates the Knowledge Graph Retrieval (KGR) module and the Knowledge Graph Encoding (KGE) module to preserve semantics while utilizing structure. Then, the Knowledge Graph Adaptation (KGA) module is incorporated to enable LLMs to understand KG embeddings. We conduct extensive experiments and provide a detailed analysis to explore how incorporating the structural information of KGs can enhance the factual reasoning abilities of LLMs. Our code is available at https://github.com/yfangZhang/SSKG-LLM.
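To make the described pipeline concrete, here is a minimal sketch of the core alignment idea: a KG encoder produces triple embeddings, and an adapter projects them into the LLM's token-embedding space so they can be consumed as soft prompt tokens alongside the text. All module designs, dimensions, and names below are illustrative assumptions, not the authors' released implementation (see the repository linked above for that).

```python
# Hedged sketch of KG-to-LLM embedding alignment; every design choice
# here (encoder, adapter depth, dimensions) is an assumption for
# illustration, not the SSKG-LLM implementation.
import torch
import torch.nn as nn

class ToyKGEncoder(nn.Module):
    """Stand-in for a KGE-style module: embeds (head, relation, tail) ids.

    A real system would use a graph encoder so that the output reflects
    KG structure, not just triple identity."""
    def __init__(self, num_entities: int, num_relations: int, dim: int = 256):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)

    def forward(self, triples: torch.Tensor) -> torch.Tensor:
        # triples: (batch, 3) integer ids -> (batch, dim) triple embeddings
        h, r, t = triples[:, 0], triples[:, 1], triples[:, 2]
        return self.ent(h) + self.rel(r) + self.ent(t)

class KGAdapter(nn.Module):
    """Stand-in for a KGA-style module: projects KG embeddings into the
    LLM's token-embedding space so they act as soft prompt tokens."""
    def __init__(self, kg_dim: int = 256, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(kg_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, kg_emb: torch.Tensor) -> torch.Tensor:
        return self.proj(kg_emb)  # (batch, llm_dim): one soft token per triple

encoder, adapter = ToyKGEncoder(1000, 50), KGAdapter()
triples = torch.tensor([[1, 3, 7], [4, 3, 9]])        # from a KGR-style retriever
soft_tokens = adapter(encoder(triples)).unsqueeze(0)  # (1, 2, 4096)
text_embeds = torch.randn(1, 10, 4096)                # stand-in for LLM token embeddings
inputs_embeds = torch.cat([soft_tokens, text_embeds], dim=1)
print(inputs_embeds.shape)  # torch.Size([1, 12, 4096])
```

Feeding the projected embeddings in as `inputs_embeds` rather than verbalized text is what lets structural information reach the model without being flattened into a string, which is exactly the gap the abstract identifies.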
Related papers
- Knowledge Reasoning Language Model: Unifying Knowledge and Language for Inductive Knowledge Graph Reasoning [47.967495648005986]
We propose a Knowledge Reasoning Language Model (KRLM) that achieves unified coordination between LLM knowledge and KG context. Extensive experimental results on 25 real-world inductive KGR datasets demonstrate the significant superiority of the proposed KRLM.
arXiv Detail & Related papers (2025-10-15T02:11:58Z)
- Are Large Language Models Effective Knowledge Graph Constructors? [26.60279256406507]
Knowledge graphs (KGs) are vital for knowledge-intensive tasks and have shown promise in reducing hallucinations in large language models (LLMs). We propose a hierarchical extraction framework that organizes information at multiple levels, enabling the creation of semantically rich and well-structured KGs. Using state-of-the-art LLMs, we extract and construct knowledge graphs and evaluate them comprehensively from both structural and semantic perspectives.
arXiv Detail & Related papers (2025-10-13T11:37:48Z)
- Enhancing Large Language Models with Reliable Knowledge Graphs [0.6345523830122166]
Knowledge Graphs offer a promising solution to ground Large Language Models in verified knowledge. However, their potential remains constrained by inherent noise, incompleteness, and the complexity of integrating their rigid structure with the flexible reasoning of LLMs. This thesis addresses these limitations through a cohesive framework that enhances LLMs by refining and leveraging reliable KGs.
arXiv Detail & Related papers (2025-06-16T07:43:18Z)
- ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM [3.864321514889099]
ClaimPKG is an end-to-end framework that seamlessly integrates LLM reasoning with structured knowledge from knowledge graphs (KGs). ClaimPKG achieves state-of-the-art performance, outperforming strong baselines in this research field by 9%-12% accuracy points across multiple categories.
arXiv Detail & Related papers (2025-05-28T16:34:14Z)
- LightPROF: A Lightweight Reasoning Framework for Large Language Model on Knowledge Graph [57.382255728234064]
Large Language Models (LLMs) have impressive capabilities in text understanding and zero-shot reasoning. Knowledge Graphs (KGs) provide rich and reliable contextual information for the reasoning process of LLMs. We propose a novel Lightweight and efficient Prompt learning-ReasOning Framework for KGQA (LightPROF).
arXiv Detail & Related papers (2025-04-04T03:03:47Z)
- Enhancing Large Language Models (LLMs) for Telecommunications using Knowledge Graphs and Retrieval-Augmented Generation [52.8352968531863]
Large language models (LLMs) have made significant progress in general-purpose natural language processing tasks. This paper presents a novel framework that combines knowledge graph (KG) and retrieval-augmented generation (RAG) techniques to enhance LLM performance in the telecom domain.
arXiv Detail & Related papers (2025-03-31T15:58:08Z)
- GLTW: Joint Improved Graph Transformer and LLM via Three-Word Language for Knowledge Graph Completion [52.026016846945424]
We propose a new method called GLTW, which encodes the structural information of KGs and merges it with Large Language Models. Specifically, we introduce an improved Graph Transformer (iGT) that effectively encodes subgraphs with both local and global structural information. We also develop a subgraph-based multi-classification training objective, using all entities within the KG as classification objects, to boost learning efficiency.
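The "subgraph-based multi-classification training objective" can be illustrated with a short, hedged sketch: treat every entity in the KG as a class and score an encoded query against all of them with a cross-entropy loss. The encoder output, entity table, and shapes below are assumptions for illustration, not GLTW's implementation.

```python
# Hedged sketch: KG completion as classification over all entities.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_entities, dim = 10_000, 128
entity_table = nn.Embedding(num_entities, dim)  # every entity is a "class"

def completion_loss(query_repr: torch.Tensor, target_entity: torch.Tensor) -> torch.Tensor:
    """Score an encoded (head, relation) query against every entity and
    apply cross-entropy against the true tail entity."""
    logits = query_repr @ entity_table.weight.T  # (batch, num_entities)
    return F.cross_entropy(logits, target_entity)

query = torch.randn(4, dim)                    # stand-in for iGT output
target = torch.randint(0, num_entities, (4,))  # gold tail entity ids
print(completion_loss(query, target))
```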
arXiv Detail & Related papers (2025-02-17T06:02:59Z)
- Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains [66.55612528039894]
Knowledge Graphs (KGs) can serve as reliable knowledge sources for question answering (QA).
We present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs.
Experiments across various KGQA tasks with different background KGs demonstrate that DoG achieves superior and robust performance.
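A hedged sketch of the "well-formed chains" idea: restrict each generation step to edges that actually exist in the KG, so every produced chain is grounded by construction. The toy graph, greedy search, and scoring hook below are assumptions, not the DoG authors' decoding algorithm.

```python
# Hedged sketch of KG-constrained chain decoding: candidates at each
# step are limited to real KG edges, and an external scorer (standing in
# for LLM token probabilities) picks among them.
from typing import Callable

KG = {  # adjacency list: entity -> [(relation, neighbor), ...]
    "Paris": [("capital_of", "France"), ("located_in", "Europe")],
    "France": [("currency", "Euro"), ("continent", "Europe")],
}

def decode_chain(start: str, steps: int,
                 score: Callable[[str, str, str], float]) -> list[tuple[str, str, str]]:
    """Greedily extend a chain, choosing only among valid KG edges."""
    chain, node = [], start
    for _ in range(steps):
        candidates = KG.get(node, [])
        if not candidates:
            break  # no outgoing edges: stop rather than hallucinate a step
        rel, nxt = max(candidates, key=lambda e: score(node, e[0], e[1]))
        chain.append((node, rel, nxt))
        node = nxt
    return chain

# Toy scorer; a real system would score candidates with the LLM.
print(decode_chain("Paris", 2, score=lambda h, r, t: len(t)))
```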
arXiv Detail & Related papers (2024-10-24T04:01:40Z)
- KG-RAG: Bridging the Gap Between Knowledge and Creativity [0.0]
Large Language Model Agents (LMAs) face issues such as information hallucinations, catastrophic forgetting, and limitations in processing long contexts.
This paper introduces a KG-RAG (Knowledge Graph-Retrieval Augmented Generation) pipeline to enhance the knowledge capabilities of LMAs.
Preliminary experiments on the ComplexWebQuestions dataset demonstrate notable improvements in the reduction of hallucinated content.
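The KG-RAG pattern summarized above can be sketched in a few lines: retrieve triples relevant to the question, verbalize them, and prepend them to the prompt before generation. The keyword-overlap retriever and prompt template below are toy assumptions, not the paper's pipeline.

```python
# Hedged sketch of a KG-RAG prompting step with a toy triple store.
TRIPLES = [
    ("Eiffel_Tower", "located_in", "Paris"),
    ("Paris", "capital_of", "France"),
    ("Louvre", "located_in", "Paris"),
]

def retrieve(question: str, k: int = 2) -> list[tuple[str, str, str]]:
    """Rank triples by word overlap with the question (toy retriever)."""
    q = set(question.lower().split())
    def overlap(t: tuple[str, str, str]) -> int:
        words = " ".join(t).replace("_", " ").lower().split()
        return len(q & set(words))
    return sorted(TRIPLES, key=overlap, reverse=True)[:k]

def build_prompt(question: str) -> str:
    facts = "\n".join(f"{h} {r} {t}".replace("_", " ") for h, r, t in retrieve(question))
    return f"Facts:\n{facts}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("Which country is the Eiffel Tower in?"))
```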
arXiv Detail & Related papers (2024-05-20T14:03:05Z)
- Large Language Models Can Better Understand Knowledge Graphs Than We Thought [13.336418752729987]
We study how large language models (LLMs) process and interpret knowledge graphs (KGs). At the literal level, we reveal LLMs' preferences for various input formats. At the attention distribution level, we discuss the underlying mechanisms driving these preferences.
arXiv Detail & Related papers (2024-02-18T10:44:03Z)
- Making Large Language Models Perform Better in Knowledge Graph Completion [42.175953129260236]
Large language model (LLM) based knowledge graph completion (KGC) aims to predict the missing triples in the KGs with LLMs.
In this paper, we explore methods to incorporate structural information into the LLMs, with the overarching goal of facilitating structure-aware reasoning.
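One simple baseline for exposing structure to the LLM, shown below as a hedged sketch, is to serialize the head entity's one-hop neighborhood into the prompt before asking for the missing element. The toy neighborhood and template are assumptions and not necessarily this paper's method, which the summary leaves unspecified.

```python
# Hedged sketch: structure-aware prompting for KG completion (KGC).
NEIGHBORS = {  # toy one-hop neighborhood: head -> [(relation, tail), ...]
    "Alan_Turing": [("field", "Computer_Science"), ("born_in", "London")],
}

def kgc_prompt(head: str, relation: str) -> str:
    """Build a completion prompt that includes the head's known triples."""
    context = "; ".join(f"({head}, {r}, {t})" for r, t in NEIGHBORS.get(head, []))
    return (f"Known triples about {head}: {context}\n"
            f"Complete the triple: ({head}, {relation}, ?)")

print(kgc_prompt("Alan_Turing", "educated_at"))
```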
arXiv Detail & Related papers (2023-10-10T14:47:09Z)
- Unifying Large Language Models and Knowledge Graphs: A Roadmap [61.824618473293725]
Large language models (LLMs) are making new waves in the field of natural language processing and artificial intelligence.
Knowledge Graphs (KGs), such as Wikipedia and Huapu, are structured knowledge models that explicitly store rich factual knowledge.
arXiv Detail & Related papers (2023-06-14T07:15:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.