Injecting Knowledge Graphs into Large Language Models
- URL: http://arxiv.org/abs/2505.07554v1
- Date: Mon, 12 May 2025 13:31:26 GMT
- Title: Injecting Knowledge Graphs into Large Language Models
- Authors: Erica Coppolillo,
- Abstract summary: We build on encoding techniques that integrate graph embeddings within the LLM input as tokens. Our approach is model-agnostic, resource-efficient, and compatible with any LLM.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Integrating structured knowledge from Knowledge Graphs (KGs) into Large Language Models (LLMs) remains a key challenge for symbolic reasoning. Existing methods mainly rely on prompt engineering or fine-tuning, which either lose structural fidelity or incur high computational costs. Building on recent encoding techniques that integrate graph embeddings within the LLM input as tokens, we extend this paradigm to the KG domain by leveraging Knowledge Graph Embedding (KGE) models, thus enabling graph-aware reasoning. Our approach is model-agnostic, resource-efficient, and compatible with any LLM. Extensive experimentation on synthetic and real-world datasets shows that our method improves reasoning performance over established baselines and achieves the best accuracy-efficiency trade-off against state-of-the-art LLMs.
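The core mechanism can be pictured as a soft-prompt-style injection: entity vectors from a pretrained KGE model are projected into the LLM's embedding space and prepended to the input as extra "graph tokens". The snippet below is a minimal sketch of that idea, assuming PyTorch and Hugging Face Transformers; the class name GraphTokenProjector, the KGE dimension, the random vectors standing in for real KGE output, and the use of GPT-2 are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumptions, not the authors' code): pretrained KGE vectors for the
# entities mentioned in a query are linearly projected into the LLM's embedding space
# and prepended to the token embeddings as "graph tokens".
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

class GraphTokenProjector(nn.Module):
    """Maps KGE vectors (e.g. from TransE/RotatE) into the LLM hidden size."""
    def __init__(self, kge_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Linear(kge_dim, llm_dim)

    def forward(self, kge_vectors: torch.Tensor) -> torch.Tensor:
        # (num_entities, kge_dim) -> (num_entities, llm_dim)
        return self.proj(kge_vectors)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
llm = AutoModelForCausalLM.from_pretrained("gpt2")
projector = GraphTokenProjector(kge_dim=200, llm_dim=llm.config.hidden_size)

# Hypothetical KGE vectors for two entities relevant to the question.
kge_vectors = torch.randn(2, 200)
graph_tokens = projector(kge_vectors).unsqueeze(0)            # (1, 2, hidden)

prompt = "Is entity A connected to entity B?"
ids = tokenizer(prompt, return_tensors="pt")
text_embeds = llm.get_input_embeddings()(ids["input_ids"])    # (1, seq, hidden)

# Prepend the graph tokens; typically only the projector (and optionally the LLM) is trained.
inputs_embeds = torch.cat([graph_tokens, text_embeds], dim=1)
attention_mask = torch.ones(inputs_embeds.shape[:2], dtype=torch.long)
outputs = llm(inputs_embeds=inputs_embeds, attention_mask=attention_mask)
```

Because the graph information enters through the embedding layer rather than the prompt text, this style of injection stays model-agnostic: any decoder that accepts precomputed input embeddings can consume the same projected KGE vectors.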
Related papers
- Ontology-Enhanced Knowledge Graph Completion using Large Language Models [20.080012331845065]
Large Language Models (LLMs) have been extensively adopted in Knowledge Graph Completion (KGC). We propose an enhanced KGC method using LLMs -- OL-KGC. It first leverages neural perceptual mechanisms to effectively embed structural information into the textual space, and then uses an automated extraction algorithm to retrieve ontological knowledge.
arXiv Detail & Related papers (2025-07-28T09:00:48Z)
- Quantizing Text-attributed Graphs for Semantic-Structural Integration [6.721504414917793]
Text-attributed graphs (TAGs) have emerged as a powerful representation for modeling complex relationships across diverse domains. With the rise of large language models (LLMs), there is growing interest in leveraging their capabilities for graph learning. We propose STAG, a novel self-supervised framework that directly quantizes graph structural information into discrete tokens using a frozen codebook (a minimal sketch of this quantization step appears after this list).
arXiv Detail & Related papers (2025-07-20T09:18:02Z)
- Learning Efficient and Generalizable Graph Retriever for Knowledge-Graph Question Answering [75.12322966980003]
Large Language Models (LLMs) have shown strong inductive reasoning ability across various domains. Most existing RAG pipelines rely on unstructured text, limiting interpretability and structured reasoning. Recent studies have explored integrating knowledge graphs with LLMs for knowledge graph question answering. We propose RAPL, a novel framework for efficient and effective graph retrieval in KGQA.
arXiv Detail & Related papers (2025-06-11T12:03:52Z)
- GraphOmni: A Comprehensive and Extendable Benchmark Framework for Large Language Models on Graph-theoretic Tasks [15.147178364098034]
We present GraphOmni, a benchmark framework for evaluating the graph reasoning capabilities of LLMs. Our findings emphasize that no single serialization or prompting strategy consistently outperforms the others. Motivated by these insights, we propose a reinforcement learning-based approach that dynamically selects the best serialization-prompt pairings.
arXiv Detail & Related papers (2025-04-17T09:01:16Z)
- Grounding LLM Reasoning with Knowledge Graphs [4.279373869671241]
We propose integrating reasoning strategies with Knowledge Graphs to anchor every step or "thought" of the reasoning chains in KG data. We evaluate both agentic and automated search methods across several reasoning strategies, including Chain-of-Thought (CoT), Tree-of-Thought (ToT), and Graph-of-Thought (GoT). Our experiments demonstrate that this approach consistently outperforms baseline models.
arXiv Detail & Related papers (2025-02-18T19:20:46Z)
- In-Context Learning with Topological Information for Knowledge Graph Completion [3.035601871864059]
We develop a novel method that incorporates topological information through in-context learning to enhance knowledge graph completion performance. Our approach achieves strong performance in the transductive setting, i.e., where nodes in the test graph are present in the training graph. Our method demonstrates superior performance compared to baselines on the ILPC-small and ILPC-large datasets.
arXiv Detail & Related papers (2024-12-11T19:29:36Z)
- Comprehending Knowledge Graphs with Large Language Models for Recommender Systems [13.270018897057293]
We propose a novel method called CoLaKG, which leverages LLMs to comprehend knowledge graphs for recommendation. By employing item-centered subgraph extraction and prompt engineering, it can accurately understand the local information of each item. In addition, through a semantic-based retrieval module, each item is enriched by related items from the entire knowledge graph.
arXiv Detail & Related papers (2024-10-16T04:44:34Z)
- How to Make LLMs Strong Node Classifiers? [70.14063765424012]
Language Models (LMs) are challenging the dominance of domain-specific models, such as Graph Neural Networks (GNNs) and Graph Transformers (GTs). We propose a novel approach that empowers off-the-shelf LMs to achieve performance comparable to state-of-the-art (SOTA) GNNs on node classification tasks.
arXiv Detail & Related papers (2024-10-03T08:27:54Z)
- Bridging LLMs and KGs without Fine-Tuning: Intermediate Probing Meets Subgraph-Aware Entity Descriptions [49.36683223327633]
Large Language Models (LLMs) encapsulate extensive world knowledge and exhibit powerful context modeling capabilities. We propose a novel framework that synergizes the strengths of LLMs with robust knowledge representation to enable effective and efficient KGC. We achieve a 47% relative improvement over previous methods based on non-fine-tuned LLMs and, to our knowledge, are the first to achieve classification performance comparable to fine-tuned LLMs.
arXiv Detail & Related papers (2024-08-13T10:15:55Z)
- Proficient Graph Neural Network Design by Accumulating Knowledge on Large Language Models [20.31388126105889]
DesiGNN is a knowledge-centered framework that converts past model design experiences into structured, fine-grained knowledge priors. By constructing solid meta-knowledge that links the understanding of unseen graphs to known effective architecture patterns, DesiGNN can deliver top-5.77% initial model proposals for unseen datasets within seconds.
arXiv Detail & Related papers (2024-08-13T08:22:01Z)
- All Against Some: Efficient Integration of Large Language Models for Message Passing in Graph Neural Networks [51.19110891434727]
Large Language Models (LLMs) with pretrained knowledge and powerful semantic comprehension abilities have recently shown a remarkable ability to benefit applications using vision and text data.
E-LLaGNN is a framework with an on-demand LLM service that enriches the message-passing procedure of graph learning by enhancing a limited fraction of nodes from the graph.
arXiv Detail & Related papers (2024-07-20T22:09:42Z)
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- Disentangled Representation Learning with Large Language Models for Text-Attributed Graphs [57.052160123387104]
We present the Disentangled Graph-Text Learner (DGTL) model, which is able to enhance the reasoning and prediction capabilities of LLMs for TAGs.
Our proposed DGTL model incorporates graph structure information through tailored disentangled graph neural network (GNN) layers.
Experimental evaluations demonstrate the effectiveness of the proposed DGTL model in achieving superior or comparable performance compared to state-of-the-art baselines.
arXiv Detail & Related papers (2023-10-27T14:00:04Z)
- Great Truths are Always Simple: A Rather Simple Knowledge Encoder for Enhancing the Commonsense Reasoning Capacity of Pre-Trained Models [89.98762327725112]
Commonsense reasoning in natural language is a desired ability of artificial intelligence systems.
For solving complex commonsense reasoning tasks, a typical solution is to enhance pre-trained language models (PTMs) with a knowledge-aware graph neural network (GNN) encoder.
Despite their effectiveness, these approaches are built on heavy architectures and cannot clearly explain how external knowledge resources improve the reasoning capacity of PTMs.
arXiv Detail & Related papers (2022-05-04T01:27:36Z)
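Several entries above describe variants of the same recipe: turn graph structure into something an LLM can consume as tokens. For the STAG-style step of quantizing structural embeddings against a frozen codebook, the following is a minimal, hedged sketch; the codebook size, the embedding dimension, and the source of the node embeddings are illustrative assumptions, not details taken from the STAG paper.

```python
# Minimal sketch (illustrative assumptions, not the STAG implementation):
# continuous node embeddings are snapped to their nearest entry in a frozen
# codebook, so each node is represented by a discrete token id that an LLM
# can consume alongside text tokens.
import torch

def quantize_to_codebook(node_embeddings: torch.Tensor,
                         codebook: torch.Tensor) -> torch.Tensor:
    """Return the index of the nearest codebook vector for each node.

    node_embeddings: (num_nodes, dim) continuous embeddings (e.g. from a GNN).
    codebook:        (codebook_size, dim), frozen, i.e. never updated.
    """
    # Pairwise L2 distances between nodes and codebook entries: (num_nodes, codebook_size)
    distances = torch.cdist(node_embeddings, codebook)
    return distances.argmin(dim=-1)  # (num_nodes,) discrete token ids

# Toy usage: 5 nodes, 16-dim embeddings, a frozen codebook of 32 entries.
torch.manual_seed(0)
codebook = torch.randn(32, 16)        # frozen: excluded from gradient updates
node_embeddings = torch.randn(5, 16)
token_ids = quantize_to_codebook(node_embeddings, codebook)
print(token_ids)                      # five discrete ids indexing into the codebook
```

In a full pipeline these ids would typically be mapped to reserved vocabulary entries (or to learned soft embeddings) so the LLM can attend to them alongside the textual tokens.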
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides (including all listed content) and is not responsible for any consequences arising from its use.