Reasoning of Large Language Models over Knowledge Graphs with Super-Relations
- URL: http://arxiv.org/abs/2503.22166v1
- Date: Fri, 28 Mar 2025 06:11:04 GMT
- Title: Reasoning of Large Language Models over Knowledge Graphs with Super-Relations
- Authors: Song Wang, Junhong Lin, Xiaojie Guo, Julian Shun, Jundong Li, Yada Zhu
- Abstract summary: We propose the ReKnoS framework, which aims to Reason over Knowledge Graphs with Super-Relations. Our framework's key advantages include the aggregation of multiple relation paths through super-relations. The results demonstrate the superior performance of ReKnoS over existing state-of-the-art baselines, with an average accuracy gain of 2.92%.
- Score: 53.14275361052276
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While large language models (LLMs) have made significant progress in processing and reasoning over knowledge graphs, current methods suffer from a high non-retrieval rate. This limitation reduces the accuracy of answering questions based on these graphs. Our analysis reveals that the combination of greedy search and forward reasoning is a major contributor to this issue. To overcome these challenges, we introduce the concept of super-relations, which enables both forward and backward reasoning by summarizing and connecting various relational paths within the graph. This holistic approach not only expands the search space, but also significantly improves retrieval efficiency. In this paper, we propose the ReKnoS framework, which aims to Reason over Knowledge Graphs with Super-Relations. Our framework's key advantages include the aggregation of multiple relation paths through super-relations, enhanced forward and backward reasoning capabilities, and increased efficiency in querying LLMs. These enhancements collectively lead to a substantial improvement in the successful retrieval rate and overall reasoning performance. We conduct extensive experiments on nine real-world datasets to evaluate ReKnoS, and the results demonstrate the superior performance of ReKnoS over existing state-of-the-art baselines, with an average accuracy gain of 2.92%.
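The abstract describes the mechanism but gives no code. As a minimal sketch of how super-relations might work in practice (the toy triples, the domain-prefix grouping rule, and all names below are illustrative assumptions, not the authors' implementation), grouping fine-grained relations under a shared super-relation lets one query cover many relation paths and supports expansion in both directions:

```python
from collections import defaultdict

# Toy knowledge graph as (head, relation, tail) triples -- illustrative only.
TRIPLES = [
    ("Ang Lee", "film.director.film", "Life of Pi"),
    ("Ang Lee", "film.director.film", "Brokeback Mountain"),
    ("Life of Pi", "film.film.starring", "Suraj Sharma"),
    ("Ang Lee", "people.person.nationality", "Taiwan"),
]

def super_relation(rel: str) -> str:
    """Assumed grouping rule: collapse relations by their domain prefix,
    so 'film.director.film' and 'film.film.starring' share one super-relation."""
    return rel.split(".")[0]

def build_indexes(triples):
    fwd, bwd = defaultdict(list), defaultdict(list)
    for h, r, t in triples:
        s = super_relation(r)
        fwd[(h, s)].append((r, t))  # forward reasoning: expand out of an entity
        bwd[(t, s)].append((r, h))  # backward reasoning: expand into an entity
    return fwd, bwd

fwd, bwd = build_indexes(TRIPLES)

# One super-relation key covers several concrete relation paths at once.
print(fwd[("Ang Lee", "film")])     # both directed films under one query
print(bwd[("Life of Pi", "film")])  # edges pointing into the entity
```

Because a single super-relation key indexes several concrete relations at once, a miss on any one fine-grained relation no longer dead-ends the search, which is consistent with the abstract's claim of a higher successful retrieval rate.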
Related papers
- A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, and Beyond [88.5807076505261]
Large Reasoning Models (LRMs) have demonstrated strong performance gains by scaling up the length of Chain-of-Thought (CoT) reasoning during inference. A growing concern lies in their tendency to produce excessively long reasoning traces. This inefficiency introduces significant challenges for training, inference, and real-world deployment.
arXiv Detail & Related papers (2025-03-27T15:36:30Z)
- Fairness-Driven LLM-based Causal Discovery with Active Learning and Dynamic Scoring [1.5498930424110338]
Causal discovery (CD) plays a pivotal role in numerous scientific fields by clarifying the causal relationships that underlie phenomena observed in diverse disciplines. Despite significant advancements in CD algorithms, their application faces challenges due to the high computational demands and complexities of large-scale data. This paper introduces a framework that leverages Large Language Models (LLMs) for CD, utilizing a metadata-based approach akin to the reasoning processes of human experts.
arXiv Detail & Related papers (2025-03-21T22:58:26Z)
- HuixiangDou2: A Robustly Optimized GraphRAG Approach [11.91228019623924]
Graph-based Retrieval-Augmented Generation (GraphRAG) structures knowledge as a graph to support dynamic retrieval.
We introduce HuixiangDou2, a robustly optimized GraphRAG framework.
Specifically, we leverage the effectiveness of dual-level retrieval and optimize its performance in a 32k-token context.
arXiv Detail & Related papers (2025-03-09T06:20:24Z)
- Graph-Augmented Reasoning: Evolving Step-by-Step Knowledge Graph Retrieval for LLM Reasoning [55.6623318085391]
Recent large language model (LLM) reasoning suffers from limited domain knowledge, susceptibility to hallucinations, and constrained reasoning depth. This paper presents the first investigation into integrating step-wise knowledge graph retrieval with step-wise reasoning. We propose KG-RAR, a framework centered on process-oriented knowledge graph construction, a hierarchical retrieval strategy, and a universal post-retrieval processing and reward model.
arXiv Detail & Related papers (2025-03-03T15:20:41Z)
- Causal Graphs Meet Thoughts: Enhancing Complex Reasoning in Graph-Augmented LLMs [4.701165676405066]
It is critical not only to retrieve relevant information but also to provide causal reasoning and explainability. This paper proposes a novel pipeline that filters large knowledge graphs to emphasize cause-effect edges. Experiments on medical question-answering tasks show consistent gains, with up to a 10% absolute improvement.
arXiv Detail & Related papers (2025-01-24T19:31:06Z)
- An Empirical Analysis of Speech Self-Supervised Learning at Multiple Resolutions [0.3495246564946556]
We present an initial analysis of layer-wise representations in multi-scale architectures, with a focus on Canonical Correlation Analysis (CCA) and Mutual Information (MI).
We find that the improved performance on SUPERB tasks is primarily due to the auxiliary low-resolution loss rather than the downsampling itself.
These findings challenge assumptions about the multi-scale nature of MR-HuBERT and motivate the importance of disentangling computational efficiency from learning better representations.
arXiv Detail & Related papers (2024-10-31T14:09:05Z)
- Unlocking the Capabilities of Thought: A Reasoning Boundary Framework to Quantify and Optimize Chain-of-Thought [61.588465852846646]
Chain-of-Thought (CoT) reasoning has emerged as a promising approach for enhancing the performance of large language models (LLMs).
In this work, we introduce a novel reasoning boundary framework (RBF) to quantify and optimize CoT reasoning.
arXiv Detail & Related papers (2024-10-08T05:26:28Z)
- Think-on-Graph 2.0: Deep and Faithful Large Language Model Reasoning with Knowledge-guided Retrieval Augmented Generation [14.448198170932226]
Think-on-Graph 2.0 (ToG-2) is a hybrid RAG framework that iteratively retrieves information from both unstructured and structured knowledge sources. ToG-2 alternates between graph retrieval and context retrieval to search for in-depth clues relevant to the question. It achieves overall state-of-the-art (SOTA) performance on 6 out of 7 knowledge-intensive datasets with GPT-3.5.
arXiv Detail & Related papers (2024-07-15T15:20:40Z)
- Exploring & Exploiting High-Order Graph Structure for Sparse Knowledge Graph Completion [20.45256490854869]
We present a novel framework, LR-GCN, which automatically captures valuable long-range dependencies among entities.
The proposed approach comprises two main components: a GNN-based predictor and a reasoning path distiller.
arXiv Detail & Related papers (2023-06-29T15:35:34Z)
- Understanding Augmentation-based Self-Supervised Representation Learning via RKHS Approximation and Regression [53.15502562048627]
Recent work has built the connection between self-supervised learning and the approximation of the top eigenspace of a graph Laplacian operator.
This work delves into a statistical analysis of augmentation-based pretraining.
arXiv Detail & Related papers (2023-06-01T15:18:55Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)