Mixture of Length and Pruning Experts for Knowledge Graphs Reasoning
- URL: http://arxiv.org/abs/2507.20498v1
- Date: Mon, 28 Jul 2025 03:30:28 GMT
- Title: Mixture of Length and Pruning Experts for Knowledge Graphs Reasoning
- Authors: Enjun Du, Siyi Liu, Yongqi Zhang
- Abstract summary: We propose MoKGR, a mixture-of-experts framework that personalizes path exploration. MoKGR demonstrates superior performance in both transductive and inductive settings.
- Score: 9.894106590443714
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge Graph (KG) reasoning, which aims to infer new facts from structured knowledge repositories, plays a vital role in Natural Language Processing (NLP) systems. Its effectiveness critically depends on constructing informative and contextually relevant reasoning paths. However, existing graph neural networks (GNNs) often adopt rigid, query-agnostic path-exploration strategies, limiting their ability to adapt to diverse linguistic contexts and semantic nuances. To address these limitations, we propose MoKGR, a mixture-of-experts framework that personalizes path exploration through two complementary components: (1) a mixture of length experts that adaptively selects and weights candidate path lengths according to query complexity, providing query-specific reasoning depth; and (2) a mixture of pruning experts that evaluates candidate paths from a complementary perspective, retaining the most informative paths for each query. Through comprehensive experiments on diverse benchmarks, MoKGR demonstrates superior performance in both transductive and inductive settings, validating the effectiveness of personalized path exploration in KG reasoning.
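The length-expert component described in the abstract — a gate that weights candidate path lengths per query and combines per-length evidence — can be illustrated with a minimal toy sketch. All names, dimensions, and scores below are hypothetical stand-ins, not the paper's actual model or parameters:

```python
import math
import random

random.seed(0)


def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]


class LengthExpertMixture:
    """Toy gate over candidate path lengths: a per-length gating vector
    scores the query, and the final score is a gate-weighted sum of
    per-length answer scores (a convex combination)."""

    def __init__(self, lengths, dim):
        self.lengths = lengths
        # Random gating vectors stand in for learned parameters.
        self.gates = {
            L: [random.uniform(-1, 1) for _ in range(dim)] for L in lengths
        }

    def gate_weights(self, query_vec):
        logits = [
            sum(g * q for g, q in zip(self.gates[L], query_vec))
            for L in self.lengths
        ]
        return softmax(logits)

    def score(self, query_vec, per_length_scores):
        weights = self.gate_weights(query_vec)
        return sum(w * per_length_scores[L] for w, L in zip(weights, self.lengths))


moe = LengthExpertMixture(lengths=[1, 2, 3], dim=4)
query = [0.2, -0.5, 0.7, 0.1]                 # hypothetical query embedding
per_length = {1: 0.9, 2: 0.4, 3: 0.1}         # hypothetical per-length evidence
weights = moe.gate_weights(query)
combined = moe.score(query, per_length)
print(weights, combined)
```

Because the gate is a softmax, the combined score always lies between the best and worst per-length score; queries whose embedding aligns with a given length's gating vector lean more heavily on that reasoning depth.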
Related papers
- StruProKGR: A Structural and Probabilistic Framework for Sparse Knowledge Graph Reasoning [68.58655814341996]
Sparse Knowledge Graphs (KGs) are commonly encountered in real-world applications, where knowledge is often incomplete or limited. We propose a Structural and Probabilistic framework named StruProKGR, tailored for efficient and interpretable reasoning on sparse KGs.
arXiv Detail & Related papers (2025-12-14T09:36:58Z) - ProgRAG: Hallucination-Resistant Progressive Retrieval and Reasoning over Knowledge Graphs [2.9539912037183362]
Large Language Models (LLMs) demonstrate strong reasoning capabilities but struggle with hallucinations and limited transparency. We propose ProgRAG, a multi-hop knowledge graph question answering (KGQA) framework that decomposes complex questions into sub-questions and extends partial reasoning paths. Experiments on three well-known datasets demonstrate that ProgRAG outperforms existing baselines in multi-hop KGQA.
arXiv Detail & Related papers (2025-11-13T12:14:36Z) - Grounding Long-Context Reasoning with Contextual Normalization for Retrieval-Augmented Generation [57.97548022208733]
We show that seemingly superficial choices in key-value extraction can induce shifts in accuracy and stability. We introduce Contextual Normalization, a strategy that adaptively standardizes context representations before generation.
arXiv Detail & Related papers (2025-10-15T06:28:25Z) - Guided Navigation in Knowledge-Dense Environments: Structured Semantic Exploration with Guidance Graphs [21.84798899012135]
We propose a novel framework that introduces an intermediate Guidance Graph to bridge unstructured queries and structured knowledge retrieval. The Guidance Graph defines the retrieval space by abstracting the target knowledge's structure while preserving broader semantic context. Our method achieves superior efficiency and outperforms SOTA, especially on complex tasks.
arXiv Detail & Related papers (2025-08-06T08:47:57Z) - KGRAG-Ex: Explainable Retrieval-Augmented Generation with Knowledge Graph-based Perturbations [2.287415292857565]
Knowledge graphs (KGs) offer a solution by introducing structured, semantically rich representations of entities and their relationships. We present KGRAG-Ex, a RAG system that improves both factual grounding and explainability by leveraging a domain-specific KG. Given a user query, KGRAG-Ex identifies relevant entities and semantic paths in the graph, which are then transformed into pseudo-paragraphs.
arXiv Detail & Related papers (2025-07-11T09:35:13Z) - Reliable Reasoning Path: Distilling Effective Guidance for LLM Reasoning with Knowledge Graphs [14.60537408321632]
Large language models (LLMs) often struggle with knowledge-intensive tasks due to a lack of background knowledge. We propose the RRP framework to mine the knowledge graph. We also introduce a rethinking module that evaluates and refines reasoning paths according to their significance.
arXiv Detail & Related papers (2025-06-12T09:10:32Z) - Learning Efficient and Generalizable Graph Retriever for Knowledge-Graph Question Answering [75.12322966980003]
Large Language Models (LLMs) have shown strong inductive reasoning ability across various domains. Most existing RAG pipelines rely on unstructured text, limiting interpretability and structured reasoning. Recent studies have explored integrating knowledge graphs with LLMs for knowledge graph question answering. We propose RAPL, a novel framework for efficient and effective graph retrieval in KGQA.
arXiv Detail & Related papers (2025-06-11T12:03:52Z) - Improving Multilingual Retrieval-Augmented Language Models through Dialectic Reasoning Argumentations [65.11348389219887]
We introduce Dialectic-RAG (DRAG), a modular approach that evaluates retrieved information by comparing, contrasting, and resolving conflicting perspectives. We show the impact of our framework both as an in-context learning strategy and for constructing demonstrations to instruct smaller models.
arXiv Detail & Related papers (2025-04-07T06:55:15Z) - Graph Retrieval-Augmented LLM for Conversational Recommendation Systems [52.35491420330534]
G-CRS (Graph Retrieval-Augmented Large Language Model for Conversational Recommender Systems) is a training-free framework that combines graph retrieval-augmented generation and in-context learning. G-CRS achieves superior recommendation performance compared to existing methods without requiring task-specific training.
arXiv Detail & Related papers (2025-03-09T03:56:22Z) - GIVE: Structured Reasoning of Large Language Models with Knowledge Graph Inspired Veracity Extrapolation [108.2008975785364]
Graph Inspired Veracity Extrapolation (GIVE) is a novel reasoning method that merges parametric and non-parametric memories to improve accurate reasoning with minimal external input. GIVE guides the LLM agent to select the most pertinent expert data (observe), engage in query-specific divergent thinking (reflect), and then synthesize this information to produce the final output (speak).
arXiv Detail & Related papers (2024-10-11T03:05:06Z) - Query-Enhanced Adaptive Semantic Path Reasoning for Inductive Knowledge Graph Completion [45.9995456784049]
This paper proposes the Query-Enhanced Adaptive Semantic Path Reasoning (QASPR) framework.
QASPR captures both the structural and semantic information of KGs to enhance the inductive KGC task.
Experimental results demonstrate that QASPR achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-06-04T11:02:15Z) - Exploring & Exploiting High-Order Graph Structure for Sparse Knowledge Graph Completion [20.45256490854869]
We present a novel framework, LR-GCN, that is able to automatically capture valuable long-range dependency among entities.
The proposed approach comprises two main components: a GNN-based predictor and a reasoning path distiller.
arXiv Detail & Related papers (2023-06-29T15:35:34Z) - Dual Semantic Knowledge Composed Multimodal Dialog Systems [114.52730430047589]
We propose a novel multimodal task-oriented dialog system named MDS-S2.
It acquires the context related attribute and relation knowledge from the knowledge base.
We also devise a set of latent query variables to distill the semantic information from the composed response representation.
arXiv Detail & Related papers (2023-05-17T06:33:26Z) - Multimodal Dialog Systems with Dual Knowledge-enhanced Generative Pretrained Language Model [63.461030694700014]
We propose a novel dual knowledge-enhanced generative pretrained language model for multimodal task-oriented dialog systems (DKMD).
The proposed DKMD consists of three key components: dual knowledge selection, dual knowledge-enhanced context learning, and knowledge-enhanced response generation.
Experiments on a public dataset verify the superiority of the proposed DKMD over state-of-the-art competitors.
arXiv Detail & Related papers (2022-07-16T13:02:54Z) - Knowledge Graph Augmented Network Towards Multiview Representation Learning for Aspect-based Sentiment Analysis [96.53859361560505]
We propose a knowledge graph augmented network (KGAN) to incorporate external knowledge with explicitly syntactic and contextual information.
KGAN captures the sentiment feature representations from multiple perspectives, i.e., context-, syntax- and knowledge-based.
Experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN.
arXiv Detail & Related papers (2022-01-13T08:25:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.