Think Parallax: Solving Multi-Hop Problems via Multi-View Knowledge-Graph-Based Retrieval-Augmented Generation
- URL: http://arxiv.org/abs/2510.15552v1
- Date: Fri, 17 Oct 2025 11:34:27 GMT
- Title: Think Parallax: Solving Multi-Hop Problems via Multi-View Knowledge-Graph-Based Retrieval-Augmented Generation
- Authors: Jinliang Liu
- Abstract summary: Large language models (LLMs) excel at language understanding but often hallucinate and struggle with multi-hop reasoning. We propose ParallaxRAG, a framework that symmetrically decouples queries and graph triples into multi-view spaces. Our results highlight multi-view head specialization as a principled direction for knowledge-grounded multi-hop reasoning.
- Score: 2.8890464940342873
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) excel at language understanding but often hallucinate and struggle with multi-hop reasoning. Knowledge-graph-based retrieval-augmented generation (KG-RAG) offers grounding, yet most methods rely on flat embeddings and noisy path exploration. We propose ParallaxRAG, a framework that symmetrically decouples queries and graph triples into multi-view spaces, enabling a robust retrieval architecture that explicitly enforces head diversity while constraining weakly related paths. Central to our approach is the observation that different attention heads specialize in semantic relations at distinct reasoning stages, contributing to different hops of the reasoning chain. This specialization allows ParallaxRAG to construct cleaner subgraphs and guide LLMs through grounded, step-wise reasoning. Experiments on WebQSP and CWQ, under our unified, reproducible setup (BGE-M3 + Llama3.1-8B), demonstrate competitive retrieval and QA performance, alongside reduced hallucination and good generalization. Our results highlight multi-view head specialization as a principled direction for knowledge-grounded multi-hop reasoning. Our implementation will be released as soon as the paper is accepted.
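As a loose illustration of the multi-view retrieval idea, the sketch below projects a query and candidate triple embeddings into several view spaces, retrieves top candidates per view, and drops weakly related ones. Everything here (the per-view projections `W`, the threshold `tau`, the pruning rule) is a hypothetical stand-in; the abstract does not specify ParallaxRAG's actual mechanics.

```python
# Minimal multi-view retrieval sketch; all names and rules are invented.
import numpy as np

rng = np.random.default_rng(0)

n_views, dim = 4, 32                      # e.g., one view per attention-head group
W = rng.normal(size=(n_views, dim, dim))  # hypothetical per-view projection matrices

def embed_views(x: np.ndarray) -> np.ndarray:
    """Project a base embedding into every view space (applied
    symmetrically to queries and triples)."""
    v = np.einsum("vij,j->vi", W, x)
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

query = rng.normal(size=dim)
triples = rng.normal(size=(100, dim))     # stand-ins for KG triple embeddings

q_views = embed_views(query)                           # (n_views, dim)
t_views = np.stack([embed_views(t) for t in triples])  # (100, n_views, dim)

# Per-view cosine similarity: each view retrieves its own candidates,
# which is where head specialization per reasoning hop would enter.
sims = np.einsum("vd,nvd->nv", q_views, t_views)       # (100, n_views)

k, tau = 5, 0.2
subgraph = set()
for v in range(n_views):
    top = np.argsort(-sims[:, v])[:k]
    # Constrain weakly related paths: drop candidates below a threshold.
    subgraph.update(int(i) for i in top if sims[i, v] > tau)

print(f"retained {len(subgraph)} triples across {n_views} views")
```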
Related papers
- ProgRAG: Hallucination-Resistant Progressive Retrieval and Reasoning over Knowledge Graphs [2.9539912037183362]
Large Language Models (LLMs) demonstrate strong reasoning capabilities but struggle with hallucinations and limited transparency. We propose ProgRAG, a multi-hop knowledge graph question answering (KGQA) framework that decomposes complex questions into sub-questions and extends partial reasoning paths. Experiments on three well-known datasets demonstrate that ProgRAG outperforms existing baselines in multi-hop KGQA.
arXiv Detail & Related papers (2025-11-13T12:14:36Z)
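A minimal toy sketch of the decompose-then-extend pattern described above, with a hand-built KG and a stubbed decomposition step (`decompose` simply returns canned sub-questions); ProgRAG's real prompting, path scoring, and pruning are not shown in this summary.

```python
# Toy progressive retrieval: decompose the question, then extend
# partial KG paths one hop per sub-question.
KG = {  # entity -> list of (relation, object) edges
    "Inception": [("directed_by", "Christopher Nolan")],
    "Christopher Nolan": [("born_in", "London")],
}

def decompose(question: str) -> list[str]:
    # Placeholder for an LLM decomposition step.
    return ["Who directed Inception?", "Where was that person born?"]

def extend_paths(paths, sub_q):
    """Extend each partial path by one KG hop (a real system would
    rank extensions against sub_q and prune)."""
    new_paths = []
    for path in paths:
        # Seed empty paths with the question's topic entity.
        head = path[-1][-1] if path else "Inception"
        for rel, obj in KG.get(head, []):
            new_paths.append(path + [(head, rel, obj)])
    return new_paths

paths = [[]]
for sub_q in decompose("Where was the director of Inception born?"):
    paths = extend_paths(paths, sub_q)

print(paths[0])  # the completed two-hop reasoning path
```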
- Thinker: Training LLMs in Hierarchical Thinking for Deep Search via Multi-Turn Interaction [57.67217258741752]
Thinker is a hierarchical thinking model for deep search through multi-turn interaction. It decomposes complex problems into independently solvable sub-problems; dependencies between sub-problems are passed as parameters via logical functions.
arXiv Detail & Related papers (2025-11-11T07:48:45Z)
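The "dependencies passed as parameters via logical functions" idea can be pictured as below; `solve`, the sub-question functions, and the canned answers are all invented for illustration and stand in for multi-turn LLM calls.

```python
# Each sub-problem is a function whose arguments are the answers to the
# sub-problems it depends on.
def solve(sub_question: str) -> str:
    answers = {  # stub for an LLM/search call
        "Who directed Inception?": "Christopher Nolan",
        "Where was Christopher Nolan born?": "London",
    }
    return answers.get(sub_question, "unknown")

def sub_q1() -> str:
    return solve("Who directed Inception?")

def sub_q2(director: str) -> str:
    # The dependency (sub_q1's answer) arrives as a parameter.
    return solve(f"Where was {director} born?")

print(sub_q2(sub_q1()))  # -> London
```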
- GRIL: Knowledge Graph Retrieval-Integrated Learning with Large Language Models [59.72897499248909]
We propose a novel graph retriever trained end-to-end with Large Language Models (LLMs). Within the extracted subgraph, structural knowledge and semantic features are encoded via soft tokens and the verbalized graph, respectively, and both are infused into the LLM together. Our approach consistently achieves state-of-the-art performance, validating the strength of joint graph-LLM optimization for complex reasoning tasks.
arXiv Detail & Related papers (2025-09-20T02:38:00Z)
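A rough sketch of combining soft tokens (structure) with text embeddings (the verbalized graph), assuming a decoder that accepts precomputed input embeddings, as Hugging Face models do via `inputs_embeds`. The graph encoder and all dimensions are made up for illustration.

```python
# Encode subgraph structure as a few soft tokens and prepend them to the
# embeddings of the verbalized graph + question.
import torch
import torch.nn as nn

d_model, n_soft = 64, 4

graph_encoder = nn.Sequential(  # stand-in for a real GNN over the subgraph
    nn.Linear(16, d_model), nn.ReLU(), nn.Linear(d_model, n_soft * d_model)
)

node_feats = torch.randn(10, 16)                       # extracted subgraph, 10 nodes
graph_summary = graph_encoder(node_feats).mean(dim=0)  # pool over nodes
soft_tokens = graph_summary.view(n_soft, d_model)      # structural knowledge

text_embeds = torch.randn(32, d_model)  # embeddings of verbalized graph + question

# Infuse both signals: soft tokens carry structure, text carries semantics.
llm_inputs = torch.cat([soft_tokens, text_embeds], dim=0).unsqueeze(0)
print(llm_inputs.shape)  # (1, n_soft + 32, d_model)
```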
- Towards Agentic RAG with Deep Reasoning: A Survey of RAG-Reasoning Systems in LLMs [69.10441885629787]
Retrieval-Augmented Generation (RAG) lifts the factuality of Large Language Models (LLMs) by injecting external knowledge, yet it falls short on problems that demand multi-step inference; conversely, purely reasoning-oriented approaches often hallucinate or mis-ground facts. This survey synthesizes both strands under a unified reasoning-retrieval perspective.
arXiv Detail & Related papers (2025-07-13T03:29:41Z)
- Learning Efficient and Generalizable Graph Retriever for Knowledge-Graph Question Answering [75.12322966980003]
Large Language Models (LLMs) have shown strong inductive reasoning ability across various domains, but most existing RAG pipelines rely on unstructured text, limiting interpretability and structured reasoning. Recent studies have explored integrating knowledge graphs with LLMs for knowledge graph question answering. We propose RAPL, a novel framework for efficient and effective graph retrieval in KGQA.
arXiv Detail & Related papers (2025-06-11T12:03:52Z)
- PropRAG: Guiding Retrieval with Beam Search over Proposition Paths [15.346744525284604]
PropRAG is a novel RAG framework that shifts retrieval from triples to context-rich propositions, guiding search with beam search over proposition paths. PropRAG achieves strong Recall@5 and F1 scores on 2Wiki, HotpotQA, and MuSiQue.
arXiv Detail & Related papers (2025-04-25T04:47:34Z)
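A toy beam search over proposition paths in the spirit of the description above; the propositions, the word-overlap scorer, and the beam width are all placeholders for PropRAG's actual components.

```python
# Beam search over sequences of propositions, ranked by a stub scorer.
import re

propositions = [
    "Nolan directed Inception.",
    "Nolan was born in London.",
    "London is in England.",
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

def score(path: tuple[str, ...], query: str) -> int:
    # Placeholder relevance model: count query words shared with the path.
    q = tokens(query)
    return sum(len(q & tokens(p)) for p in path)

query = "Where was the director of Inception born?"
beam_width, max_hops = 2, 2
beams: list[tuple[str, ...]] = [()]
for _ in range(max_hops):
    candidates = [b + (p,) for b in beams for p in propositions if p not in b]
    beams = sorted(candidates, key=lambda b: -score(b, query))[:beam_width]

print(beams[0])  # the highest-scoring two-proposition path
```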
- HopRAG: Multi-Hop Reasoning for Logic-Aware Retrieval-Augmented Generation [28.69822159828129]
We propose HopRAG, a novel RAG framework that augments retrieval with logical reasoning through graph-structured knowledge exploration. During indexing, HopRAG constructs a passage graph, with text chunks as vertices and logical connections established via LLM-generated pseudo-queries as edges. During retrieval, it employs a retrieve-reason-prune mechanism: starting with lexically or semantically similar passages, the system explores multi-hop neighbors guided by pseudo-queries and LLM reasoning to identify truly relevant ones.
arXiv Detail & Related papers (2025-02-18T02:24:42Z)
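The retrieve-reason-prune loop lends itself to a compact sketch; the passage graph, pseudo-query edges, and the `relevant` test below are stand-ins for the LLM-guided machinery the abstract describes.

```python
# Traverse a passage graph from seed passages, keeping a neighbor only
# when its pseudo-query edge appears to help answer the question.
passage_graph = {  # vertex -> list of (pseudo_query, neighbor) edges
    "p1": [("Who directed the film?", "p2")],
    "p2": [("Where was the director born?", "p3")],
    "p3": [],
}

def relevant(pseudo_query: str, question: str) -> bool:
    # Stub for LLM reasoning over whether the hop helps answer `question`.
    return any(w in question.lower() for w in pseudo_query.lower().split())

question = "Where was the director of Inception born?"
retrieved, frontier = {"p1"}, ["p1"]  # start from lexically similar seeds
while frontier:
    node = frontier.pop()
    for pseudo_query, neighbor in passage_graph[node]:
        if neighbor not in retrieved and relevant(pseudo_query, question):
            retrieved.add(neighbor)    # keep: the hop is justified
            frontier.append(neighbor)  # ...and explore onward

print(sorted(retrieved))  # pruned multi-hop neighborhood, e.g. ['p1', 'p2', 'p3']
```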
- PathFinder: Guided Search over Multi-Step Reasoning Paths [80.56102301441899]
We propose PathFinder, a tree-search-based reasoning path generation approach.
It enhances diverse branching and multi-hop reasoning through the integration of dynamic decoding.
Our model generalizes well to longer, unseen reasoning chains, reflecting similar complexities to beam search with large branching factors.
arXiv Detail & Related papers (2023-12-08T17:05:47Z)
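A compact tree-search skeleton for reasoning-path generation, loosely following the description above; the step generator and scorer are stubs for what would be an LLM with dynamic decoding.

```python
# Best-first tree search over candidate reasoning paths.
import heapq

def generate_steps(path: list[str]) -> list[str]:
    # Placeholder: an LLM would sample diverse next reasoning steps here.
    return [f"step{len(path) + 1}a", f"step{len(path) + 1}b"]

def score(path: list[str]) -> float:
    return -len(path)  # stub: a real scorer ranks step quality

best, frontier = None, [(0.0, [])]
while frontier:
    neg_score, path = heapq.heappop(frontier)
    if len(path) == 3:  # depth limit stands in for an end-of-chain check
        best = path
        break
    for step in generate_steps(path):
        new_path = path + [step]
        heapq.heappush(frontier, (-score(new_path), new_path))

print(best)  # e.g. ['step1a', 'step2a', 'step3a']
```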
- Dynamic Semantic Graph Construction and Reasoning for Explainable Multi-hop Science Question Answering [50.546622625151926]
We propose a new framework to exploit more valid facts while obtaining explainability for multi-hop QA.
Our framework contains three new ideas: (a) AMR-SG, an AMR-based semantic graph constructed from candidate fact AMRs to uncover any-hop relations among the question, the answer, and multiple facts; (b) a novel path-based fact analytics approach exploiting AMR-SG to extract active facts from a large fact pool to answer questions; and (c) fact-level relation modeling leveraging a graph convolutional network (GCN) to guide the reasoning process.
arXiv Detail & Related papers (2021-05-25T09:14:55Z)
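Idea (c), fact-level relation modeling with a GCN, reduces to message passing over a fact graph. A minimal single-layer version follows; the adjacency, features, and sizes are invented, and AMR parsing is omitted entirely.

```python
# One round of GCN message passing over a toy fact graph.
import torch

n_facts, d = 5, 8
A = torch.eye(n_facts)   # adjacency with self-loops
A[0, 1] = A[1, 0] = 1.0  # e.g., two facts sharing an AMR concept
A[1, 2] = A[2, 1] = 1.0

deg = A.sum(dim=1)
A_norm = A / torch.outer(deg.sqrt(), deg.sqrt())  # symmetric normalization

X = torch.randn(n_facts, d)  # fact embeddings
W = torch.nn.Linear(d, d, bias=False)

H = torch.relu(A_norm @ W(X))  # updated fact representations
print(H.shape)                 # (5, 8)
```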
- Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering [35.40919477319811]
We propose a novel knowledge-aware approach that equips pre-trained language models with a multi-hop relational reasoning module.
It performs multi-hop, multi-relational reasoning over subgraphs extracted from external knowledge graphs.
It unifies path-based reasoning methods and graph neural networks to achieve better interpretability and scalability.
arXiv Detail & Related papers (2020-05-01T23:10:26Z)
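The path-based half of this idea can be pictured as enumerating relation paths between question and answer concepts in the extracted subgraph and pooling their scores; the toy below replaces the learned GNN aggregation with fixed per-relation weights.

```python
# Enumerate relation paths up to K hops in a toy subgraph, then max-pool.
subgraph = {
    "q_concept": [("related_to", "mid"), ("is_a", "other")],
    "mid": [("causes", "a_concept")],
    "other": [],
    "a_concept": [],
}
rel_weight = {"related_to": 0.6, "is_a": 0.2, "causes": 0.9}  # stub scores

def paths(src, dst, k, prefix=()):
    if src == dst and prefix:
        yield prefix
    if k == 0:
        return
    for rel, nxt in subgraph[src]:
        yield from paths(nxt, dst, k - 1, prefix + (rel,))

all_paths = list(paths("q_concept", "a_concept", k=2))
score = max(  # max-pool path scores; a GNN would learn this aggregation
    (min(rel_weight[r] for r in p) for p in all_paths), default=0.0
)
print(all_paths, score)  # [('related_to', 'causes')] 0.6
```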