Structure-Augmented Reasoning Generation
- URL: http://arxiv.org/abs/2506.08364v3
- Date: Mon, 11 Aug 2025 03:38:25 GMT
- Title: Structure-Augmented Reasoning Generation
- Authors: Jash Rajesh Parekh, Pengcheng Jiang, Jiawei Han
- Abstract summary: Retrieval-Augmented Generation (RAG) systems fail at complex multi-hop reasoning because they rely on large language models to implicitly connect information from unstructured document collections. This fundamental limitation stems from treating retrieved passages as independent context rather than recognizing the intricate relationships that enable coherent reasoning chains. We introduce SARG, a post-retrieval framework that transforms traditional RAG pipelines by materializing explicit reasoning structures.
- Score: 23.587337743113228
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Retrieval-Augmented Generation (RAG) systems fail at complex multi-hop reasoning because they rely on large language models to implicitly connect information from unstructured document collections. This fundamental limitation stems from treating retrieved passages as independent context rather than recognizing the intricate relationships that enable coherent reasoning chains. We introduce SARG (Structure-Augmented Reasoning Generation), a post-retrieval framework that transforms traditional RAG pipelines by materializing explicit reasoning structures. SARG extracts {cause, relation, effect} triples from retrieved documents, constructs domain-adaptive graphs, and performs multi-hop traversal to discover reasoning chains that bridge query concepts to answers. Unlike existing approaches that modify retrieval mechanisms, SARG operates as a plug-and-play reasoning layer compatible with any RAG system. Extensive evaluation across diverse domains (general QA, biomedical literature, and financial analysis) demonstrates that SARG achieves substantial improvements over state-of-the-art RAG baselines. Crucially, SARG also provides full reasoning traceability through explicit inference chains, addressing the critical interpretability gap in current RAG systems. Our results establish that explicit structural reasoning is not merely beneficial but essential for reliable complex question answering, offering a solution to RAG's implicit reasoning bottleneck.
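The abstract describes a three-stage post-retrieval pipeline: extract {cause, relation, effect} triples, build a graph over them, and traverse it to find multi-hop chains from query concepts to answers. The Python sketch below illustrates only that general shape; the function names, the hand-written triples standing in for LLM extraction, and the traversal depth limit are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict, deque

def extract_triples(passages):
    """Placeholder for LLM-based extraction of {cause, relation, effect} triples.
    A real system would prompt an LLM per retrieved passage; here we return a
    hand-written example so the sketch runs end to end."""
    return [
        ("interest rate hike", "reduces", "consumer borrowing"),
        ("consumer borrowing", "drives", "retail sales"),
        ("retail sales", "affects", "retailer revenue"),
    ]

def build_graph(triples):
    """Directed graph mapping each cause to its outgoing (relation, effect) edges."""
    graph = defaultdict(list)
    for cause, relation, effect in triples:
        graph[cause].append((relation, effect))
    return graph

def find_chains(graph, source, target, max_hops=4):
    """Breadth-first traversal returning every reasoning chain (up to max_hops)
    linking a query concept to an answer concept."""
    chains, queue = [], deque([[(None, source)]])
    while queue:
        path = queue.popleft()
        node = path[-1][1]
        if node == target:
            chains.append(path)
            continue
        if len(path) > max_hops:
            continue
        for relation, nxt in graph.get(node, []):
            if all(nxt != step[1] for step in path):  # avoid revisiting nodes
                queue.append(path + [(relation, nxt)])
    return chains

passages = ["...retrieved documents..."]
graph = build_graph(extract_triples(passages))
for chain in find_chains(graph, "interest rate hike", "retailer revenue"):
    print(" -> ".join(f"{rel or 'QUERY'}:{node}" for rel, node in chain))
```

In a full system, the discovered chains would presumably be passed to the generator alongside the retrieved passages; the abstract's traceability claim rests on these chains being explicit rather than implicit in the model's attention.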
Related papers
- ROG: Retrieval-Augmented LLM Reasoning for Complex First-Order Queries over Knowledge Graphs [14.25887925588904]
We propose a retrieval-augmented framework that combines query-aware neighborhood retrieval with large language model (LLM) chain-of-thought reasoning. ROG decomposes a multi-operator query into a sequence of single-operator sub-queries. Intermediate answer sets are cached and reused across steps, improving consistency on deep reasoning chains. (A minimal sketch of this decompose-and-cache pattern appears after this list.)
arXiv Detail & Related papers (2026-02-02T17:45:43Z) - CoT-Seg: Rethinking Segmentation with Chain-of-Thought Reasoning and Self-Correction [50.67483317563736]
This paper aims to explore a system that can think step-by-step, look up information if needed, generate results, self-evaluate its own results, and refine the results. We introduce CoT-Seg, a training-free framework that rethinks reasoning segmentation by combining chain-of-thought reasoning with self-correction.
arXiv Detail & Related papers (2026-01-24T11:41:54Z) - PruneRAG: Confidence-Guided Query Decomposition Trees for Efficient Retrieval-Augmented Generation [19.832367438725306]
PruneRAG builds a structured query decomposition tree to perform stable and efficient reasoning. We define the Evidence Forgetting Rate as a metric to quantify cases where golden evidence is retrieved but not correctly used. (One plausible way to compute such a metric is sketched after this list.)
arXiv Detail & Related papers (2026-01-16T06:38:17Z) - Multi-hop Reasoning via Early Knowledge Alignment [68.28168992785896]
Early Knowledge Alignment (EKA) aims to align Large Language Models with contextually relevant retrieved knowledge. EKA significantly improves retrieval precision, reduces cascading errors, and enhances both performance and efficiency. EKA proves effective as a versatile, training-free inference strategy that scales seamlessly to large models.
arXiv Detail & Related papers (2025-12-23T08:14:44Z) - Causal-Counterfactual RAG: The Integration of Causal-Counterfactual Reasoning into RAG [2.3490649790592935]
Large language models (LLMs) have transformed natural language processing (NLP), enabling diverse applications by integrating large-scale pre-trained knowledge. Retrieval-Augmented Generation (RAG) addresses this challenge by combining retrieval mechanisms with generative modeling to improve contextual understanding. We propose Causal-Counterfactual RAG, a novel framework that integrates explicit causal graphs representing cause-effect relationships into the retrieval process and incorporates counterfactual reasoning grounded on the causal structure.
arXiv Detail & Related papers (2025-09-17T21:18:47Z) - LAG: Logic-Augmented Generation from a Cartesian Perspective [7.2022636966543745]
This paper introduces Logic-Augmented Generation (LAG), a novel paradigm that reframes knowledge augmentation through systematic question decomposition and dependency-aware reasoning. Experiments on four benchmark datasets demonstrate that LAG significantly enhances reasoning robustness, reduces hallucination, and aligns LLM problem-solving with human cognition.
arXiv Detail & Related papers (2025-08-07T15:42:00Z) - Towards Agentic RAG with Deep Reasoning: A Survey of RAG-Reasoning Systems in LLMs [69.10441885629787]
Retrieval-Augmented Generation (RAG) lifts the factuality of Large Language Models (LLMs) by injecting external knowledge. It falls short on problems that demand multi-step inference; conversely, purely reasoning-oriented approaches often hallucinate or mis-ground facts. This survey synthesizes both strands under a unified reasoning-retrieval perspective.
arXiv Detail & Related papers (2025-07-13T03:29:41Z) - Inference Scaled GraphRAG: Improving Multi Hop Question Answering on Knowledge Graphs [15.036480111358369]
Large Language Models (LLMs) have achieved impressive capabilities in language understanding and generation. They continue to underperform on knowledge-intensive reasoning tasks due to limited access to structured context and multi-hop information. We introduce Inference-Scaled GraphRAG, a novel framework that enhances LLM-based graph reasoning by applying inference-time compute scaling.
arXiv Detail & Related papers (2025-06-24T19:31:03Z) - Learning Efficient and Generalizable Graph Retriever for Knowledge-Graph Question Answering [75.12322966980003]
Large Language Models (LLMs) have shown strong inductive reasoning ability across various domains. Most existing RAG pipelines rely on unstructured text, limiting interpretability and structured reasoning. Recent studies have explored integrating knowledge graphs with LLMs for knowledge graph question answering. We propose RAPL, a novel framework for efficient and effective graph retrieval in KGQA.
arXiv Detail & Related papers (2025-06-11T12:03:52Z) - Reasoning with RAGged events: RAG-Enhanced Event Knowledge Base Construction and reasoning with proof-assistants [0.9790236766474201]
This paper develops automatic historical event extraction models using multiple LLMs. We conduct evaluations using historical texts from Thucydides. We develop an automated translation pipeline converting extracted RDF representations into Coq proof assistant specifications.
arXiv Detail & Related papers (2025-06-08T08:36:14Z) - Graph-based RAG Enhancement via Global Query Disambiguation and Dependency-Aware Reranking [9.280502741892676]
PankRAG is a globally aware, hierarchical query-resolution strategy with a novel dependency-aware reranking mechanism. It applies its dependency-aware reranker to exploit the dependency structure among resolved sub-questions. PankRAG consistently outperforms state-of-the-art approaches across multiple benchmarks.
arXiv Detail & Related papers (2025-06-07T07:17:14Z) - KAQG: A Knowledge-Graph-Enhanced RAG for Difficulty-Controlled Question Generation [0.0]
KAQG introduces a decisive breakthrough for Retrieval-Augmented Generation (RAG). It tackles the two chronic weaknesses of current pipelines: transparent multi-step reasoning and fine-grained cognitive difficulty control. Technically, the framework fuses knowledge graphs, RAG retrieval, and educational assessment theory into a single pipeline.
arXiv Detail & Related papers (2025-05-12T14:42:19Z) - AlignRAG: Leveraging Critique Learning for Evidence-Sensitive Retrieval-Augmented Reasoning [61.28113271728859]
RAG has become a widely adopted paradigm for enabling knowledge-grounded large language models (LLMs). Standard RAG pipelines often fail to ensure that model reasoning remains consistent with the evidence retrieved, leading to factual inconsistencies or unsupported conclusions. In this work, we reinterpret RAG as Retrieval-Augmented Reasoning and identify a central but underexplored problem: Reasoning Misalignment.
arXiv Detail & Related papers (2025-04-21T04:56:47Z) - CDF-RAG: Causal Dynamic Feedback for Adaptive Retrieval-Augmented Generation [3.8808821719659763]
We introduce Causal Dynamic Feedback for Adaptive Retrieval-Augmented Generation (CDF-RAG). CDF-RAG iteratively refines queries, retrieves structured causal graphs, and enables multi-hop causal reasoning across interconnected knowledge sources. We evaluate CDF-RAG on four diverse datasets, demonstrating its ability to improve response accuracy and causal correctness over existing RAG-based methods.
arXiv Detail & Related papers (2025-04-17T01:15:13Z) - Improving Multilingual Retrieval-Augmented Language Models through Dialectic Reasoning Argumentations [65.11348389219887]
We introduce Dialectic-RAG (DRAG), a modular approach that evaluates retrieved information by comparing, contrasting, and resolving conflicting perspectives. We show the impact of our framework both as an in-context learning strategy and for constructing demonstrations to instruct smaller models.
arXiv Detail & Related papers (2025-04-07T06:55:15Z) - CausalRAG: Integrating Causal Graphs into Retrieval-Augmented Generation [11.265999775635823]
CausalRAG is a novel framework that incorporates causal graphs into the retrieval process. By constructing and tracing causal relationships, CausalRAG preserves contextual continuity and improves retrieval precision. Our findings suggest that grounding retrieval in causal reasoning provides a promising approach to knowledge-intensive tasks.
arXiv Detail & Related papers (2025-03-25T17:43:08Z) - Chain-of-Retrieval Augmented Generation [72.06205327186069]
This paper introduces an approach for training o1-like RAG models that retrieve and reason over relevant information step by step before generating the final answer. Our proposed method, CoRAG, allows the model to dynamically reformulate the query based on the evolving state.
arXiv Detail & Related papers (2025-01-24T09:12:52Z) - GRS-QA -- Graph Reasoning-Structured Question Answering Dataset [50.223851616680754]
We introduce the Graph Reasoning-Structured Question Answering dataset (GRS-QA), which includes both semantic contexts and reasoning structures for QA pairs.
Unlike existing M-QA datasets, GRS-QA explicitly captures intricate reasoning pathways by constructing reasoning graphs.
Our empirical analysis reveals that LLMs perform differently when handling questions with varying reasoning structures.
arXiv Detail & Related papers (2024-11-01T05:14:03Z) - Causality is all you need [63.10680366545293]
Causal Graph Routing (CGR) is an integrated causal scheme relying entirely on the intervention mechanisms to reveal the cause-effect forces hidden in data.
CGR can surpass the current state-of-the-art methods on both Visual Question Answer and Long Document Classification tasks.
arXiv Detail & Related papers (2023-11-21T02:53:40Z) - Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) which deals with context at both discourse level and word level, as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z) - Effect Identification in Cluster Causal Diagrams [51.42809552422494]
We introduce a new type of graphical model called cluster causal diagrams (for short, C-DAGs).
C-DAGs allow for the partial specification of relationships among variables based on limited prior knowledge.
We develop the foundations and machinery for valid causal inferences over C-DAGs.
arXiv Detail & Related papers (2022-02-22T21:27:31Z)
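As referenced in the ROG entry above, that blurb describes decomposing a multi-operator query into single-operator sub-queries and caching intermediate answer sets for reuse. The Python sketch below illustrates only that general decompose-and-cache pattern; the `decompose` rule, the `answer_sub_query` stub, and the use of `lru_cache` are assumptions for illustration, not details taken from the paper.

```python
from functools import lru_cache

def decompose(query):
    """Toy decomposition of a multi-operator query into single-operator sub-queries.
    A real system would use an LLM or a query parser; here we split on ' then '."""
    return [q.strip() for q in query.split(" then ")]

@lru_cache(maxsize=None)
def answer_sub_query(sub_query, context):
    """Stub for retrieval + LLM answering of one single-operator sub-query.
    lru_cache stands in for the cached intermediate answer sets the blurb mentions."""
    return f"answer({sub_query} | {context})"

def answer(query):
    context = ""
    for sub_query in decompose(query):
        context = answer_sub_query(sub_query, context)  # reuse cached results across steps
    return context

print(answer("find films directed by X then list their lead actors"))
```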
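The PruneRAG entry names an Evidence Forgetting Rate without giving its formula. Below is a minimal sketch of one plausible reading, assuming the metric is the fraction of questions whose gold evidence was retrieved yet were still answered incorrectly; treat the exact definition as an assumption.

```python
def evidence_forgetting_rate(records):
    """records: iterable of (gold_evidence_retrieved: bool, answer_correct: bool).
    One plausible reading of the metric: of the questions where gold evidence was
    retrieved, the fraction the system still failed to answer correctly."""
    retrieved = [r for r in records if r[0]]
    if not retrieved:
        return 0.0
    forgotten = sum(1 for _, correct in retrieved if not correct)
    return forgotten / len(retrieved)

# Example: gold evidence retrieved for 3 of 4 questions; 1 of those 3 answered wrong.
print(evidence_forgetting_rate([(True, True), (True, False), (True, True), (False, False)]))  # ~0.33
```

Under this reading, a high value signals that retrieval succeeded but the generator failed to use the evidence, which matches the failure mode the blurb describes.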
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.