From Chains to Graphs: Self-Structured Reasoning for General-Domain LLMs
- URL: http://arxiv.org/abs/2601.03597v1
- Date: Wed, 07 Jan 2026 05:27:41 GMT
- Title: From Chains to Graphs: Self-Structured Reasoning for General-Domain LLMs
- Authors: Yingjian Chen, Haoran Liu, Yinhong Liu, Sherry T. Tong, Aosong Feng, Jinghui Lu, Juntao Zhang, Yusuke Iwasawa, Yutaka Matsuo, Irene Li
- Abstract summary: Real-world reasoning requires integrating multiple premises and solving subproblems in parallel. Recent approaches rely on externally provided graphs and do not explore how LLMs can construct and use their own graph-structured reasoning. We propose Self-Graph Reasoning (SGR), a framework that enables LLMs to explicitly represent their reasoning process as a structured graph before producing the final answer.
- Score: 52.0811904328273
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) show strong reasoning ability in open-domain question answering, yet their reasoning processes are typically linear and often logically inconsistent. In contrast, real-world reasoning requires integrating multiple premises and solving subproblems in parallel. Existing methods, such as Chain-of-Thought (CoT), express reasoning in a linear textual form, which may appear coherent but frequently leads to inconsistent conclusions. Recent approaches rely on externally provided graphs and do not explore how LLMs can construct and use their own graph-structured reasoning, particularly in open-domain QA. To fill this gap, we explore graph-structured reasoning of LLMs in general-domain question answering. We propose Self-Graph Reasoning (SGR), a framework that enables LLMs to explicitly represent their reasoning process as a structured graph before producing the final answer. We further construct a graph-structured reasoning dataset that merges multiple candidate reasoning graphs into refined graph structures for model training. Experiments on five QA benchmarks across both general and specialized domains show that SGR consistently improves reasoning consistency and yields a 17.74% gain over the base model. The LLaMA-3.3-70B model fine-tuned with SGR performs comparably to GPT-4o and surpasses Claude-3.5-Haiku, demonstrating the effectiveness of graph-structured reasoning.
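For intuition only, the following is a minimal sketch of what "emit a reasoning graph, then answer" could look like at inference time. The `call_llm` helper and the JSON graph schema are illustrative assumptions, not the paper's actual interface:

```python
# Hypothetical sketch of SGR-style inference (not the authors' code).
import json

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion client."""
    raise NotImplementedError

def self_graph_reason(question: str) -> str:
    # Step 1: ask the model to emit its reasoning as an explicit graph,
    # where nodes are premises/subproblems and edges are dependencies.
    graph_prompt = (
        "Before answering, output a JSON reasoning graph with "
        '"nodes": [{"id": ..., "statement": ...}] and "edges": [[src, dst]] '
        f"for the question:\n{question}"
    )
    graph = json.loads(call_llm(graph_prompt))

    # Step 2: condition the final answer on the structured graph
    # rather than on a free-form chain of thought.
    answer_prompt = (
        f"Question: {question}\n"
        f"Reasoning graph: {json.dumps(graph)}\n"
        "Using only conclusions supported by the graph, give the final answer."
    )
    return call_llm(answer_prompt)
```

The point of the intermediate graph is that the final answer is conditioned on explicit premise dependencies instead of a single linear trace.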
Related papers
- From Chains to DAGs: Probing the Graph Structure of Reasoning in LLMs [14.863369207533966]
We introduce Reasoning DAG Probing, a framework that asks whether reasoning DAG geometry is encoded in model internals. We use these probes to analyze the layerwise emergence of DAG structure and evaluate controls that disrupt reasoning-relevant structure. Our results provide evidence that reasoning DAG geometry is meaningfully encoded in intermediate layers, with recoverability varying systematically by node depth and model scale.
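As a rough illustration of layerwise probing (not the paper's implementation), one could fit a simple classifier on pairs of per-step hidden states to predict whether an edge exists in the reasoning DAG; all names below are assumptions:

```python
# Illustrative DAG-structure probe (hypothetical; assumes hidden states
# per reasoning step have already been extracted at a given layer).
import numpy as np
from sklearn.linear_model import LogisticRegression

def probe_layer(hidden_at_layer, dag_edges, node_ids):
    """Fit a probe predicting 'edge vs. no edge' from concatenated states."""
    X, y = [], []
    for u in node_ids:
        for v in node_ids:
            if u == v:
                continue
            X.append(np.concatenate([hidden_at_layer[u], hidden_at_layer[v]]))
            y.append(int((u, v) in dag_edges))
    clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
    return clf.score(np.array(X), np.array(y))  # use held-out pairs in practice
```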
arXiv Detail & Related papers (2026-01-24T21:11:51Z)
- G-reasoner: Foundation Models for Unified Reasoning over Graph-structured Knowledge [88.82814893945077]
Large language models (LLMs) excel at complex reasoning but remain limited by static and incomplete parametric knowledge. Recent graph-enhanced RAG (GraphRAG) attempts to bridge this gap by constructing tailored graphs and enabling LLMs to reason on them. G-reasoner is a unified framework that integrates graph and language foundation models for reasoning over diverse graph-structured knowledge.
arXiv Detail & Related papers (2025-09-29T04:38:12Z)
- Rethinking and Benchmarking Large Language Models for Graph Reasoning [36.30471027175558]
Large Language Models (LLMs) for Graph Reasoning have been extensively studied over the past two years. Recent studies underscore the potential of LLMs in handling graph reasoning tasks, but their performance is underwhelming.
arXiv Detail & Related papers (2025-09-29T04:10:12Z)
- GRIL: Knowledge Graph Retrieval-Integrated Learning with Large Language Models [59.72897499248909]
We propose a novel graph retriever trained end-to-end with Large Language Models (LLMs). Within the extracted subgraph, structural knowledge and semantic features are encoded via soft tokens and the verbalized graph, respectively, which are infused into the LLM together. Our approach consistently achieves state-of-the-art performance, validating the strength of joint graph-LLM optimization for complex reasoning tasks.
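A rough sketch of the "soft tokens plus verbalized graph" idea: project pooled subgraph features into the LLM's embedding space and prepend them to the text embeddings. The module below is illustrative, not GRIL's architecture:

```python
# Illustrative structural-semantic fusion (hypothetical module, not GRIL's code).
import torch
import torch.nn as nn

class SoftGraphTokens(nn.Module):
    def __init__(self, node_dim: int, llm_dim: int, n_tokens: int = 8):
        super().__init__()
        # Projects pooled subgraph features into `n_tokens` soft tokens
        # living in the LLM's input-embedding space.
        self.proj = nn.Linear(node_dim, llm_dim * n_tokens)
        self.n_tokens, self.llm_dim = n_tokens, llm_dim

    def forward(self, node_embs: torch.Tensor) -> torch.Tensor:
        pooled = node_embs.mean(dim=0)   # (node_dim,) pooled subgraph feature
        soft = self.proj(pooled)         # (llm_dim * n_tokens,)
        return soft.view(self.n_tokens, self.llm_dim)

# Usage sketch: prepend soft tokens to the embeddings of the verbalized
# subgraph and question, e.g.
# inputs_embeds = torch.cat([soft_tokens, text_token_embeddings], dim=0)
```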
arXiv Detail & Related papers (2025-09-20T02:38:00Z)
- You Don't Need Pre-built Graphs for RAG: Retrieval Augmented Generation with Adaptive Reasoning Structures [16.867592142212203]
Large language models (LLMs) often suffer from hallucination, generating factually incorrect statements when handling questions beyond their knowledge. Retrieval-augmented generation (RAG) addresses this by retrieving query-relevant contexts from knowledge bases to support LLM reasoning. Existing graph-based RAG methods rely on a costly process to transform the corpus into a graph, introducing overwhelming token cost and update latency. We propose LogicRAG, which dynamically extracts reasoning structures at inference time to guide adaptive retrieval without any pre-built graph.
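For intuition, here is a minimal sketch of retrieval driven by an inference-time reasoning structure. The `decompose`, `retrieve`, and `answer` helpers are hypothetical stand-ins, not LogicRAG's API:

```python
# Illustrative LogicRAG-style loop (helper names and contracts are assumptions).
from graphlib import TopologicalSorter

def logic_rag(question, decompose, retrieve, answer):
    # decompose(question) -> {subproblem_id: (text, [ids it depends on])}
    subproblems = decompose(question)
    order = TopologicalSorter(
        {sid: deps for sid, (_, deps) in subproblems.items()}
    ).static_order()

    solved = {}
    for sid in order:
        text, deps = subproblems[sid]
        # Retrieve contexts per subproblem instead of from a pre-built graph.
        ctx = retrieve(text)
        solved[sid] = answer(text, ctx, {d: solved[d] for d in deps})
    return solved  # the final subproblem's answer resolves the question
```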
arXiv Detail & Related papers (2025-08-08T08:07:40Z)
- KisMATH: Do LLMs Have Knowledge of Implicit Structures in Mathematical Reasoning? [4.473915603131591]
Chain-of-thought traces have been shown to improve the performance of large language models across a wide range of reasoning tasks. We introduce Causal CoT Graphs (CCGs), which are directed acyclic graphs automatically extracted from reasoning traces.
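As a toy illustration of extracting a DAG from a chain-of-thought trace, one heuristic is to link each step to earlier steps whose quantities it reuses. This is a stand-in for the paper's automatic extraction, not its method:

```python
# Toy CCG-style extraction (heuristic sketch; not the paper's method).
import re
import networkx as nx

def trace_to_dag(steps):
    """steps: list of reasoning-step strings, in chain-of-thought order."""
    dag = nx.DiGraph()
    for i, step in enumerate(steps):
        dag.add_node(i, text=step)
        # Heuristic: add an edge j -> i if step i reuses a number
        # that appeared in an earlier step j.
        values_i = set(re.findall(r"\d+", step))
        for j in range(i):
            if values_i & set(re.findall(r"\d+", steps[j])):
                dag.add_edge(j, i)
    # Edges only point forward in the trace, so the result is acyclic.
    assert nx.is_directed_acyclic_graph(dag)
    return dag
```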
arXiv Detail & Related papers (2025-07-15T15:28:37Z)
- GraphRAG-Bench: Challenging Domain-Specific Reasoning for Evaluating Graph Retrieval-Augmented Generation [26.654064783342545]
Graph Retrieval-Augmented Generation (GraphRAG) has garnered increasing recognition for its potential to enhance large language models (LLMs). Current evaluations of GraphRAG models predominantly rely on traditional question-answering datasets. We introduce GraphRAG-Bench, a large-scale, domain-specific benchmark designed to rigorously evaluate GraphRAG models.
arXiv Detail & Related papers (2025-06-03T03:44:26Z)
- Reasoning with Graphs: Structuring Implicit Knowledge to Enhance LLMs Reasoning [73.2950349728376]
Large language models (LLMs) have demonstrated remarkable success across a wide range of tasks. However, they still encounter challenges in reasoning tasks that require understanding and inferring relationships between pieces of information. This challenge is particularly pronounced in tasks involving multi-step processes, such as logical reasoning and multi-hop question answering. We propose Reasoning with Graphs (RwG), which first constructs explicit graphs from the context.
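A hedged sketch of the "build an explicit graph from the context, then reason over it" pattern follows; the `extract_triples` and `llm` callables are assumptions for illustration:

```python
# Illustrative RwG-style pipeline (helper names are assumptions).
import networkx as nx

def reason_with_graph(context: str, question: str, llm, extract_triples):
    # Build an explicit graph of (head, relation, tail) facts from the context.
    g = nx.MultiDiGraph()
    for head, rel, tail in extract_triples(context):
        g.add_edge(head, tail, relation=rel)

    # Verbalize the graph so the model reasons over structure,
    # which tends to help multi-hop questions more than raw text alone.
    edges = "\n".join(f"{u} --{d['relation']}--> {v}"
                      for u, v, d in g.edges(data=True))
    return llm(f"Facts:\n{edges}\n\nQuestion: {question}")
```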
arXiv Detail & Related papers (2025-01-14T05:18:20Z)
- GRS-QA -- Graph Reasoning-Structured Question Answering Dataset [50.223851616680754]
We introduce the Graph Reasoning-Structured Question Answering dataset (GRS-QA), which includes both semantic contexts and reasoning structures for QA pairs.
Unlike existing M-QA datasets, GRS-QA explicitly captures intricate reasoning pathways by constructing reasoning graphs.
Our empirical analysis reveals that LLMs perform differently when handling questions with varying reasoning structures.
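For concreteness, one plausible shape of a record pairing a QA item with its reasoning graph is sketched below; the field names and the toy example are illustrative, not the dataset's actual schema:

```python
# Illustrative record shape (field names are assumptions, not GRS-QA's schema).
record = {
    "question": "Which river flows through the birthplace of author X?",
    "answer": "The Danube",
    "contexts": [
        "Author X was born in Vienna.",
        "The Danube flows through Vienna.",
    ],
    "reasoning_graph": {
        "nodes": ["birthplace(X)", "river_through(birthplace)"],
        "edges": [[0, 1]],  # node 1 depends on node 0: a two-hop chain
    },
}
```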
arXiv Detail & Related papers (2024-11-01T05:14:03Z)
- Can Graph Descriptive Order Affect Solving Graph Problems with LLMs? [55.5662721046769]
Large language models (LLMs) have achieved significant success in reasoning tasks, including mathematical reasoning and logical deduction. Previous studies have explored LLMs' graph reasoning abilities through various techniques. A critical factor, however, has been mostly overlooked: the sequential order in which graph descriptions are presented to the models in the prompt.
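To make the variable concrete, a small sketch that renders the same graph under two description orders, which is the kind of prompt perturbation the paper studies; the prompt wording is an assumption:

```python
# Illustrative prompt-order perturbation (wording is an assumption).
import random

edges = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]

def describe(edge_list):
    lines = [f"Node {u} is connected to node {v}." for u, v in edge_list]
    return "Graph:\n" + "\n".join(lines) + "\nIs there a path from A to D?"

prompt_sequential = describe(edges)  # edges in their given order
prompt_shuffled = describe(random.sample(edges, len(edges)))  # same graph, reordered
# The two prompts encode identical graphs; only the descriptive order differs.
```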
arXiv Detail & Related papers (2024-02-11T09:46:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.