ReasoningFlow: Semantic Structure of Complex Reasoning Traces
- URL: http://arxiv.org/abs/2506.02532v1
- Date: Tue, 03 Jun 2025 07:11:34 GMT
- Title: ReasoningFlow: Semantic Structure of Complex Reasoning Traces
- Authors: Jinu Lee, Sagnik Mukherjee, Dilek Hakkani-Tur, Julia Hockenmaier
- Abstract summary: ReasoningFlow parses traces into directed acyclic graphs, enabling the characterization of distinct reasoning patterns as subgraph structures. This human-interpretable representation offers promising applications in understanding, evaluating, and enhancing the reasoning processes of LRMs.
- Score: 9.328084104525834
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large reasoning models (LRMs) generate complex reasoning traces with planning, reflection, verification, and backtracking. In this work, we introduce ReasoningFlow, a unified schema for analyzing the semantic structures of these complex traces. ReasoningFlow parses traces into directed acyclic graphs, enabling the characterization of distinct reasoning patterns as subgraph structures. This human-interpretable representation offers promising applications in understanding, evaluating, and enhancing the reasoning processes of LRMs.
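To make the schema concrete, below is a minimal sketch of a ReasoningFlow-style trace graph using `networkx`. The node labels ("plan", "derive", "verify", "backtrack") and the motif-matching helper are illustrative assumptions, not the paper's exact taxonomy or tooling.

```python
# A minimal, illustrative trace: nodes are reasoning steps, edges point
# from a step to the later steps that depend on it.
import networkx as nx

trace = nx.DiGraph()
trace.add_nodes_from([
    (0, {"kind": "plan"}),       # "First, set up the equation ..."
    (1, {"kind": "derive"}),     # "So x = 4."
    (2, {"kind": "verify"}),     # "Check: 2 * 4 = 8. Correct."
    (3, {"kind": "backtrack"}),  # "Wait, I misread the problem ..."
    (4, {"kind": "derive"}),     # the revised derivation
])
trace.add_edges_from([(0, 1), (1, 2), (1, 3), (3, 4)])

# ReasoningFlow requires the parse to be a DAG.
assert nx.is_directed_acyclic_graph(trace)

def self_check_motifs(g: nx.DiGraph):
    """A reasoning pattern as a subgraph structure: every edge from a
    derivation step into a verification step is one 'self-check'."""
    return [(u, v) for u, v in g.edges
            if g.nodes[u]["kind"] == "derive"
            and g.nodes[v]["kind"] == "verify"]

print(self_check_motifs(trace))  # -> [(1, 2)]
```

Because patterns are just labeled subgraphs, richer motifs (e.g., a backtrack node that spawns a new derivation) can be matched the same way.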
Related papers
- ImpRIF: Stronger Implicit Reasoning Leads to Better Complex Instruction Following [9.844089277557048]
ImpRIF is a method to enhance LLMs' understanding of implicit reasoning instructions. We propose fine-tuning with graph reasoning, and apply reinforcement learning to explicitly train models to reason along the graph. Results show that enhancing implicit reasoning capabilities can significantly improve complex instruction following.
arXiv Detail & Related papers (2026-02-04T07:50:11Z) - Multi-Agent Procedural Graph Extraction with Structural and Logical Refinement [66.51979814832332]
The model formulates procedural graph extraction as a multi-round reasoning process with dedicated structural and logical refinement. Experiments demonstrate that the model achieves substantial improvements in both structural correctness and logical consistency over strong baselines.
arXiv Detail & Related papers (2026-01-27T04:00:48Z) - LSRIF: Logic-Structured Reinforcement Learning for Instruction Following [56.517329105764475]
We propose LSRIF, a logic-structured training framework that explicitly models instruction logic. Experiments show LSRIF brings significant improvements in instruction following and general reasoning.
arXiv Detail & Related papers (2026-01-10T05:11:38Z) - ReasonGraph: Visualisation of Reasoning Paths [28.906801344540458]
ReasonGraph is a web-based platform for visualizing and analyzing the reasoning processes of Large Language Models (LLMs). It supports both sequential and tree-based reasoning methods while integrating with major LLM providers and over fifty state-of-the-art models.
arXiv Detail & Related papers (2025-03-06T00:03:55Z) - GRS-QA -- Graph Reasoning-Structured Question Answering Dataset [50.223851616680754]
We introduce the Graph Reasoning-Structured Question Answering dataset (GRS-QA), which includes both semantic contexts and reasoning structures for QA pairs.
Unlike existing M-QA datasets, GRS-QA explicitly captures intricate reasoning pathways by constructing reasoning graphs.
Our empirical analysis reveals that LLMs perform differently when handling questions with varying reasoning structures.
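As an illustration of the structure GRS-QA makes explicit, the sketch below pairs a question with a reasoning graph over supporting sentences. The field names and the two-hop example are hypothetical, not drawn from the dataset.

```python
from dataclasses import dataclass

@dataclass
class ReasoningGraph:
    # Nodes are supporting sentences; edges say which facts feed into
    # which, so the graph's shape (chain, tree, ...) is itself the
    # question's reasoning structure.
    nodes: dict[int, str]
    edges: list[tuple[int, int]]

@dataclass
class GRSQAExample:
    question: str
    answer: str
    graph: ReasoningGraph

# Hypothetical two-hop (chain-shaped) example.
example = GRSQAExample(
    question="Which country is the director of Film X from?",
    answer="France",
    graph=ReasoningGraph(
        nodes={0: "Film X was directed by Y.", 1: "Y was born in France."},
        edges=[(0, 1)],
    ),
)
print(len(example.graph.edges))  # 1 edge -> a two-hop chain
```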
arXiv Detail & Related papers (2024-11-01T05:14:03Z) - On the Diagram of Thought [12.304069891580658]
Current large language models (LLMs) demonstrate impressive capabilities but struggle with complex, multi-step reasoning tasks. We introduce the Diagram of Thought (DoT) as a framework wherein a single auto-regressive LLM internally constructs and navigates a Directed Acyclic Graph (DAG). We formalize the reasoning DAG as a diagram within a suitable topos and prove that the final step, aggregating validated information, corresponds semantically to computing the colimit of the relevant sub-diagram.
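For orientation, the categorical claim can be sketched as follows, assuming the paper's setup of a diagram in a topos; the precise definitions and the proof are in the paper.

```latex
% Assumed setup: the reasoning DAG induces a diagram D : J -> E in a
% suitable topos E, where J is the shape category read off the DAG and
% each object D(j) is the proposition at node j.
\[
  \mathrm{Answer} \;\cong\; \operatorname*{colim}_{j \in J_{\mathrm{valid}}} D(j)
\]
% Aggregation over the validated sub-diagram J_valid of J is a colimit:
% the universal object gluing all validated partial conclusions.
```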
arXiv Detail & Related papers (2024-09-16T07:01:41Z) - Leveraging Structured Information for Explainable Multi-hop Question Answering and Reasoning [14.219239732584368]
In this work, we investigate constructing and leveraging extracted semantic structures (graphs) for multi-hop question answering.
Empirical results and human evaluations show that our framework generates more faithful reasoning chains and substantially improves the QA performance on two benchmark datasets.
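A minimal sketch of the idea, assuming (subject, relation, object) triples have already been extracted from the passages (the extraction model itself is omitted): the path through the graph doubles as the faithful reasoning chain.

```python
# Hypothetical extracted triples for a two-hop question.
triples = [
    ("Film X", "directed_by", "Y"),
    ("Y", "born_in", "France"),
]

# Adjacency list: subject -> [(relation, object), ...]
graph: dict[str, list[tuple[str, str]]] = {}
for s, r, o in triples:
    graph.setdefault(s, []).append((r, o))

def reasoning_chain(start: str, goal: str, hops=(), seen=frozenset()):
    """Depth-first search returning an explicit chain of triples, which
    doubles as a human-readable reasoning chain."""
    if start == goal:
        return list(hops)
    for rel, nxt in graph.get(start, []):
        if nxt not in seen:
            found = reasoning_chain(nxt, goal,
                                    hops + ((start, rel, nxt),),
                                    seen | {start})
            if found is not None:
                return found
    return None

print(reasoning_chain("Film X", "France"))
# [('Film X', 'directed_by', 'Y'), ('Y', 'born_in', 'France')]
```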
arXiv Detail & Related papers (2023-11-07T05:32:39Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
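The continuous relaxation can be sketched with a single hierarchy rule such as "dog implies animal", assuming a Reichenbach-style fuzzy implication; the paper's actual formulae and choice of t-norm may differ.

```python
import torch

def implication_loss(p_child: torch.Tensor, p_parent: torch.Tensor):
    """Reichenbach relaxation of A => B is 1 - a + a*b, so the
    violation a * (1 - b) becomes a differentiable penalty that can be
    added to the usual segmentation loss."""
    return (p_child * (1.0 - p_parent)).mean()

# Per-pixel class probabilities from any segmentation head (random
# stand-in here); channel 0 = "dog", channel 1 = "animal".
probs = torch.rand(4, 3, 8, 8, requires_grad=True)  # batch, classes, H, W
p_dog, p_animal = probs[:, 0], probs[:, 1]

loss = implication_loss(p_dog, p_animal)
loss.backward()  # gradients flow back into the network producing `probs`
```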
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) that handles context at both the discourse level and the word level, as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z) - Query Structure Modeling for Inductive Logical Reasoning Over Knowledge Graphs [67.043747188954]
We propose a structure-modeled textual encoding framework for inductive logical reasoning over KGs.
It encodes linearized query structures and entities using pre-trained language models to find answers.
We conduct experiments on two inductive logical reasoning datasets and three transductive datasets.
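A sketch of the linearization step, assuming a bracketed surface form for conjunctive queries; the paper's actual scheme and the downstream PLM encoding are omitted.

```python
def linearize(query) -> str:
    """Recursively flatten a nested query structure into a string that
    a pre-trained language model can encode as ordinary text."""
    if isinstance(query, str):          # an entity or relation name
        return query
    op, *args = query                   # e.g. ("AND", q1, q2)
    return f"( {op} " + " ".join(linearize(a) for a in args) + " )"

# Hypothetical conjunctive query: entities located_in France that also
# won the Nobel Prize, built from anchor entities via projections.
query = ("AND",
         ("PROJECT", "located_in", "France"),
         ("PROJECT", "won", "Nobel Prize"))

print(linearize(query))
# ( AND ( PROJECT located_in France ) ( PROJECT won Nobel Prize ) )
```

Because the structure is carried in the string itself, the same encoder transfers to unseen entities, which is what makes the setting inductive.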
arXiv Detail & Related papers (2023-05-23T01:25:29Z) - MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z) - Discourse-Aware Graph Networks for Textual Logical Reasoning [142.0097357999134]
Passage-level logical relations represent entailment or contradiction between propositional units (e.g., a concluding sentence). We propose logic structural-constraint modeling to solve logical reasoning QA and introduce discourse-aware graph networks (DAGNs).
The networks first construct logic graphs leveraging in-line discourse connectives and generic logic theories, then learn logic representations by end-to-end evolving the logic relations with an edge-reasoning mechanism and updating the graph features.
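A sketch of the graph-construction step, assuming a small illustrative inventory of connectives and edge labels; DAGNs' actual inventory, unit segmentation, and edge-reasoning mechanism are more elaborate.

```python
import re

# Hypothetical connective inventory mapping to coarse logic relations.
CONNECTIVES = {"because": "support", "therefore": "support",
               "but": "contrast", "however": "contrast"}
PATTERN = re.compile(r"\b(" + "|".join(CONNECTIVES) + r")\b", re.I)

def logic_graph(passage: str):
    """Split the passage at in-line connectives into elementary
    discourse units (nodes) and link adjacent units with the logic
    relation the connective signals (edges)."""
    parts = PATTERN.split(passage)
    units = parts[0::2]     # text spans between connectives
    links = parts[1::2]     # the connectives themselves
    nodes = [u.strip(" ,.") for u in units]
    edges = [(i, i + 1, CONNECTIVES[c.lower()])
             for i, c in enumerate(links)]
    return nodes, edges

nodes, edges = logic_graph(
    "It rained, therefore the game was cancelled, but fans stayed.")
print(nodes)  # ['It rained', 'the game was cancelled', 'fans stayed']
print(edges)  # [(0, 1, 'support'), (1, 2, 'contrast')]
```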
arXiv Detail & Related papers (2022-07-04T14:38:49Z)