The Geometry of Reasoning: Flowing Logics in Representation Space
- URL: http://arxiv.org/abs/2510.09782v1
- Date: Fri, 10 Oct 2025 18:44:00 GMT
- Title: The Geometry of Reasoning: Flowing Logics in Representation Space
- Authors: Yufa Zhou, Yixiao Wang, Xunjian Yin, Shuyan Zhou, Anru R. Zhang
- Abstract summary: We study how large language models (LLMs) "think" through their representation space. We propose a novel geometric framework that models an LLM's reasoning as flows.
- Score: 27.047532187192278
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study how large language models (LLMs) "think" through their representation space. We propose a novel geometric framework that models an LLM's reasoning as flows: embedding trajectories evolving where logic goes. We disentangle logical structure from semantics by employing the same natural deduction propositions with varied semantic carriers, allowing us to test whether LLMs internalize logic beyond surface form. This perspective connects reasoning with geometric quantities such as position, velocity, and curvature, enabling formal analysis in representation and concept spaces. Our theory establishes: (1) LLM reasoning corresponds to smooth flows in representation space, and (2) logical statements act as local controllers of these flows' velocities. Using learned representation proxies, we design controlled experiments to visualize and quantify reasoning flows, providing empirical validation of our theoretical framework. Our work serves as both a conceptual foundation and a practical toolkit for studying reasoning phenomena, offering a new lens for interpretability and formal analysis of LLMs' behavior.
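As a strictly illustrative rendering of the geometric quantities the abstract mentions, the sketch below computes finite-difference velocity, speed, and curvature along a trajectory of representation vectors. The function name, the discretization, and the toy trajectory are our own assumptions, not the authors' implementation.

```python
import numpy as np

def flow_quantities(H: np.ndarray):
    """Finite-difference geometry of a reasoning trajectory.

    H: (T, d) array of per-step representations (e.g., hidden states
    along a deduction, projected into a concept space). Returns the
    velocity vectors, per-step speeds, and a curvature estimate
    kappa = ||a_perp|| / ||v||^2, valid in any dimension d.
    """
    v = np.diff(H, axis=0)                 # velocities, shape (T-1, d)
    a = np.diff(v, axis=0)                 # accelerations, shape (T-2, d)
    speed = np.linalg.norm(v, axis=1)      # ||v|| at each step

    v_hat = v[:-1] / (speed[:-1, None] + 1e-9)
    # Remove the tangential component of acceleration; what remains
    # (a_perp) measures how sharply the flow is turning.
    a_perp = a - (a * v_hat).sum(axis=1, keepdims=True) * v_hat
    kappa = np.linalg.norm(a_perp, axis=1) / (speed[:-1] ** 2 + 1e-9)
    return v, speed, kappa

# Toy check: a planar path that turns sharply halfway through should
# produce a curvature spike near the turning point.
t = np.linspace(0.0, 1.0, 50)
H = np.stack([t, np.where(t < 0.5, t, 1.0 - t)], axis=1)
_, speed, kappa = flow_quantities(H)
print(int(kappa.argmax()))  # index near the mid-trajectory turn
```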
Related papers
- The Lattice Representation Hypothesis of Large Language Models [18.00499182102749]
We show that linear attribute directions with separating thresholds induce a concept lattice via half-space intersections. This geometry enables symbolic reasoning through geometric meet (intersection) and join (union) operations; a toy rendering of these operations appears after this entry.
arXiv Detail & Related papers (2026-03-01T18:42:59Z)
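A toy rendering of the half-space reading above; the attribute directions ("animate", "can_fly"), thresholds, and example concepts are our illustration, not the paper's learned geometry.

```python
import numpy as np

# A concept is an intersection of half-spaces {x : w @ x >= b}, each
# given by a (hypothetical) attribute direction w and threshold b.
directions = {
    "animate": (np.array([1.0, 0.0]), 0.5),
    "can_fly": (np.array([0.0, 1.0]), 0.5),
}

def member(x, concept):
    """x lies in a concept iff it satisfies every half-space constraint."""
    return all(float(x @ w) >= b for w, b in (directions[a] for a in concept))

def meet(c1, c2):
    """Geometric meet: intersecting regions unions the constraint sets."""
    return c1 | c2

def join(c1, c2):
    """Geometric join: the smallest covering concept keeps shared constraints."""
    return c1 & c2

bird, mammal = frozenset({"animate", "can_fly"}), frozenset({"animate"})
x = np.array([0.9, 0.8])                # satisfies both attributes
print(member(x, meet(bird, mammal)))    # True: x is in the intersection
print(join(bird, mammal) == mammal)     # True: join generalizes to 'animate'
```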
- Emergent Structured Representations Support Flexible In-Context Inference in Large Language Models [77.98801218316505]
Large language models (LLMs) exhibit emergent behaviors suggestive of human-like reasoning. We investigate the internal processing of LLMs during in-context concept inference.
arXiv Detail & Related papers (2026-02-08T03:14:39Z)
- A Syllogistic Probe: Tracing the Evolution of Logic Reasoning in Large Language Models [17.118221176971982]
We explore whether large language models (LLMs) exhibit a similar evolution in the underlying logical framework. Using existential import as a probe, we evaluate syllogisms under traditional and modern logic; a toy contrast between the two readings appears after this entry.
arXiv Detail & Related papers (2026-01-24T11:51:52Z)
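The existential-import probe can be made concrete with a small set-theoretic toy; the predicates and domain below are our illustration, not the paper's stimuli.

```python
# "All A are B" entails "Some A are B" only under traditional
# (Aristotelian) logic, which presupposes A is non-empty; modern
# first-order logic drops that assumption, so the entailment fails
# whenever A is empty.
def all_are(A, B):
    """Modern reading: vacuously true when A is empty."""
    return A <= B

def all_are_traditional(A, B):
    """Traditional reading: carries existential import (A non-empty)."""
    return bool(A) and A <= B

def some_are(A, B):
    return bool(A & B)

unicorns, horned = set(), {"rhino", "narwhal"}   # empty subject term
# Modern logic: universal is vacuously true, particular is false,
# so the inference from "All" to "Some" does NOT hold.
print(all_are(unicorns, horned), some_are(unicorns, horned))  # True False
# Traditional logic: the universal already fails for an empty subject,
# so whenever it is true, the particular follows.
print(all_are_traditional(unicorns, horned))                  # False
```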
- Implicit Reasoning in Large Language Models: A Comprehensive Survey [67.53966514728383]
Large Language Models (LLMs) have demonstrated strong generalization across a wide range of tasks. Recent studies have shifted attention from explicit chain-of-thought prompting toward implicit reasoning. This survey introduces a taxonomy centered on execution paradigms, shifting the focus from representational forms to computational strategies.
arXiv Detail & Related papers (2025-09-02T14:16:02Z)
- Sound and Complete Neurosymbolic Reasoning with LLM-Grounded Interpretations [7.81820080453498]
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation. We present a method for directly integrating an LLM into the interpretation function of the formal semantics for a paraconsistent logic.
arXiv Detail & Related papers (2025-07-13T19:05:43Z)
- LLMs model how humans induce logically structured rules [18.202446211409548]
A long-standing debate concerns the adequacy of artificial neural networks as computational models. Recent advances in neural networks -- specifically, the advent of large language models (LLMs) -- represent an important shift in this debate. We argue that LLMs may instantiate a novel theoretical account of the primitive representations and computations necessary to explain human logical concepts.
arXiv Detail & Related papers (2025-07-05T03:24:18Z)
- How do Transformers Learn Implicit Reasoning? [67.02072851088637]
We study how implicit multi-hop reasoning emerges by training transformers from scratch in a controlled symbolic environment. We find that training with atomic triples is not necessary but accelerates learning, and that second-hop generalization relies on query-level exposure to specific compositional structures.
arXiv Detail & Related papers (2025-05-29T17:02:49Z)
- LogiDynamics: Unraveling the Dynamics of Inductive, Abductive and Deductive Logical Inferences in LLM Reasoning [74.0242521818214]
This paper systematically investigates the comparative dynamics of inductive (System 1) versus abductive/deductive (System 2) inference in large language models (LLMs). We utilize a controlled analogical reasoning environment, varying modality (textual, visual, symbolic), difficulty, and task format (MCQ / free-text). Our analysis reveals that System 2 pipelines generally excel, particularly in visual/symbolic modalities and harder tasks, while System 1 is competitive for textual and easier problems.
arXiv Detail & Related papers (2025-02-16T15:54:53Z)
- Toward Conceptual Modeling for Propositional Logic: Propositions as Events [0.0]
This paper reflects on applying the language of propositional logic to a high-level diagrammatic representation called the thinging machines (TM) model.
The ultimate research objective is a quest for a thorough semantic alignment of TM modeling and propositional logic into a single structure.
arXiv Detail & Related papers (2024-09-24T03:45:24Z)
- Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and a "chain-of-thought" knowledge distillation fine-tuning technique to assess model performance.
arXiv Detail & Related papers (2023-10-02T01:00:50Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, enabling logic-induced network training; a minimal sketch of such a relaxation appears after this entry.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
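A minimal sketch of what a fuzzy continuous relaxation can look like, using the product t-norm; the rule, tensor names, and loss form are assumptions for illustration, not LOGICSEG's actual grounding procedure.

```python
import torch

# Product t-norm relaxation: each connective becomes a differentiable
# operation on truth values in [0, 1], so a logical rule turns into a
# loss term on network outputs.
def f_and(a, b): return a * b
def f_or(a, b):  return a + b - a * b
def f_not(a):    return 1.0 - a
def f_implies(a, b): return f_or(f_not(a), b)

# Hypothetical example rule on per-pixel class scores: cat(x) -> animal(x).
logits_cat = torch.randn(4, requires_grad=True)
logits_animal = torch.randn(4, requires_grad=True)
p_cat, p_animal = torch.sigmoid(logits_cat), torch.sigmoid(logits_animal)

truth = f_implies(p_cat, p_animal)   # in [0, 1]; 1 means rule satisfied
logic_loss = (1.0 - truth).mean()    # penalize rule violations
logic_loss.backward()                # gradients reach the (toy) logits
print(logits_cat.grad.shape)         # torch.Size([4])
```

Because every connective is differentiable, rule violations produce gradients that can train the underlying network alongside the usual supervised loss.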
- Exploring Self-supervised Logic-enhanced Training for Large Language Models [59.227222647741094]
In this paper, we make the first attempt to investigate the feasibility of incorporating logical knowledge through self-supervised post-training.
We devise an auto-regressive objective variant of MERIt and integrate it with two LLM series, i.e., FLAN-T5 and LLaMA, with parameter sizes ranging from 3 billion to 13 billion; a generic sketch of an auto-regressive objective appears after this entry.
The results on two challenging logical reasoning benchmarks demonstrate the effectiveness of LogicLLM.
arXiv Detail & Related papers (2023-05-23T06:13:10Z)
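For orientation, here is the generic auto-regressive piece of such a post-training objective: plain next-token cross-entropy over logic-structured passages. The TinyLM stand-in and all names are hypothetical, and MERIt's contrastive pairing of reasoning paths is deliberately not reproduced here.

```python
import torch
import torch.nn.functional as F

class TinyLM(torch.nn.Module):
    """Toy stand-in for FLAN-T5/LLaMA; only the objective matters here."""
    def __init__(self, vocab=100, d=32):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, d)
        self.out = torch.nn.Linear(d, vocab)

    def forward(self, ids):                 # ids: (B, T)
        return self.out(self.emb(ids))      # logits: (B, T, vocab)

def causal_lm_logic_loss(model, token_ids):
    """Next-token cross-entropy over logic-structured passages: a
    generic auto-regressive objective, not MERIt's exact variant."""
    logits = model(token_ids)
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),  # predict token t+1
        token_ids[:, 1:].reshape(-1),
    )

loss = causal_lm_logic_loss(TinyLM(), torch.randint(0, 100, (2, 16)))
loss.backward()
```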