Towards a Neurosymbolic Reasoning System Grounded in Schematic Representations
- URL: http://arxiv.org/abs/2509.03644v1
- Date: Wed, 03 Sep 2025 18:50:18 GMT
- Title: Towards a Neurosymbolic Reasoning System Grounded in Schematic Representations
- Authors: François Olivier, Zied Bouraoui
- Abstract summary: Large Language Models (LLMs) are error-prone when performing logical reasoning. We introduce a prototype neurosymbolic system, Embodied-LM, that grounds understanding and logical reasoning in schematic representations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite significant progress in natural language understanding, Large Language Models (LLMs) remain error-prone when performing logical reasoning, often lacking the robust mental representations that enable human-like comprehension. We introduce a prototype neurosymbolic system, Embodied-LM, that grounds understanding and logical reasoning in schematic representations based on image schemas-recurring patterns derived from sensorimotor experience that structure human cognition. Our system operationalizes the spatial foundations of these cognitive structures using declarative spatial reasoning within Answer Set Programming. Through evaluation on logical deduction problems, we demonstrate that LLMs can be guided to interpret scenarios through embodied cognitive structures, that these structures can be formalized as executable programs, and that the resulting representations support effective logical reasoning with enhanced interpretability. While our current implementation focuses on spatial primitives, it establishes the computational foundation for incorporating more complex and dynamic representations.
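The core idea of grounding deduction in spatial structure can be illustrated with a minimal sketch. This is not the Embodied-LM implementation (which uses declarative spatial reasoning within Answer Set Programming); it is a plain-Python stand-in showing how a hypothetical qualitative relation such as `left_of` can be closed under a transitivity rule so that unstated facts become derivable, in the spirit of an ASP rule like `left_of(A,C) :- left_of(A,B), left_of(B,C)`.

```python
# Hedged sketch, NOT the authors' system: naive fixed-point deduction
# over a hypothetical qualitative spatial relation left_of(a, b).

def deduce_left_of(facts):
    """Compute the transitive closure of left_of(a, b) facts.

    facts: set of (a, b) pairs meaning "a is left of b".
    Returns the set of all derivable left_of pairs.
    """
    derived = set(facts)
    changed = True
    while changed:  # iterate to a fixed point, like a Datalog/ASP rule
        changed = False
        for (a, b) in list(derived):
            for (c, d) in list(derived):
                # Rule: left_of(a, d) :- left_of(a, b), left_of(b, d)
                if b == c and (a, d) not in derived:
                    derived.add((a, d))
                    changed = True
    return derived

# A deduction-puzzle scenario grounded as spatial facts:
facts = {("book", "lamp"), ("lamp", "mug")}
closure = deduce_left_of(facts)
assert ("book", "mug") in closure  # deduced rather than stated
```

In the paper's setting, an LLM would propose such schematic facts from the problem text, and a symbolic solver would perform the closure and answer queries, which is what makes the resulting reasoning chain inspectable.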
Related papers
- Emergent Structured Representations Support Flexible In-Context Inference in Large Language Models [77.98801218316505]
Large language models (LLMs) exhibit emergent behaviors suggestive of human-like reasoning. We investigate the internal processing of LLMs during in-context concept inference.
arXiv Detail & Related papers (2026-02-08T03:14:39Z) - Hybrid Models for Natural Language Reasoning: The Case of Syllogistic Logic [3.421904493396495]
We investigate the logical generalization capabilities of pre-trained large language models (LLMs) using the syllogistic fragment as a benchmark. We propose a hybrid architecture integrating symbolic reasoning with neural computation. Our experiments show that high efficiency is preserved even with relatively small neural components.
arXiv Detail & Related papers (2025-10-10T15:27:29Z) - Fundamentals of Building Autonomous LLM Agents [64.39018305018904]
This paper reviews the architecture and implementation methods of agents powered by large language models (LLMs). The research aims to explore patterns for developing "agentic" LLMs that can automate complex tasks and bridge the performance gap with human capabilities.
arXiv Detail & Related papers (2025-10-10T10:32:39Z) - Embryology of a Language Model [1.1874560263468232]
In this work, we introduce an embryological approach, applying UMAP to the susceptibility matrix to visualize the model's structural development over training. Our visualizations reveal the emergence of a clear "body plan", charting the formation of known features like the induction circuit and uncovering previously unknown structures.
arXiv Detail & Related papers (2025-08-01T05:39:41Z) - On the Limits of Hierarchically Embedded Logic in Classical Neural Networks [0.0]
We show that each layer can encode at most one additional level of logical reasoning. We prove that a neural network of a given depth cannot faithfully represent predicates in a logic one order higher.
arXiv Detail & Related papers (2025-07-28T16:13:41Z) - Grounding Agent Reasoning in Image Schemas: A Neurosymbolic Approach to Embodied Cognition [12.269231280154482]
We propose a novel framework that bridges embodied cognition theory and agent systems. This makes it possible to create a neurosymbolic system that grounds the agent's understanding in fundamental conceptual structures.
arXiv Detail & Related papers (2025-03-31T14:01:39Z) - LogiDynamics: Unraveling the Dynamics of Logical Inference in Large Language Model Reasoning [74.0242521818214]
This paper adopts an exploratory approach by introducing a controlled evaluation environment for analogical reasoning. We analyze the comparative dynamics of inductive, abductive, and deductive inference pipelines. We investigate advanced paradigms such as hypothesis selection, verification, and refinement, revealing their potential to scale up logical inference.
arXiv Detail & Related papers (2025-02-16T15:54:53Z) - Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned through unsupervised training rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
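The continuous-relaxation step mentioned above can be sketched in a few lines. This is not the LOGICSEG code; it only illustrates the general fuzzy-logic idea of replacing Boolean connectives with differentiable surrogates (here the product t-norm and the Reichenbach relaxation of implication), so that a grounded logical formula over soft network outputs yields a training penalty.

```python
# Hedged sketch of fuzzy-logic continuous relaxation (illustrative only).

def fuzzy_and(a, b):
    """Product t-norm: soft truth value of (a AND b) for a, b in [0, 1]."""
    return a * b

def fuzzy_implies(a, b):
    """Reichenbach relaxation of (a -> b): 1 - a + a*b."""
    return 1.0 - a + a * b

# Hypothetical rule grounded on a segmentation model's soft scores:
# "if a pixel is 'car' then it is 'vehicle'".
p_car, p_vehicle = 0.9, 0.95
truth = fuzzy_implies(p_car, p_vehicle)
loss = 1.0 - truth  # penalize violations of the grounded formula
```

Because each connective is differentiable, such a loss can be added to a network's training objective, which is what makes logic-induced training possible without discrete inference at train time.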
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - Modeling Hierarchical Reasoning Chains by Linking Discourse Units and
Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) which deals with context at both discourse level and word level, as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z) - LogiGAN: Learning Logical Reasoning via Adversarial Pre-training [58.11043285534766]
We present LogiGAN, an unsupervised adversarial pre-training framework for improving logical reasoning abilities of language models.
Inspired by the facilitation effect of reflective thinking in human learning, we simulate the learning-thinking process with an adversarial Generator-Verifier architecture.
Both base- and large-size language models pre-trained with LogiGAN demonstrate clear performance improvements on 12 datasets.
arXiv Detail & Related papers (2022-05-18T08:46:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.