Grounding Agent Reasoning in Image Schemas: A Neurosymbolic Approach to Embodied Cognition
- URL: http://arxiv.org/abs/2503.24110v1
- Date: Mon, 31 Mar 2025 14:01:39 GMT
- Title: Grounding Agent Reasoning in Image Schemas: A Neurosymbolic Approach to Embodied Cognition
- Authors: François Olivier, Zied Bouraoui
- Abstract summary: We propose a novel framework that bridges embodied cognition theory and agent systems, creating a neurosymbolic system that grounds the agent's understanding in fundamental conceptual structures.
- Score: 12.269231280154482
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite advances in embodied AI, agent reasoning systems still struggle to capture the fundamental conceptual structures that humans naturally use to understand and interact with their environment. To address this, we propose a novel framework that bridges embodied cognition theory and agent systems by leveraging a formal characterization of image schemas, which are defined as recurring patterns of sensorimotor experience that structure human cognition. By customizing LLMs to translate natural language descriptions into formal representations based on these sensorimotor patterns, we will be able to create a neurosymbolic system that grounds the agent's understanding in fundamental conceptual structures. We argue that such an approach enhances both efficiency and interpretability while enabling more intuitive human-agent interactions through shared embodied understanding.
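As a rough illustration of the pipeline sketched in the abstract (not the authors' implementation), the following Python sketch stands in for the customized LLM with a simple keyword heuristic and maps a natural-language description onto hypothetical formal structures for the CONTAINMENT and SOURCE_PATH_GOAL image schemas; the class and function names are illustrative assumptions.

```python
# Minimal sketch of a neurosymbolic translation step: natural language in,
# formal image-schema assertion out. The `translate` heuristic below is a
# self-contained placeholder for the customized LLM described in the abstract.
from dataclasses import dataclass


@dataclass(frozen=True)
class Containment:
    """CONTAINMENT image schema: `content` is located inside `container`."""
    content: str
    container: str


@dataclass(frozen=True)
class SourcePathGoal:
    """SOURCE_PATH_GOAL image schema: `mover` travels from `source` to `goal`."""
    mover: str
    source: str
    goal: str


def translate(description: str):
    """Stand-in for the learned translator; a keyword heuristic keeps the
    example runnable without a model."""
    tokens = description.lower().split()
    if "into" in tokens:
        i = tokens.index("into")
        return Containment(content=tokens[i - 1], container=tokens[-1])
    if "from" in tokens and "to" in tokens:
        return SourcePathGoal(
            mover=tokens[1],
            source=tokens[tokens.index("from") + 1],
            goal=tokens[tokens.index("to") + 1],
        )
    return None


if __name__ == "__main__":
    print(translate("put the ball into the box"))                # Containment(content='ball', container='box')
    print(translate("the robot moves from kitchen to hallway"))  # SourcePathGoal(mover='robot', ...)
```

In the proposed system, the heuristic stub would be replaced by the customized LLM, and the returned schema structures would feed the symbolic reasoning layer.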
Related papers
- Elucidation of the Concept of Consciousness from the Theory of Non-Human Communication Agents [0.0]
This article focuses on elucidating the concept of consciousness from a relational and post-phenomenological theory of non-human communication agents (ANHC).
Building on interactions with non-human cognitive agents, among other factors, the explainability of sociotechnical systems challenges the common sense of modern philosophy and science.
The aim is to contribute to a necessary discussion for designing new frameworks of understanding that pave the way toward an ethical and pragmatic approach to addressing contemporary challenges in the design, regulation, and interaction with ANHC.
arXiv Detail & Related papers (2025-02-05T13:58:23Z) - Human-like conceptual representations emerge from language prediction [72.5875173689788]
Large language models (LLMs) trained exclusively through next-token prediction over language data exhibit remarkably human-like behaviors.
Are these models developing concepts akin to humans, and if so, how are such concepts represented and organized?
Our results demonstrate that LLMs can flexibly derive concepts from linguistic descriptions in relation to contextual cues about other concepts.
These findings establish that structured, human-like conceptual representations can naturally emerge from language prediction without real-world grounding.
arXiv Detail & Related papers (2025-01-21T23:54:17Z) - Neurosymbolic Graph Enrichment for Grounded World Models [47.92947508449361]
We present a novel approach to enhance and exploit LLM reactive capability to address complex problems.
We create a multimodal, knowledge-augmented formal representation of meaning that combines the strengths of large language models with structured semantic representations.
By bridging the gap between unstructured language models and formal semantic structures, our method opens new avenues for tackling intricate problems in natural language understanding and reasoning.
arXiv Detail & Related papers (2024-11-19T17:23:55Z) - A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in terms of Hebbian learning and free energy minimization.
arXiv Detail & Related papers (2023-10-14T23:28:48Z) - Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to capture cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned in an unsupervised manner rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
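As a toy illustration of the basin-to-symbol readout described above (with made-up attractor centers, not the paper's actual model), the sketch below assigns each continuous state to its nearest attractor and reads a trajectory out as a symbolic sequence.

```python
import numpy as np

# Illustrative only: discretize a continuous representational space by
# assigning each state to its nearest attractor, so a trajectory of
# continuous states reads out as a sequence of symbols.
attractors = {                      # hypothetical attractor centers and their symbols
    "A": np.array([1.0, 0.0]),
    "B": np.array([0.0, 1.0]),
    "C": np.array([-1.0, 0.0]),
}

def to_symbol(state: np.ndarray) -> str:
    """Return the symbol of the attractor basin the state falls into."""
    return min(attractors, key=lambda s: np.linalg.norm(state - attractors[s]))

trajectory = [np.array([0.9, 0.2]), np.array([0.4, 0.8]), np.array([-0.7, 0.1])]
print("".join(to_symbol(x) for x in trajectory))  # prints "ABC"
```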
arXiv Detail & Related papers (2023-10-03T05:40:56Z) - Kernel Based Cognitive Architecture for Autonomous Agents [91.3755431537592]
This paper considers an evolutionary approach to creating cognitive functionality.
We consider a cognitive architecture that ensures the evolution of the agent on the basis of a solution to the Symbol Emergence Problem.
arXiv Detail & Related papers (2022-07-02T12:41:32Z) - Towards a Predictive Processing Implementation of the Common Model of Cognition [79.63867412771461]
We describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory.
The proposed system lays the groundwork for developing agents that learn continually from diverse tasks as well as model human performance at larger scales.
arXiv Detail & Related papers (2021-05-15T22:55:23Z) - Deep Interpretable Models of Theory of Mind For Human-Agent Teaming [0.7734726150561086]
We develop an interpretable modular neural framework for modeling the intentions of other observed entities.
We demonstrate the efficacy of our approach with experiments on data from human participants on a search and rescue task in Minecraft.
arXiv Detail & Related papers (2021-04-07T06:18:58Z) - On the Binding Problem in Artificial Neural Networks [12.04468744445707]
We argue that the underlying cause of this shortcoming is the inability of current neural networks to dynamically and flexibly bind information.
We propose a unifying framework that revolves around forming meaningful entities from unstructured sensory inputs.
We believe that a compositional approach to AI, in terms of grounded symbol-like representations, is of fundamental importance for realizing human-level generalization.
arXiv Detail & Related papers (2020-12-09T18:02:49Z) - The Evolution of Concept-Acquisition based on Developmental Psychology [4.416484585765028]
A conceptual system with rich connotation is key to improving the performance of knowledge-based artificial intelligence systems.
Finding a new method to represent concepts and construct a conceptual system will greatly improve the performance of many intelligent systems.
Developmental psychology carefully observes the process of concept acquisition in humans at the behavioral level.
arXiv Detail & Related papers (2020-11-26T01:57:24Z)