Shapes of Cognition for Computational Cognitive Modeling
- URL: http://arxiv.org/abs/2509.13288v1
- Date: Tue, 16 Sep 2025 17:39:58 GMT
- Title: Shapes of Cognition for Computational Cognitive Modeling
- Authors: Marjorie McShane, Sergei Nirenburg, Sanjay Oruganti, Jesse English,
- Abstract summary: Shapes of cognition is a new conceptual paradigm for the computational cognitive modeling of Language-Endowed Intelligent Agents. Shapes-based modeling involves particular objectives, hypotheses, modeling strategies, knowledge bases, and actual models of wide-ranging phenomena.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Shapes of cognition is a new conceptual paradigm for the computational cognitive modeling of Language-Endowed Intelligent Agents (LEIAs). Shapes are remembered constellations of sensory, linguistic, conceptual, episodic, and procedural knowledge that allow agents to cut through the complexity of real life the same way as people do: by expecting things to be typical, recognizing patterns, acting by habit, reasoning by analogy, satisficing, and generally minimizing cognitive load to the degree situations permit. Atypical outcomes are treated using shapes-based recovery methods, such as learning on the fly, asking a human partner for help, or seeking an actionable, even if imperfect, situational understanding. Although shapes is an umbrella term, it is not vague: shapes-based modeling involves particular objectives, hypotheses, modeling strategies, knowledge bases, and actual models of wide-ranging phenomena, all implemented within a particular cognitive architecture. Such specificity is needed both to vet our hypotheses and to achieve our practical aims of building useful agent systems that are explainable, extensible, and worthy of our trust, even in critical domains. However, although the LEIA example of shapes-based modeling is specific, the principles can be applied more broadly, giving new life to knowledge-based and hybrid AI.
Related papers
- A Geometric Theory of Cognition [0.0]
We present a unified mathematical framework in which diverse cognitive processes emerge from a single geometric principle. We suggest guiding principles for the development of more general and human-like artificial intelligence systems.
arXiv Detail & Related papers (2025-12-13T07:39:53Z) - Unveiling the Learning Mind of Language Models: A Cognitive Framework and Empirical Study [50.065744358362345]
Large language models (LLMs) have shown impressive capabilities across tasks such as mathematics, coding, and reasoning. Yet their learning ability, which is crucial for adapting to dynamic environments and acquiring new knowledge, remains underexplored.
arXiv Detail & Related papers (2025-06-16T13:24:50Z) - A New Approach for Knowledge Generation Using Active Inference [0.0]
There are various models of how knowledge is generated in the human brain, including the semantic networks model. In this study, based on the free energy principle of the brain, the researchers propose a model for generating three types of knowledge: declarative, procedural, and conditional.
arXiv Detail & Related papers (2025-01-25T07:06:33Z) - Analogical Concept Memory for Architectures Implementing the Common Model of Cognition [1.9417302920173825]
We propose a new analogical concept memory for Soar that augments its current system of declarative long-term memories.
We demonstrate that the analogical learning methods implemented in the proposed memory can quickly learn diverse types of novel concepts.
arXiv Detail & Related papers (2022-10-21T04:39:07Z) - Kernel Based Cognitive Architecture for Autonomous Agents [91.3755431537592]
This paper considers an evolutionary approach to creating a cognitive functionality.
We consider a cognitive architecture which ensures the evolution of the agent on the basis of Symbol Emergence Problem solution.
arXiv Detail & Related papers (2022-07-02T12:41:32Z) - WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z) - On the Opportunities and Risks of Foundation Models [256.61956234436553]
We call these models foundation models to underscore their critically central yet incomplete character.
This report provides a thorough account of the opportunities and risks of foundation models.
To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration.
arXiv Detail & Related papers (2021-08-16T17:50:08Z) - Towards a Predictive Processing Implementation of the Common Model of Cognition [79.63867412771461]
We describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory.
The proposed system creates the groundwork for developing agents that learn continually from diverse tasks as well as model human performance at larger scales.
arXiv Detail & Related papers (2021-05-15T22:55:23Z) - Controlling Synthetic Characters in Simulations: A Case for Cognitive Architectures and Sigma [0.0]
Simulations require computational models of intelligence that generate realistic and credible behavior for the participating synthetic characters.
Sigma is a cognitive architecture and system that strives to combine what has been learned from four decades of independent work on symbolic cognitive architectures, probabilistic graphical models, and more recently neural models, under its graphical architecture hypothesis.
In this paper, we will introduce Sigma along with its diverse capabilities and then use three distinct proof-of-concept Sigma models to highlight combinations of these capabilities.
arXiv Detail & Related papers (2021-01-06T19:07:36Z) - Compositional Generalization by Learning Analytical Expressions [87.15737632096378]
A memory-augmented neural model is connected with analytical expressions to achieve compositional generalization.
Experiments on the well-known SCAN benchmark demonstrate that our model achieves strong compositional generalization.
arXiv Detail & Related papers (2020-06-18T15:50:57Z) - Characterizing an Analogical Concept Memory for Architectures Implementing the Common Model of Cognition [1.468003557277553]
We propose a new analogical concept memory for Soar that augments its current system of declarative long-term memories.
We demonstrate that the analogical learning methods implemented in the proposed memory can quickly learn diverse types of novel concepts.
arXiv Detail & Related papers (2020-06-02T21:54:03Z) - Knowledge Patterns [19.57676317580847]
This paper describes a new technique, called "knowledge patterns", for helping construct axiom-rich formal ontologies.
Knowledge patterns provide an important insight into the structure of a formal ontology.
We describe the technique and an application built using them, and then critique their strengths and weaknesses.
arXiv Detail & Related papers (2020-05-08T22:33:30Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.