Analogical Concept Memory for Architectures Implementing the Common
Model of Cognition
- URL: http://arxiv.org/abs/2210.11731v1
- Date: Fri, 21 Oct 2022 04:39:07 GMT
- Title: Analogical Concept Memory for Architectures Implementing the Common
Model of Cognition
- Authors: Shiwali Mohan, Matthew Klenk
- Abstract summary: We propose a new analogical concept memory for Soar that augments its current system of declarative long-term memories.
We demonstrate that the analogical learning methods implemented in the proposed memory can quickly learn diverse types of novel concepts.
- Score: 1.9417302920173825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Architectures that implement the Common Model of Cognition - Soar, ACT-R, and
Sigma - have a prominent place in research on cognitive modeling as well as on
designing complex intelligent agents. In this paper, we explore how
computational models of analogical processing can be brought into these
architectures to enable concept acquisition from examples obtained
interactively. We propose a new analogical concept memory for Soar that
augments its current system of declarative long-term memories. We frame the
problem of concept learning as embedded within the larger context of
interactive task learning (ITL) and embodied language processing (ELP). We
demonstrate that the analogical learning methods implemented in the proposed
memory can quickly learn diverse types of novel concepts that are useful not
only for recognizing a concept in the environment but also for action
selection. Our approach has been instantiated in an implemented cognitive
system AILEEN and evaluated on a simulated robotic domain.
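The abstract describes learning concepts by analogical generalization over interactively obtained examples. As an illustrative sketch only (not the paper's actual AILEEN or Soar implementation), the following toy code shows the flavor of SAGE-style analogical generalization: relational facts shared across examples are retained with high probability, while incidental facts fade. All function and predicate names here are hypothetical.

```python
# Toy sketch of analogical generalization (hypothetical, not AILEEN's code):
# merge relational example descriptions into a concept whose facts carry
# probabilities reflecting how often they recur across examples.
from collections import Counter

def generalize(examples):
    """Merge structured examples into a probabilistic generalization.

    Each example is a set of relational facts, e.g. ("left-of", "a", "b").
    A fact's probability is the fraction of examples containing it.
    """
    counts = Counter()
    for facts in examples:
        counts.update(facts)
    n = len(examples)
    return {fact: count / n for fact, count in counts.items()}

def score(concept, facts):
    """Score a new scene against the concept: mean probability mass of
    concept facts that are present in the scene."""
    if not concept:
        return 0.0
    return sum(p for fact, p in concept.items() if fact in facts) / len(concept)

# Two positive examples of a hypothetical "left-of" concept.
ex1 = {("left-of", "obj1", "obj2"), ("color", "obj1", "red")}
ex2 = {("left-of", "obj1", "obj2"), ("color", "obj1", "blue")}
concept = generalize([ex1, ex2])
# The shared structural fact survives with probability 1.0,
# while the differing color facts drop to 0.5.
```

In this sketch, recognition reduces to scoring a scene against the learned concept; action selection could reuse the same scores to rank candidate actions, which is the dual use the abstract highlights.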
Related papers
- Discovering Conceptual Knowledge with Analytic Ontology Templates for Articulated Objects [42.9186628100765]
We aim to endow machine intelligence with an analogous capability by operating at the conceptual level.
The AOT-driven approach yields benefits from three key perspectives.
arXiv Detail & Related papers (2024-09-18T04:53:38Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned through unsupervised learning rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Conceptual Modeling and Artificial Intelligence: A Systematic Mapping Study [0.5156484100374059]
In conceptual modeling (CM), humans apply abstraction to represent excerpts of reality as a means of understanding, communication, and processing by machines.
Recently, a trend toward intertwining CM and AI emerged.
This systematic mapping study shows how this interdisciplinary research field is structured, which mutual benefits the intertwining yields, and what future research directions exist.
arXiv Detail & Related papers (2023-03-12T21:23:46Z)
- AIGenC: An AI generalisation model via creativity [1.933681537640272]
Inspired by cognitive theories of creativity, this paper introduces a computational model (AIGenC).
It lays down the necessary components to enable artificial agents to learn, use and generate transferable representations.
We discuss the model's capability to yield better out-of-distribution generalisation in artificial agents.
arXiv Detail & Related papers (2022-05-19T17:43:31Z)
- CogNGen: Constructing the Kernel of a Hyperdimensional Predictive Processing Cognitive Architecture [79.07468367923619]
We present a new cognitive architecture that combines two neurobiologically plausible, computational models.
We aim to develop a cognitive architecture that has the power of modern machine learning techniques.
arXiv Detail & Related papers (2022-03-31T04:44:28Z)
- Towards a Predictive Processing Implementation of the Common Model of Cognition [79.63867412771461]
We describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory.
The proposed system creates the groundwork for developing agents that learn continually from diverse tasks as well as model human performance at larger scales.
arXiv Detail & Related papers (2021-05-15T22:55:23Z)
- Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z)
- Characterizing an Analogical Concept Memory for Architectures Implementing the Common Model of Cognition [1.468003557277553]
We propose a new analogical concept memory for Soar that augments its current system of declarative long-term memories.
We demonstrate that the analogical learning methods implemented in the proposed memory can quickly learn diverse types of novel concepts.
arXiv Detail & Related papers (2020-06-02T21:54:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.