From Manifestations to Cognitive Architectures: a Scalable Framework
- URL: http://arxiv.org/abs/2406.09823v2
- Date: Mon, 30 Sep 2024 09:05:38 GMT
- Title: From Manifestations to Cognitive Architectures: a Scalable Framework
- Authors: Alfredo Ibias, Guillem Ramirez-Miranda, Enric Guinovart, Eduard Alarcon
- Abstract summary: We propose a novel way to interpret reality as an information source, which is then translated into a computational framework.
This framework can build elements of classical cognitive architectures, such as Long Term Memory and Working Memory.
- Score: 2.6563873893593826
- License:
- Abstract: The Artificial Intelligence field is flooded with optimisation methods. In this paper, we shift the focus to developing modelling methods, with the aim of getting closer to Artificial General Intelligence. To do so, we propose a novel way to interpret reality as an information source, which is then translated into a computational framework able to capture and represent such information. This framework can build elements of classical cognitive architectures, such as Long Term Memory and Working Memory, starting from a simple primitive that only processes Spatial Distributed Representations. Moreover, it achieves this level of verticality in a seamless, scalable, hierarchical way.
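As a minimal sketch of the kind of architecture the abstract describes, the Python below stacks one simple primitive into a hierarchy, each level consuming the sparse spatial pattern emitted by the level beneath it. The `SDRPrimitive` class, the k-winners-take-all rule, and all sizes are illustrative assumptions, not the paper's actual primitive.

```python
import numpy as np

class SDRPrimitive:
    """Hypothetical primitive: maps a binary, spatially distributed input
    pattern to a sparse output code via k-winners-take-all (an assumption,
    not the paper's mechanism)."""

    def __init__(self, input_size: int, output_size: int,
                 sparsity: float = 0.05, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.weights = rng.random((output_size, input_size))
        self.k = max(1, int(output_size * sparsity))  # active output bits

    def process(self, pattern: np.ndarray) -> np.ndarray:
        scores = self.weights @ pattern          # overlap with each output unit
        out = np.zeros(self.weights.shape[0])
        out[np.argsort(scores)[-self.k:]] = 1.0  # keep the k strongest units
        return out

# Scalability comes from stacking the same primitive: each level's sparse
# code is simply the next level's spatial input pattern.
levels = [SDRPrimitive(1024, 512), SDRPrimitive(512, 256), SDRPrimitive(256, 128)]
x = (np.random.default_rng(1).random(1024) < 0.1).astype(float)
for level in levels:
    x = level.process(x)
print(f"top-level code: {int(x.sum())} active bits out of {x.size}")
```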
Related papers
- Adaptive Large Language Models By Layerwise Attention Shortcuts [46.76681147411957]
The proposed setup allows the final layer to attend to all of the intermediate layers as it deems fit, through the attention mechanism.
We showcase results on four datasets spanning acoustic tokens, natural language, and symbolic music, and achieve superior performance for a GPT-like architecture.
arXiv Detail & Related papers (2024-09-17T03:46:01Z) - Knowledge-Aware Parsimony Learning: A Perspective from Relational Graphs [47.6830995661091]
- Knowledge-Aware Parsimony Learning: A Perspective from Relational Graphs [47.6830995661091]
We develop next-generation models in a parsimonious manner, achieving greater potential with simpler models.
The key is to drive models using domain-specific knowledge, such as symbols, logic, and formulas, instead of relying on the scaling law.
This approach allows us to build a framework that uses this knowledge as "building blocks" to achieve parsimony in model design, training, and interpretation.
arXiv Detail & Related papers (2024-06-29T15:52:37Z) - A Recursive Bateson-Inspired Model for the Generation of Semantic Formal
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z) - Analogical Concept Memory for Architectures Implementing the Common
- Analogical Concept Memory for Architectures Implementing the Common Model of Cognition [1.9417302920173825]
We propose a new analogical concept memory for Soar that augments its current system of declarative long-term memories.
We demonstrate that the analogical learning methods implemented in the proposed memory can quickly learn diverse types of novel concepts.
arXiv Detail & Related papers (2022-10-21T04:39:07Z) - WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained on large-scale multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z) - Towards a Predictive Processing Implementation of the Common Model of
- Towards a Predictive Processing Implementation of the Common Model of Cognition [79.63867412771461]
We describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory.
The proposed system creates the groundwork for developing agents that learn continually from diverse tasks as well as model human performance at larger scales.
arXiv Detail & Related papers (2021-05-15T22:55:23Z) - Characterizing an Analogical Concept Memory for Architectures
- Characterizing an Analogical Concept Memory for Architectures Implementing the Common Model of Cognition [1.468003557277553]
We propose a new analogical concept memory for Soar that augments its current system of declarative long-term memories.
We demonstrate that the analogical learning methods implemented in the proposed memory can quickly learn diverse types of novel concepts.
arXiv Detail & Related papers (2020-06-02T21:54:03Z) - Neural Entity Linking: A Survey of Models Based on Deep Learning [82.43751915717225]
- Neural Entity Linking: A Survey of Models Based on Deep Learning [82.43751915717225]
This survey presents a comprehensive description of recent neural entity linking (EL) systems developed since 2015.
Its goal is to systematise the design features of neural entity linking systems and compare their performance with that of classic methods on common benchmarks.
The survey touches on applications of entity linking, focusing on the recently emerged use-case of enhancing deep pre-trained masked language models.
arXiv Detail & Related papers (2020-05-31T18:02:26Z) - New Ideas for Brain Modelling 6 [0.0]
- New Ideas for Brain Modelling 6 [0.0]
This paper describes implementation details for a 3-level cognitive model.
The whole architecture is now modular, with different levels using different types of information.
The top-level cognitive layer has been re-designed to model the Cognitive Process Language (CPL) of an earlier paper.
arXiv Detail & Related papers (2020-05-11T14:28:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.