AIGenC: An AI generalisation model via creativity
- URL: http://arxiv.org/abs/2205.09738v5
- Date: Wed, 21 Jun 2023 00:58:12 GMT
- Title: AIGenC: An AI generalisation model via creativity
- Authors: Corina Catarau-Cotutiu, Esther Mondragon, Eduardo Alonso
- Abstract summary: Inspired by cognitive theories of creativity, this paper introduces a computational model (AIGenC) that lays down the necessary components to enable artificial agents to learn, use and generate transferable representations.
We discuss the model's capability to yield better out-of-distribution generalisation in artificial agents.
- Score: 1.933681537640272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by cognitive theories of creativity, this paper introduces a
computational model (AIGenC) that lays down the necessary components to enable
artificial agents to learn, use and generate transferable representations.
Unlike machine representation learning, which relies exclusively on raw sensory
data, biological representations incorporate relational and associative
information that embeds rich and structured concept spaces. The AIGenC model
poses a hierarchical graph architecture with various levels and types of
representations procured by different components. The first component, Concept
Processing, extracts objects and affordances from sensory input and encodes
them into a concept space. The resulting representations are stored in a dual
memory system and enriched with goal-directed and temporal information acquired
through reinforcement learning, creating a higher level of abstraction. Two
additional components work in parallel to detect and recover relevant concepts
and create new ones, respectively, in a process akin to cognitive Reflective
Reasoning and Blending. The Reflective Reasoning unit detects and recovers from
memory concepts relevant to the task by means of a matching process that
calculates a similarity value between the current state and memory graph
structures. Once the matching interaction ends, rewards and temporal
information are added to the graph, building further abstractions. If the
reflective reasoning processing fails to offer a suitable solution, a blending
operation comes into play, creating new concepts by combining past
information. We discuss the model's capability to yield better
out-of-distribution generalisation in artificial agents, thus advancing toward
Artificial General Intelligence.
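The retrieve-then-blend loop described in the abstract can be sketched in code. This is an illustrative sketch, not the authors' implementation: concepts are reduced to flat feature sets, graph matching is replaced by a simple Jaccard overlap, and all names (`Concept`, `reflective_reasoning`, `blend`, `solve`) and the threshold value are assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class Concept:
    name: str
    features: frozenset      # relational/associative attributes of the concept
    reward: float = 0.0      # goal-directed info added after a matching interaction

def similarity(a: frozenset, b: frozenset) -> float:
    """Jaccard overlap between feature sets (a stand-in for graph matching)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def reflective_reasoning(state: frozenset, memory: list, threshold: float):
    """Recover from memory the concept most similar to the current state,
    or None if nothing passes the threshold."""
    best = max(memory, key=lambda c: similarity(state, c.features), default=None)
    if best is not None and similarity(state, best.features) >= threshold:
        return best
    return None

def blend(a: Concept, b: Concept) -> Concept:
    """Create a new concept by combining past information from two concepts."""
    return Concept(name=f"{a.name}+{b.name}", features=a.features | b.features)

def solve(state: frozenset, memory: list, threshold: float = 0.5) -> Concept:
    """Try reflective reasoning first; fall back to blending when it fails.
    Assumes memory holds at least two concepts when blending is needed."""
    match = reflective_reasoning(state, memory, threshold)
    if match is not None:
        return match
    # Blending: combine the two concepts closest to the current state
    ranked = sorted(memory, key=lambda c: -similarity(state, c.features))
    new = blend(ranked[0], ranked[1])
    memory.append(new)       # the new concept becomes available for future tasks
    return new
```

With a strict threshold, a state that partially overlaps two stored concepts triggers a blend whose feature set covers the state; on a second pass, the blended concept is retrieved directly, mirroring the abstract's point that blends are stored and reused.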
Related papers
- Augmented Commonsense Knowledge for Remote Object Grounding [67.30864498454805]
We propose an augmented commonsense knowledge model (ACK) to leverage commonsense information as an atemporal knowledge graph for improving agent navigation.
ACK consists of knowledge graph-aware cross-modal and concept aggregation modules to enhance visual representation and visual-textual data alignment.
We add a new pipeline for the commonsense-based decision-making process which leads to more accurate local action prediction.
arXiv Detail & Related papers (2024-06-03T12:12:33Z)
- Discrete, compositional, and symbolic representations through attractor dynamics
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, acquired through unsupervised learning rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates both symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- A Framework for Inference Inspired by Human Memory Mechanisms [9.408704431898279]
We propose a PMI framework that consists of perception, memory and inference components.
The memory module comprises working and long-term memory, with the latter endowed with a higher-order structure to retain extensive and complex relational knowledge and experience.
We apply our PMI framework to improve prevailing Transformer and CNN models on question-answering tasks such as the bAbI-20k and Sort-of-CLEVR datasets.
arXiv Detail & Related papers (2023-10-01T08:12:55Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Analogical Concept Memory for Architectures Implementing the Common Model of Cognition [1.9417302920173825]
We propose a new analogical concept memory for Soar that augments its current system of declarative long-term memories.
We demonstrate that the analogical learning methods implemented in the proposed memory can quickly learn diverse types of novel concepts.
arXiv Detail & Related papers (2022-10-21T04:39:07Z)
- Acquiring and Modelling Abstract Commonsense Knowledge via Conceptualization [49.00409552570441]
We study the role of conceptualization in commonsense reasoning, and formulate a framework to replicate human conceptual induction.
We apply the framework to ATOMIC, a large-scale human-annotated CKG, aided by the taxonomy Probase.
arXiv Detail & Related papers (2022-06-03T12:24:49Z)
- A Cognitive Architecture for Machine Consciousness and Artificial Superintelligence: Thought Is Structured by the Iterative Updating of Working Memory [0.0]
This article provides an analytical framework for how to simulate human-like thought processes within a computer.
It describes how attention and memory should be structured, updated, and utilized to search for associative additions to the stream of thought.
arXiv Detail & Related papers (2022-03-29T22:28:30Z)
- Towards a Predictive Processing Implementation of the Common Model of Cognition [79.63867412771461]
We describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory.
The proposed system creates the groundwork for developing agents that learn continually from diverse tasks as well as model human performance at larger scales.
arXiv Detail & Related papers (2021-05-15T22:55:23Z)
- Characterizing an Analogical Concept Memory for Architectures Implementing the Common Model of Cognition [1.468003557277553]
We propose a new analogical concept memory for Soar that augments its current system of declarative long-term memories.
We demonstrate that the analogical learning methods implemented in the proposed memory can quickly learn diverse types of novel concepts.
arXiv Detail & Related papers (2020-06-02T21:54:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.