A Cognitive Architecture for Machine Consciousness and Artificial
Superintelligence: Thought Is Structured by the Iterative Updating of Working
Memory
- URL: http://arxiv.org/abs/2203.17255v6
- Date: Thu, 14 Dec 2023 04:11:47 GMT
- Title: A Cognitive Architecture for Machine Consciousness and Artificial
Superintelligence: Thought Is Structured by the Iterative Updating of Working
Memory
- Authors: Jared Edward Reser
- Abstract summary: This article provides an analytical framework for how to simulate human-like thought processes within a computer.
It describes how attention and memory should be structured, updated, and utilized to search for associative additions to the stream of thought.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This article provides an analytical framework for how to simulate human-like
thought processes within a computer. It describes how attention and memory
should be structured, updated, and utilized to search for associative additions
to the stream of thought. The focus is on replicating the dynamics of the
mammalian working memory system, which features two forms of persistent
activity: sustained firing (preserving information on the order of seconds) and
synaptic potentiation (preserving information from minutes to hours). The
article uses a series of over 40 original figures to systematically demonstrate
how the iterative updating of these working memory stores provides functional
structure to behavior, cognition, and consciousness.
In an AI implementation, these two memory stores should be updated
continuously and in an iterative fashion, meaning each state should preserve a
proportion of the coactive representations from the state before it. Thus, the
set of concepts in working memory will evolve gradually and incrementally over
time. This makes each state a revised iteration of the preceding state and
causes successive states to overlap and blend with respect to the information
they contain. Transitions between states occur as persistent activity spreads
activation energy throughout the hierarchical network, searching long-term
memory for the most appropriate representation to add to the global
workspace. The result is a chain of associatively linked intermediate states
capable of advancing toward a solution or goal. Iterative updating is
conceptualized here as an information processing strategy, a model of working
memory, a theory of consciousness, and an algorithm for designing and
programming artificial general intelligence.
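Below is a minimal, illustrative sketch (not taken from the paper) of how the two persistent-activity stores and the iterative-updating loop described above might be coded. All names, constants (decay rates, capacity, vocabulary size), and the choice of a symmetric association matrix as long-term memory are assumptions introduced here for illustration only.

```python
import numpy as np

# Hypothetical constants; the paper does not commit to specific values.
N_CONCEPTS = 1000           # size of the long-term memory vocabulary
WM_CAPACITY = 7             # items held simultaneously by sustained firing
FIRING_DECAY = 0.5          # fast decay: sustained firing lasts on the order of seconds
POTENTIATION_DECAY = 0.98   # slow decay: synaptic potentiation lasts minutes to hours
POTENTIATION_WEIGHT = 0.3   # how strongly the slow store biases the search


class IterativeWorkingMemory:
    """Toy model of two persistent-activity stores updated iteratively.

    `firing` stands in for sustained firing (short-lived activation) and
    `potentiation` for synaptic potentiation (a slower, longer-lasting trace).
    Long-term memory is a symmetric association matrix; spreading activation
    through it selects the next representation to add to the global workspace.
    """

    def __init__(self, associations: np.ndarray):
        self.associations = associations           # long-term memory weights
        self.firing = np.zeros(N_CONCEPTS)         # fast store
        self.potentiation = np.zeros(N_CONCEPTS)   # slow store

    def step(self) -> int:
        """Advance working memory by one state; return the concept added."""
        # 1. Each store decays at its characteristic time scale, so a
        #    proportion of the previous state's coactive items survives.
        self.firing *= FIRING_DECAY
        self.potentiation *= POTENTIATION_DECAY

        # 2. Spreading activation: persistent activity propagates through
        #    the association network to score candidate additions.
        drive = self.firing + POTENTIATION_WEIGHT * self.potentiation
        candidate_scores = self.associations @ drive
        # Exclude items already active so each step adds something new.
        candidate_scores[self.firing > 0] = -np.inf

        # 3. The most appropriate (highest-scoring) representation joins
        #    the global workspace and leaves a slower trace as well.
        winner = int(np.argmax(candidate_scores))
        self.firing[winner] = 1.0
        self.potentiation[winner] = max(self.potentiation[winner], 1.0)

        # 4. Enforce limited capacity: keep only the strongest coactive
        #    items, making each state a revised iteration of the last.
        if np.count_nonzero(self.firing) > WM_CAPACITY:
            cutoff = np.sort(self.firing)[-WM_CAPACITY]
            self.firing[self.firing < cutoff] = 0.0
        return winner


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    ltm = rng.random((N_CONCEPTS, N_CONCEPTS))
    ltm = (ltm + ltm.T) / 2          # symmetric associative links
    np.fill_diagonal(ltm, 0.0)       # no self-association

    wm = IterativeWorkingMemory(ltm)
    wm.firing[[3, 17, 42]] = 1.0     # seed an initial set of coactive concepts
    chain = [wm.step() for _ in range(10)]
    print("stream of thought:", chain)
```

Because decayed but surviving items remain coactive with each newly added representation, successive states in this toy model overlap and blend, and the printed chain is one possible associatively linked sequence through the assumed association matrix.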
Related papers
- Hierarchical Working Memory and a New Magic Number [1.024113475677323]
We propose a recurrent neural network model for chunking within the framework of the synaptic theory of working memory.
Our work provides a novel conceptual and analytical framework for understanding the on-the-fly organization of information in the brain that is crucial for cognition.
arXiv Detail & Related papers (2024-08-14T16:03:47Z)
- Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AIGC methods, such as score-based diffusion, are still limited in speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
arXiv Detail & Related papers (2024-04-08T16:34:35Z)
- A Framework for Inference Inspired by Human Memory Mechanisms [9.408704431898279]
We propose a PMI framework that consists of perception, memory and inference components.
The memory module comprises working and long-term memory, with the latter endowed with a higher-order structure to retain extensive and complex relational knowledge and experience.
We apply our PMI to improve prevailing Transformers and CNN models on question-answering tasks like bAbI-20k and Sort-of-CLEVR datasets.
arXiv Detail & Related papers (2023-10-01T08:12:55Z)
- AIGenC: An AI generalisation model via creativity [1.933681537640272]
Inspired by cognitive theories of creativity, this paper introduces a computational model (AIGenC)
It lays down the necessary components to enable artificial agents to learn, use and generate transferable representations.
We discuss the model's capability to yield better out-of-distribution generalisation in artificial agents.
arXiv Detail & Related papers (2022-05-19T17:43:31Z)
- Pin the Memory: Learning to Generalize Semantic Segmentation [68.367763672095]
We present a novel memory-guided domain generalization method for semantic segmentation based on meta-learning framework.
Our method abstracts the conceptual knowledge of semantic classes into categorical memory which is constant beyond the domains.
arXiv Detail & Related papers (2022-04-07T17:34:01Z)
- Artificial Intelligence Software Structured to Simulate Human Working Memory, Mental Imagery, and Mental Continuity [0.0]
This article presents an artificial intelligence architecture intended to simulate the human working memory system.
It features several interconnected neural networks designed to emulate the specialized modules of the cerebral cortex.
As the content stored in working memory gradually evolves, successive states overlap and are continuous with one another.
arXiv Detail & Related papers (2022-03-29T22:23:36Z)
- Temporal Memory Relation Network for Workflow Recognition from Surgical Video [53.20825496640025]
We propose a novel end-to-end temporal memory relation network (TMNet) for relating long-range and multi-scale temporal patterns.
We have extensively validated our approach on two benchmark surgical video datasets.
arXiv Detail & Related papers (2021-03-30T13:20:26Z)
- Kanerva++: extending The Kanerva Machine with differentiable, locally block allocated latent memory [75.65949969000596]
Episodic and semantic memory are critical components of the human memory model.
We develop a new principled Bayesian memory allocation scheme that bridges the gap between episodic and semantic memory.
We demonstrate that this allocation scheme improves performance in memory conditional image generation.
arXiv Detail & Related papers (2021-02-20T18:40:40Z)
- Slow manifolds in recurrent networks encode working memory efficiently and robustly [0.0]
Working memory is a cognitive function involving the storage and manipulation of latent information over brief intervals of time.
We use a top-down modeling approach to examine network-level mechanisms of working memory.
arXiv Detail & Related papers (2021-01-08T18:47:02Z)
- Learning to Learn Variational Semantic Memory [132.39737669936125]
We introduce variational semantic memory into meta-learning to acquire long-term knowledge for few-shot learning.
The semantic memory is grown from scratch and gradually consolidated by absorbing information from tasks it experiences.
We formulate memory recall as the variational inference of a latent memory variable from addressed contents.
arXiv Detail & Related papers (2020-10-20T15:05:26Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
This list is automatically generated from the titles and abstracts of the papers indexed on this site.