Artificial Intelligence Software Structured to Simulate Human Working
Memory, Mental Imagery, and Mental Continuity
- URL: http://arxiv.org/abs/2204.05138v1
- Date: Tue, 29 Mar 2022 22:23:36 GMT
- Title: Artificial Intelligence Software Structured to Simulate Human Working
Memory, Mental Imagery, and Mental Continuity
- Authors: Jared Edward Reser
- Abstract summary: This article presents an artificial intelligence architecture intended to simulate the human working memory system.
It features several interconnected neural networks designed to emulate the specialized modules of the cerebral cortex.
As the content stored in working memory gradually evolves, successive states overlap and are continuous with one another.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This article presents an artificial intelligence (AI) architecture intended
to simulate the human working memory system as well as the manner in which it
is updated iteratively. It features several interconnected neural networks
designed to emulate the specialized modules of the cerebral cortex. These are
structured hierarchically and integrated into a global workspace. They are
capable of temporarily maintaining high-level patterns akin to the
psychological items maintained in working memory. This maintenance is made
possible by persistent neural activity in the form of two modalities: sustained
neural firing (resulting in a focus of attention) and synaptic potentiation
(resulting in a short-term store). This persistent activity is updated
iteratively resulting in incremental changes to the content of the working
memory system. As the content stored in working memory gradually evolves,
successive states overlap and are continuous with one another. The present
article will explore how this architecture can produce a gradual shift in the
distribution of coactive representations, ultimately leading to mental
continuity between processing states, and thus to human-like cognition.
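As a rough illustration of this iterative-updating scheme, the sketch below (plain Python with invented names, not code from the paper) keeps items in two stores: a focus of attention that is actively refreshed, and a short-term store whose potentiation decays across iterations. Because only part of the coactive set is replaced at each step, successive states overlap.

```python
class WorkingMemoryStore:
    """Toy global-workspace store holding high-level items.

    Two persistence modalities (names are illustrative):
      - focus: items maintained by sustained firing (the focus of attention)
      - potentiated: items held by decaying synaptic potentiation (short-term store)
    """

    def __init__(self, decay=0.6, drop_threshold=0.2):
        self.focus = {}        # item -> activation, actively refreshed
        self.potentiated = {}  # item -> residual strength, decays each iteration
        self.decay = decay
        self.drop_threshold = drop_threshold

    def step(self, new_items):
        """One iterative update of working memory content."""
        # Items leaving the focus of attention remain briefly potentiated.
        for item, strength in self.focus.items():
            self.potentiated[item] = max(self.potentiated.get(item, 0.0), strength)
        # Synaptic potentiation fades gradually rather than all at once.
        self.potentiated = {item: s * self.decay
                            for item, s in self.potentiated.items()
                            if s * self.decay > self.drop_threshold}
        self.focus = {item: 1.0 for item in new_items}

    def coactive(self):
        """Items currently represented in either modality."""
        return set(self.focus) | set(self.potentiated)


wm = WorkingMemoryStore()
previous = set()
for new_items in [{"A", "B"}, {"B", "C"}, {"C", "D"}, {"D", "E"}]:
    wm.step(new_items)
    current = wm.coactive()
    print(sorted(current), "overlap with previous state:", sorted(current & previous))
    previous = current
```

Each printed state shares members with the one before it, which is the overlap-and-continuity property the abstract describes, realized here only at the level of a symbolic toy rather than neural networks.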
Related papers
- Hierarchical Working Memory and a New Magic Number [1.024113475677323]
We propose a recurrent neural network model for chunking within the framework of the synaptic theory of working memory.
Our work provides a novel conceptual and analytical framework for understanding the on-the-fly organization of information in the brain that is crucial for cognition.
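A toy illustration of the chunking idea referenced above (hypothetical groupings and capacity, not the paper's recurrent-network model): recoding familiar item sequences as single chunks lets a capacity-limited store retain more raw items.

```python
from collections import deque

KNOWN_CHUNKS = {("1", "9", "8", "4"), ("2", "0", "2", "4")}  # hypothetical learned groupings

def chunk(items, chunk_size=4):
    """Greedily recode familiar runs of items into single chunks."""
    out, i = [], 0
    while i < len(items):
        window = tuple(items[i:i + chunk_size])
        if window in KNOWN_CHUNKS:
            out.append(window)        # one slot for the whole group
            i += chunk_size
        else:
            out.append((items[i],))   # one slot per item
            i += 1
    return out

def recall(items, capacity=4):
    """Keep only the most recent `capacity` slots, then unpack the chunks."""
    store = deque(chunk(items), maxlen=capacity)
    return [x for slot in store for x in slot]

digits = list("19842024")
print(recall(digits))                 # all eight digits survive in four slots
print(recall(list("31415926")))       # unfamiliar digits: only the last four survive
```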
arXiv Detail & Related papers (2024-08-14T16:03:47Z)
- Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AIGC methods, such as score-based diffusion, remain limited in speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
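The sketch below is a plain NumPy simulation of a probability-flow ODE sampler for a toy score-based diffusion problem with a known analytic score; it illustrates the kind of differential equation such a solver integrates, not the analog resistive-memory implementation the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data distribution, so the score of the perturbed density has a
# closed form; a real system would use a trained score network here.
DATA_MEAN, DATA_VAR = 2.0, 0.25
SIGMA_MIN, SIGMA_MAX = 0.01, 10.0

def sigma(t):
    """Noise scale of a variance-exploding diffusion at time t in [0, 1]."""
    return SIGMA_MIN * (SIGMA_MAX / SIGMA_MIN) ** t

def score(x, t):
    """Analytic score of p_t = N(DATA_MEAN, DATA_VAR + sigma(t)^2)."""
    return -(x - DATA_MEAN) / (DATA_VAR + sigma(t) ** 2)

def probability_flow_sample(n_samples=5, n_steps=500):
    """Euler integration of dx/dt = -0.5 * d[sigma^2]/dt * score(x, t),
    run backwards from t = 1 (pure noise) to t = 0 (data)."""
    x = rng.normal(0.0, SIGMA_MAX, size=n_samples)     # draws from the noise prior
    ts = np.linspace(1.0, 0.0, n_steps + 1)
    for t_now, t_next in zip(ts[:-1], ts[1:]):
        dt = t_next - t_now                             # negative: reverse time
        dsigma2_dt = 2.0 * sigma(t_now) ** 2 * np.log(SIGMA_MAX / SIGMA_MIN)
        x = x - 0.5 * dsigma2_dt * score(x, t_now) * dt
    return x

print(probability_flow_sample())  # values should cluster around DATA_MEAN = 2.0
```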
arXiv Detail & Related papers (2024-04-08T16:34:35Z)
- CogNGen: Constructing the Kernel of a Hyperdimensional Predictive Processing Cognitive Architecture [79.07468367923619]
We present a new cognitive architecture that combines two neurobiologically plausible, computational models.
We aim to develop a cognitive architecture that has the power of modern machine learning techniques.
arXiv Detail & Related papers (2022-03-31T04:44:28Z)
- A Cognitive Architecture for Machine Consciousness and Artificial Superintelligence: Thought Is Structured by the Iterative Updating of Working Memory [0.0]
This article provides an analytical framework for how to simulate human-like thought processes within a computer.
It describes how attention and memory should be structured, updated, and utilized to search for associative additions to the stream of thought.
arXiv Detail & Related papers (2022-03-29T22:28:30Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
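The remedy named in the title, internally induced generative replay, can be sketched as follows (a toy self-organizing map and invented parameters, not the paper's architecture): pseudo-samples drawn from the map's own prototypes are interleaved with new-task data so earlier knowledge keeps being rehearsed.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinySOM:
    """Very small self-organizing map, used only to illustrate replay."""

    def __init__(self, n_units=16, dim=2, lr=0.2, radius=1.0):
        self.weights = rng.normal(0.0, 0.5, size=(n_units, dim))
        self.lr, self.radius = lr, radius
        self.grid = np.arange(n_units, dtype=float)     # 1-D map topology

    def fit_step(self, x):
        """Standard SOM update: pull the best-matching unit and its neighbors toward x."""
        bmu = np.argmin(np.linalg.norm(self.weights - x, axis=1))
        neighborhood = np.exp(-((self.grid - bmu) ** 2) / (2 * self.radius ** 2))
        self.weights += self.lr * neighborhood[:, None] * (x - self.weights)

    def generate(self, n):
        """Internally induced pseudo-samples: noisy points around learned prototypes."""
        units = rng.integers(0, len(self.weights), size=n)
        return self.weights[units] + rng.normal(0.0, 0.05, size=(n, self.weights.shape[1]))

def train_with_replay(som, new_task_data, replay_ratio=1.0):
    """Mix generated samples of old knowledge with new-task data before training."""
    replay = som.generate(int(len(new_task_data) * replay_ratio))
    mixed = np.concatenate([new_task_data, replay])
    rng.shuffle(mixed)
    for x in mixed:
        som.fit_step(x)

task1 = rng.normal([0.0, 0.0], 0.1, size=(200, 2))      # first task: cluster at (0, 0)
task2 = rng.normal([3.0, 3.0], 0.1, size=(200, 2))      # second task: cluster at (3, 3)

som = TinySOM()
for x in task1:
    som.fit_step(x)
train_with_replay(som, task2)

near_task1 = int((np.linalg.norm(som.weights - [0.0, 0.0], axis=1) < 0.5).sum())
near_task2 = int((np.linalg.norm(som.weights - [3.0, 3.0], axis=1) < 0.5).sum())
print("units still near task 1:", near_task1, "| units near task 2:", near_task2)
```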
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Learning Neuro-Symbolic Relational Transition Models for Bilevel Planning [61.37385221479233]
In this work, we take a step toward bridging the gap between model-based reinforcement learning and integrated symbolic-geometric robotic planning.
NSRTs have both symbolic and neural components, enabling a bilevel planning scheme where symbolic AI planning in an outer loop guides continuous planning with neural models in an inner loop.
NSRTs can be learned after only tens or hundreds of training episodes, and then used for fast planning in new tasks that require up to 60 actions to reach the goal.
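A schematic of the bilevel scheme described above, with invented operators and a stand-in sampler rather than learned NSRTs: a symbolic outer loop proposes a plan skeleton, an inner loop tries to ground each step with continuous parameters, and failures fall back to the outer loop.

```python
import random

random.seed(0)

# Invented toy domain: two symbolic operators over a blocks-style state.
SYMBOLIC_OPERATORS = {
    "Pick":  {"pre": {"HandEmpty"}, "add": {"Holding"}, "delete": {"HandEmpty"}},
    "Place": {"pre": {"Holding"}, "add": {"OnTable", "HandEmpty"}, "delete": {"Holding"}},
}

def symbolic_plan(state, goal, max_len=10):
    """Outer loop: greedy forward search over symbolic operators (toy planner)."""
    plan, state = [], set(state)
    for _ in range(max_len):
        if goal <= state:
            return plan
        for name, op in SYMBOLIC_OPERATORS.items():
            if op["pre"] <= state:
                state = (state - op["delete"]) | op["add"]
                plan.append(name)
                break
    return plan if goal <= state else None

def refine(action):
    """Inner loop stand-in: a neural sampler would propose grasps or poses here;
    we fake it with random draws that sometimes fail."""
    params = [random.uniform(-1.0, 1.0) for _ in range(3)]
    return params if random.random() > 0.3 else None

def bilevel_plan(state, goal, max_attempts=5, samples_per_step=10):
    for _ in range(max_attempts):
        skeleton = symbolic_plan(state, goal)
        if skeleton is None:
            return None
        refined = []
        for action in skeleton:
            params = next((p for p in (refine(action) for _ in range(samples_per_step)) if p), None)
            if params is None:
                break                   # refinement failed: fall back to the outer loop
            refined.append((action, params))
        else:
            return refined              # every abstract step was grounded
    return None

print(bilevel_plan({"HandEmpty"}, {"OnTable"}))
```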
arXiv Detail & Related papers (2021-05-28T19:37:18Z)
- Towards a Predictive Processing Implementation of the Common Model of Cognition [79.63867412771461]
We describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory.
The proposed system creates the groundwork for developing agents that learn continually from diverse tasks as well as model human performance at larger scales.
arXiv Detail & Related papers (2021-05-15T22:55:23Z)
- Self-Constructing Neural Networks Through Random Mutation [0.0]
This paper presents a simple method for learning neural architecture through random mutation.
It demonstrates 1) neural architecture may be learned during the agent's lifetime, 2) neural architecture may be constructed over a single lifetime without any initial connections or neurons, and 3) architectural modifications enable rapid adaptation to dynamic and novel task scenarios.
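A minimal sketch of the general idea (hill-climbing over structure with invented mutation operators; not the paper's exact procedure): start with no hidden units and keep a random structural mutation only when it reduces task error.

```python
import copy
import math
import random

random.seed(1)

def forward(net, x):
    """net: list of (input_weight, bias, output_weight) tanh units; may start empty."""
    return sum(math.tanh(w * x + b) * v for (w, b, v) in net)

def error(net, data):
    return sum((forward(net, x) - y) ** 2 for x, y in data) / len(data)

def mutate(net):
    """Randomly add a new unit, or perturb an existing unit's parameters."""
    net = copy.deepcopy(net)
    if not net or random.random() < 0.3:
        net.append((random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)))
    else:
        i = random.randrange(len(net))
        net[i] = tuple(p + random.gauss(0, 0.3) for p in net[i])
    return net

# Lifetime task: approximate sin(x) on a few points, starting from an empty network.
data = [(x / 10.0, math.sin(x / 10.0)) for x in range(-30, 31, 3)]
net, best = [], float("inf")
for _ in range(3000):
    candidate = mutate(net)
    err = error(candidate, data)
    if err < best:                      # keep only mutations that help
        net, best = candidate, err

print(f"{len(net)} hidden units grown, mean squared error {best:.4f}")
```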
arXiv Detail & Related papers (2021-03-29T15:27:38Z)
- Slow manifolds in recurrent networks encode working memory efficiently and robustly [0.0]
Working memory is a cognitive function involving the storage and manipulation of latent information over brief intervals of time.
We use a top-down modeling approach to examine network-level mechanisms of working memory.
arXiv Detail & Related papers (2021-01-08T18:47:02Z)
- A Neural Dynamic Model based on Activation Diffusion and a Micro-Explanation for Cognitive Operations [4.416484585765028]
The neural mechanism of memory is closely related to the problem of representation in artificial intelligence.
A computational model is proposed to simulate the network of neurons in the brain and how they process information.
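A toy sketch of activation diffusion over a small concept graph (illustrative nodes, weights, and decay; not the model proposed in the paper): at each step, active nodes pass decayed activation to their neighbors.

```python
NETWORK = {
    "dog":    {"animal": 0.8, "bark": 0.9, "cat": 0.4},
    "cat":    {"animal": 0.8, "dog": 0.4},
    "animal": {"dog": 0.5, "cat": 0.5},
    "bark":   {"dog": 0.9},
}

def spread(activation, steps=3, decay=0.5):
    """Each step, every active node passes decayed activation to its neighbors."""
    activation = dict(activation)
    for _ in range(steps):
        incoming = {}
        for node, a in activation.items():
            for neighbor, weight in NETWORK.get(node, {}).items():
                incoming[neighbor] = incoming.get(neighbor, 0.0) + a * weight * decay
        for node, a in incoming.items():
            activation[node] = min(1.0, activation.get(node, 0.0) + a)
    return activation

print(spread({"dog": 1.0}))  # related concepts become partially active
```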
arXiv Detail & Related papers (2020-11-27T01:34:08Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
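A schematic of the multi-scale idea in the last entry (invented module sizes and update rule, without the incremental training procedure itself): the hidden state is split into modules that update at different periods, and slower modules can be appended later to capture longer dependencies.

```python
import numpy as np

rng = np.random.default_rng(0)

class MultiScaleRNN:
    """Sketch of an RNN whose hidden state is split into modules with different time scales."""

    def __init__(self, input_dim, module_dim=8):
        self.input_dim, self.module_dim = input_dim, module_dim
        self.modules = []

    def add_module(self, period):
        """Append a module updated every `period` steps; an incremental training
        scheme would train this new module while keeping earlier ones fixed."""
        self.modules.append({
            "W_in": rng.normal(0.0, 0.3, (self.module_dim, self.input_dim)),
            "W_rec": rng.normal(0.0, 0.3, (self.module_dim, self.module_dim)),
            "h": np.zeros(self.module_dim),
            "period": period,
        })

    def step(self, x, t):
        for m in self.modules:
            if t % m["period"] == 0:    # slower modules integrate longer-range context
                m["h"] = np.tanh(m["W_in"] @ x + m["W_rec"] @ m["h"])
        return np.concatenate([m["h"] for m in self.modules])

rnn = MultiScaleRNN(input_dim=4)
rnn.add_module(period=1)                # fast module added first
rnn.add_module(period=4)                # slower module added later for longer dependencies
for t in range(8):
    state = rnn.step(rng.normal(size=4), t)
print("hidden state size:", state.shape[0])
```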