MARTI-4: new model of human brain, considering neocortex and basal
ganglia -- learns to play Atari game by reinforcement learning on a single
CPU
- URL: http://arxiv.org/abs/2209.02387v1
- Date: Thu, 18 Aug 2022 20:23:49 GMT
- Title: MARTI-4: new model of human brain, considering neocortex and basal
ganglia -- learns to play Atari game by reinforcement learning on a single
CPU
- Authors: Igor Pivovarov and Sergey Shumsky
- Abstract summary: We present MARTI, a new model of the human brain that considers the neocortex and basal ganglia.
We introduce a novel surprise feeling mechanism that significantly improves the reinforcement learning process through inner rewards.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Deep Control, a new ML architecture of cortico-striatal brain
circuits that uses a whole cortical column, rather than a single neuron, as its
structural element. Based on this architecture, we present MARTI, a new model of
the human brain that considers the neocortex and basal ganglia. The model is
designed to implement expedient behavior and is capable of learning and
achieving goals in unknown environments. We introduce a novel surprise feeling
mechanism that significantly improves the reinforcement learning process through
inner rewards. We use the OpenAI Gym environment to demonstrate that MARTI
learns on a single CPU in just a few hours.
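The abstract describes the surprise mechanism only at a high level and no code is published. Below is a minimal, hypothetical sketch of the general idea: an inner reward that grows with how unexpected a state is, layered onto a Gym environment. The SurpriseRewardWrapper name, the count-based surprise proxy, the beta weight, the CartPole-v1 stand-in (chosen over Atari to avoid extra dependencies), and the classic pre-0.26 Gym API are all assumptions, not MARTI's actual design.

```python
# Hypothetical sketch of a surprise-driven inner reward, in the spirit of the
# mechanism described above. MARTI's implementation is not published; the
# count-based surprise proxy and all names here are illustrative assumptions.
import gym
import numpy as np
from collections import defaultdict

class SurpriseRewardWrapper(gym.Wrapper):
    """Adds an inner reward proportional to how surprising a state is.

    Surprise is approximated by -log of the empirical visit frequency of a
    coarse-grained observation: rarely seen states yield larger bonuses.
    """

    def __init__(self, env, beta=0.1, bins=10):
        super().__init__(env)
        self.beta = beta              # weight of the inner reward
        self.bins = bins              # discretization resolution
        self.counts = defaultdict(int)
        self.total = 0

    def _key(self, obs):
        # Coarse-grain the observation so visit counts generalize.
        return tuple(np.round(np.asarray(obs) * self.bins).astype(int).ravel())

    def step(self, action):
        # Classic Gym (pre-0.26) step signature, as used around 2022.
        obs, reward, done, info = self.env.step(action)
        key = self._key(obs)
        self.counts[key] += 1
        self.total += 1
        surprise = -np.log(self.counts[key] / self.total)  # high for novel states
        info["surprise"] = surprise
        return obs, reward + self.beta * surprise, done, info

# Usage: a random agent on a lightweight stand-in for an Atari environment.
env = SurpriseRewardWrapper(gym.make("CartPole-v1"))
obs, done = env.reset(), False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
```

Under this proxy the bonus shrinks as a state is revisited, so novel states are rewarded early while the environment's own reward dominates later, which is the usual way inner rewards speed up exploration.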
Related papers
- Orangutan: A Multiscale Brain Emulation-Based Artificial Intelligence Framework for Dynamic Environments [2.8137865669570297]
This paper introduces a novel brain-inspired AI framework, Orangutan.
It simulates the structure and computational mechanisms of biological brains on multiple scales.
The author develops a sensorimotor model that simulates human saccadic eye movements during object observation.
arXiv Detail & Related papers (2024-06-18T01:41:57Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, one such architecture that casts the Common Model of Cognition.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- A bio-inspired implementation of a sparse-learning spike-based hippocampus memory model [0.0]
We propose a novel bio-inspired memory model based on the hippocampus.
It can learn memories, recall them from a cue and even forget memories when trying to learn others with the same cue.
This work presents the first hardware implementation of a fully functional bio-inspired spike-based hippocampus memory model.
arXiv Detail & Related papers (2022-06-10T07:48:29Z)
- Neuromorphic Artificial Intelligence Systems [58.1806704582023]
Modern AI systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the brain.
This article discusses such limitations and the ways they can be mitigated.
It presents an overview of currently available neuromorphic AI projects in which these limitations are overcome.
arXiv Detail & Related papers (2022-05-25T20:16:05Z)
- CogNGen: Constructing the Kernel of a Hyperdimensional Predictive Processing Cognitive Architecture [79.07468367923619]
We present a new cognitive architecture that combines two neurobiologically plausible computational models.
We aim to develop a cognitive architecture that has the power of modern machine learning techniques.
arXiv Detail & Related papers (2022-03-31T04:44:28Z)
- Neurocoder: Learning General-Purpose Computation Using Stored Neural Programs [64.56890245622822]
Neurocoder is an entirely new class of general-purpose conditional computational machines.
It "codes" itself in a data-responsive way by composing relevant programs from a set of shareable, modular programs.
We show new capacity to learn modular programs, handle severe pattern shifts and remember old programs as new ones are learnt.
arXiv Detail & Related papers (2020-09-24T01:39:16Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it gives the tools to the language-ready brain for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
- Brain-inspired Distributed Cognitive Architecture [0.0]
We present a brain-inspired cognitive architecture that incorporates sensory processing, classification, contextual prediction, and emotional tagging.
The research lays the foundations for bio-realistic attention direction and sensory selection, and we believe that it is a key step towards achieving a bio-realistic artificial intelligent system.
arXiv Detail & Related papers (2020-05-18T11:38:32Z)
- Learning as Reinforcement: Applying Principles of Neuroscience for More General Reinforcement Learning Agents [1.0742675209112622]
We implement an architecture founded in principles of experimental neuroscience, by combining computationally efficient abstractions of biological algorithms.
Our approach is inspired by research on spike-timing dependent plasticity, the transition between short and long term memory, and the role of various neurotransmitters in rewarding curiosity.
The Neurons-in-a-Box architecture can learn in a wholly generalizable manner, and demonstrates an efficient way to build and apply representations without explicitly optimizing over a set of criteria or actions.
arXiv Detail & Related papers (2020-04-20T04:06:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.