From internal models toward metacognitive AI
- URL: http://arxiv.org/abs/2109.12798v1
- Date: Mon, 27 Sep 2021 05:00:56 GMT
- Title: From internal models toward metacognitive AI
- Authors: Mitsuo Kawato (ATR), Aurelio Cortese (ATR/RIKEN)
- Abstract summary: In the prefrontal cortex, a distributed executive network called the "cognitive reality monitoring network" orchestrates conscious involvement of generative-inverse model pairs.
A high responsibility signal is given to the pairs that best capture the external world.
Consciousness is determined by the entropy of responsibility signals across all pairs.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In several papers published in Biological Cybernetics in the 1980s and 1990s,
Kawato and colleagues proposed computational models explaining how internal
models are acquired in the cerebellum. These models were later supported by
neurophysiological experiments using monkeys and neuroimaging experiments
involving humans. These early studies influenced neuroscience from basic,
sensory-motor control to higher cognitive functions. One of the most perplexing
enigmas related to internal models is to understand the neural mechanisms that
enable animals to learn large-dimensional problems with so few trials.
Consciousness and metacognition -- the ability to monitor one's own thoughts --
may be part of the solution to this enigma. Based on literature reviews of the
past 20 years, here we propose a computational neuroscience model of
metacognition. The model comprises a modular hierarchical
reinforcement-learning architecture of parallel and layered generative-inverse
model pairs. In the prefrontal cortex, a distributed executive network called
the "cognitive reality monitoring network" (CRMN) orchestrates conscious
involvement of generative-inverse model pairs in perception and action. Based
on mismatches between computations by generative and inverse models, as well as
reward prediction errors, CRMN computes a "responsibility signal" that gates
selection and learning of pairs in perception, action, and reinforcement
learning. A high responsibility signal is given to the pairs that best capture
the external world, that are competent in movements (small mismatch), and that
are capable of reinforcement learning (small reward prediction error). CRMN
selects pairs with higher responsibility signals as objects of metacognition,
and consciousness is determined by the entropy of responsibility signals across
all pairs.
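To make the gating mechanism above concrete, here is a minimal Python sketch of how a responsibility signal and its entropy could be computed across generative-inverse model pairs. The softmax form, the additive combination of mismatch and reward prediction error, and the temperature parameter `beta` are assumptions in the spirit of MOSAIC-style mixture architectures; the abstract does not specify the exact functional form.

```python
import numpy as np

def responsibility_signals(mismatches, reward_prediction_errors, beta=1.0):
    """Soft competition among generative-inverse model pairs.

    Pairs with small generative/inverse mismatch and small reward
    prediction error receive high responsibility (hypothetical
    softmax form; a MOSAIC-style assumption, not the paper's spec).
    """
    errors = (np.asarray(mismatches, dtype=float)
              + np.asarray(reward_prediction_errors, dtype=float))
    logits = -beta * errors
    logits -= logits.max()          # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()  # responsibilities sum to 1

def responsibility_entropy(r, eps=1e-12):
    """Entropy of responsibilities across all pairs: low when one pair
    dominates, high when responsibility is spread diffusely."""
    r = np.clip(r, eps, 1.0)
    return float(-(r * np.log(r)).sum())

# Toy usage with three candidate pairs
mismatch = [0.2, 1.5, 3.0]  # generative vs. inverse computation mismatch
rpe = [0.1, 0.8, 2.0]       # reward prediction errors
r = responsibility_signals(mismatch, rpe)
print(r.round(3), round(responsibility_entropy(r), 3))
```

On this reading, the CRMN would select the pair with the highest responsibility as the object of metacognition, while the entropy summarizes how decisively responsibility is concentrated across pairs.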
Related papers
- Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representation of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the AN sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z)
- Neural Dynamics Model of Visual Decision-Making: Learning from Human Experts [28.340344705437758]
We implement a comprehensive visual decision-making model that spans from visual input to behavioral output.
Our model aligns closely with human behavior and reflects neural activities in primates.
A neuroimaging-informed fine-tuning approach was introduced and applied to the model, leading to performance improvements.
arXiv Detail & Related papers (2024-09-04T02:38:52Z)
- Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in neural terms.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Control of synaptic plasticity via the fusion of reinforcement learning and unsupervised learning in neural networks [0.0]
In cognitive neuroscience, it is widely accepted that synaptic plasticity plays an essential role in our amazing learning capability.
With this inspiration, a new learning rule is proposed via the fusion of reinforcement learning and unsupervised learning.
In the proposed computational model, nonlinear optimal control theory is used to mimic error feedback loop systems.
arXiv Detail & Related papers (2023-03-26T12:18:03Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of spontaneous behaviors produced by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Explanatory models in neuroscience: Part 2 -- constraint-based intelligibility [8.477619837043214]
Computational modeling plays an increasingly important role in neuroscience, highlighting the philosophical question of how models explain.
In biological systems, many of these dependencies are naturally "top-down".
We show how the optimization techniques used to construct NN models capture some key aspects of these dependencies.
arXiv Detail & Related papers (2021-04-03T22:14:01Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions match reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- A Neural Dynamic Model based on Activation Diffusion and a Micro-Explanation for Cognitive Operations [4.416484585765028]
The neural mechanism of memory is closely related to the problem of representation in artificial intelligence.
A computational model was proposed to simulate the network of neurons in the brain and how they process information.
arXiv Detail & Related papers (2020-11-27T01:34:08Z)