Control of synaptic plasticity via the fusion of reinforcement learning
and unsupervised learning in neural networks
- URL: http://arxiv.org/abs/2303.14705v1
- Date: Sun, 26 Mar 2023 12:18:03 GMT
- Title: Control of synaptic plasticity via the fusion of reinforcement learning
and unsupervised learning in neural networks
- Authors: Mohammad Modiri
- Abstract summary: In cognitive neuroscience, it is widely accepted that synaptic plasticity plays an essential role in our remarkable learning capability.
With this inspiration, a new learning rule is proposed via the fusion of reinforcement learning and unsupervised learning.
In the proposed computational model, nonlinear optimal control theory is used to model the error feedback loop.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The brain can learn to execute a wide variety of tasks quickly and
efficiently. Nevertheless, most of the mechanisms that enable us to learn remain
unclear or extremely complicated. Recently, considerable efforts have been made
in neuroscience and artificial intelligence to understand and model the
structure and mechanisms behind the brain's remarkable learning capability. In
the current understanding of cognitive neuroscience, it is widely accepted that
synaptic plasticity plays an essential role in this capability. Determining how
each synapse should change so that the network as a whole improves is known as
the Credit Assignment Problem (CAP) and is a fundamental challenge in
neuroscience and Artificial Intelligence (AI). Neuroscientific observations
clearly support the role of two important mechanisms in synaptic plasticity: an
error feedback system and unsupervised learning. With this inspiration, a new
learning rule is proposed via the fusion of reinforcement learning (RL) and
unsupervised learning (UL). In the proposed computational model, nonlinear
optimal control theory is used to model the error feedback loop and to project
the output error onto the neurons' membrane potentials (the neurons' state),
and an unsupervised learning rule based on the membrane potentials, or on
neuronal activity, drives the synaptic plasticity dynamics so that the output
error is minimized.
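To make the described fusion concrete, below is a minimal NumPy sketch of the general idea, not the paper's exact formulation: a simple proportional error feedback loop (standing in for the nonlinear-optimal-control feedback) projects the output error onto the hidden neurons' membrane potentials, and a Hebbian-style unsupervised update driven by the controlled potentials adjusts the synapses. The network sizes, the fixed feedback matrix B, the gain K, and the learning rate eta are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-layer network; sizes, gain, and rate are assumptions.
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0, 0.5, (n_hid, n_in))   # input -> hidden synapses
W2 = rng.normal(0, 0.5, (n_out, n_hid))  # hidden -> output synapses
B  = rng.normal(0, 0.5, (n_hid, n_out))  # fixed feedback pathway (assumed)

eta, K = 0.05, 1.0  # Hebbian learning rate and feedback (control) gain

def step(x, y_target):
    global W1, W2
    # Forward pass: membrane potentials (neuron states) and firing rates.
    v_hid = W1 @ x
    r_hid = np.tanh(v_hid)
    y = W2 @ r_hid

    # Error feedback loop: project the output error back onto the hidden
    # neurons' membrane potentials. A proportional controller stands in
    # for the paper's optimal-control-based feedback here.
    e = y_target - y
    v_ctrl = v_hid + K * (B @ e)
    r_ctrl = np.tanh(v_ctrl)

    # Unsupervised (Hebbian-style) plasticity driven by the controlled
    # neuron state: the weight change follows the state shift the error
    # feedback induced, nudging the output toward the target.
    W1 += eta * np.outer(r_ctrl - r_hid, x)
    W2 += eta * np.outer(e, r_hid)
    return float(e @ e)

x = rng.normal(size=n_in)
y_target = np.array([0.5, -0.3])
for t in range(200):
    loss = step(x, y_target)
print(f"squared error after training: {loss:.6f}")
```

The point of the sketch is the division of labor the abstract describes: the feedback loop turns a global output error into a local neuron-state signal, and the plasticity rule only ever reads local pre- and postsynaptic quantities.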
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z) - Hebbian Learning based Orthogonal Projection for Continual Learning of
Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities (a minimal sketch of this kind of subspace extraction appears after this list).
Our method consistently enables continual learning in spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Control of synaptic plasticity in neural networks [0.0]
The brain is a nonlinear and highly recurrent neural network (RNN).
The proposed framework involves a new NN-based actor-critic method which is used to simulate the error feedback loop.
arXiv Detail & Related papers (2023-03-10T13:36:31Z) - NeuroCERIL: Robotic Imitation Learning via Hierarchical Cause-Effect
Reasoning in Programmable Attractor Neural Networks [2.0646127669654826]
We present NeuroCERIL, a brain-inspired neurocognitive architecture that uses a novel hypothetico-deductive reasoning procedure.
We show that NeuroCERIL can learn various procedural skills in a simulated robotic imitation learning domain.
We conclude that NeuroCERIL is a viable neural model of human-like imitation learning.
arXiv Detail & Related papers (2022-11-11T19:56:11Z) - The least-control principle for learning at equilibrium [65.2998274413952]
We present a new principle for learning that applies to equilibrium recurrent neural networks, deep equilibrium models, and meta-learning.
Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
arXiv Detail & Related papers (2022-07-04T11:27:08Z) - From Biological Synapses to Intelligent Robots [0.0]
Hebbian synaptic learning is discussed as a functionally relevant model for machine learning and intelligence.
The potential for adaptive learning and control without supervision is brought forward.
The insights collected here point toward the Hebbian model as a choice solution for intelligent robotics and sensor systems.
arXiv Detail & Related papers (2022-02-25T12:39:22Z) - Learning by Active Forgetting for Neural Networks [36.47528616276579]
Remembering and forgetting mechanisms are two sides of the same coin in a human learning-memory system.
Modern machine learning systems have been working to endow machines with lifelong learning capability through better remembering.
This paper presents a learning model based on an active forgetting mechanism in artificial neural networks.
arXiv Detail & Related papers (2021-11-21T14:55:03Z)