Control of synaptic plasticity in neural networks
- URL: http://arxiv.org/abs/2303.07273v1
- Date: Fri, 10 Mar 2023 13:36:31 GMT
- Title: Control of synaptic plasticity in neural networks
- Authors: Mohammad Modiri
- Abstract summary: The brain is a nonlinear and highly Recurrent Neural Network (RNN).
The proposed framework involves a new NN-based actor-critic method that simulates the error feedback loop systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The brain is a nonlinear and highly Recurrent Neural Network (RNN). This RNN
is surprisingly plastic and supports our astonishing ability to learn and
execute complex tasks. However, learning is incredibly complicated due to the
brain's nonlinear nature and the obscurity of mechanisms for determining the
contribution of each synapse to the output error. This issue is known as the
Credit Assignment Problem (CAP) and is a fundamental challenge in neuroscience
and Artificial Intelligence (AI). Nevertheless, in the current understanding of
cognitive neuroscience, it is widely accepted that feedback loop systems play
an essential role in synaptic plasticity. With this as inspiration, we propose
a computational model by combining Neural Networks (NN) and nonlinear optimal
control theory. The proposed framework involves a new NN-based actor-critic
method that simulates the error feedback loop systems and their projections
onto the NN's synaptic plasticity so as to ensure that the output error is
minimized.
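To make the abstract's actor-critic framing concrete, the following toy sketch (Python) shows one way an estimated output error can be fed back and projected onto recurrent synapses: a small recurrent network stands in for the brain, a linear critic learns to predict the output error from the hidden state, and that prediction drives a Hebbian-style weight update. The network sizes, learning rates, linear critic, and one-step projection are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "brain": a small recurrent network whose readout should reach a target.
# Sizes, learning rates, the linear critic, and the one-step Hebbian projection
# below are illustrative assumptions, not the paper's formulation.
n, T = 20, 30                        # hidden units, rollout length
W = rng.normal(0.0, 0.3, (n, n))     # recurrent synapses (the plasticity target)
w_out = rng.normal(0.0, 0.3, n)      # fixed linear readout
x0 = rng.normal(0.0, 1.0, n)         # fixed initial state
target = 1.0                         # desired readout value

V = np.zeros(n)                      # critic: linear estimate of the output error
eta_w, eta_v = 1e-2, 1e-2

def rollout(W):
    x = x0.copy()
    for _ in range(T):
        x_prev, x = x, np.tanh(W @ x)
    return x_prev, x, float(w_out @ x)

for step in range(500):
    x_prev, x, y = rollout(W)
    err = y - target                             # output error (feedback signal)
    # Critic: regress the observed error from the hidden state, so the loop can
    # be closed from an internal estimate rather than the raw error alone.
    V += eta_v * (err - V @ x) * x
    # Actor / plasticity: project the critic's error estimate back onto the
    # recurrent synapses through the readout weights (last transition only).
    e_hat = float(V @ x)
    per_unit = e_hat * w_out * (1.0 - x ** 2)    # error signal reaching each unit
    W -= eta_w * np.outer(per_unit, x_prev)

print(f"readout after training: {rollout(W)[2]:.3f} (target {target})")
```

Only the final transition receives credit in this sketch; propagating the error signal through the whole rollout, and doing so stably, is precisely the credit-assignment problem the control-theoretic framework targets.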
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Sphere Neural-Networks for Rational Reasoning [24.591858077975548]
We present a novel extension by generalising computational building blocks from vectors to spheres.
We propose Sphere Neural Networks (SphNNs) for human-like reasoning through model construction and inspection.
SphNNs can evolve into various types of reasoning, such as spatio-temporal reasoning, logical reasoning with negation and disjunction, event reasoning, neuro-symbolic unification, and humour understanding.
arXiv Detail & Related papers (2024-03-22T15:44:59Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Exploiting Noise as a Resource for Computation and Learning in Spiking Neural Networks [32.0086664373154]
This study introduces the noisy spiking neural network (NSNN) and the noise-driven learning rule (NDL).
NSNN provides a theoretical framework that yields scalable, flexible, and reliable computation.
arXiv Detail & Related papers (2023-05-25T13:21:26Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Control of synaptic plasticity via the fusion of reinforcement learning and unsupervised learning in neural networks [0.0]
In cognitive neuroscience, it is widely accepted that synaptic plasticity plays an essential role in our amazing learning capability.
With this inspiration, a new learning rule is proposed via the fusion of reinforcement learning and unsupervised learning.
In the proposed computational model, nonlinear optimal control theory is used to mimic the error feedback loop systems.
arXiv Detail & Related papers (2023-03-26T12:18:03Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Linear Leaky-Integrate-and-Fire Neuron Model Based Spiking Neural Networks and Its Mapping Relationship to Deep Neural Networks [7.840247953745616]
Spiking neural networks (SNNs) are brain-inspired machine learning algorithms with merits such as biological plausibility and unsupervised learning capability.
This paper establishes a precise mathematical mapping between the biological parameters of the Linear Leaky-Integrate-and-Fire (LIF) model/SNNs and the parameters of ReLU-AN/Deep Neural Networks (DNNs); a toy sketch of the rate-to-ReLU intuition behind such mappings appears after this list.
arXiv Detail & Related papers (2022-05-31T17:02:26Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
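The LIF-to-ReLU mapping entry above (the 2022-05-31 paper) rests on a standard rate-coding intuition: an integrate-and-fire neuron driven by a constant input fires at a rate proportional to ReLU(input)/threshold, saturating at one spike per timestep. The sketch below illustrates only that generic intuition under simplified assumptions (no leak, constant current, reset by subtraction); it is not the precise parameter mapping derived in that paper.

```python
import numpy as np

def if_firing_rate(current, v_th=1.0, T=200):
    """Average firing rate of a leak-free integrate-and-fire neuron driven by a
    constant input current for T timesteps (reset by subtraction)."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v = max(v + current, 0.0)   # integrate; crude floor so inhibition does not accumulate
        if v >= v_th:
            spikes += 1
            v -= v_th               # subtractive reset keeps the residual charge
    return spikes / T

for c in np.linspace(-0.5, 1.5, 9):
    rate = if_firing_rate(c)
    relu = min(max(c, 0.0), 1.0)    # ReLU(I)/v_th, clipped at the max rate of 1 spike/step
    print(f"I = {c:+.2f}   spike rate = {rate:.3f}   clipped ReLU = {relu:.3f}")
```

The two printed columns agree over the whole input range, which is what allows rate-coded spiking activity to be related to ReLU activations; the referenced paper makes this kind of correspondence precise for the linear LIF model's biological parameters.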
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.