A Spiking Neuron Synaptic Plasticity Model Optimized for Unsupervised
Learning
- URL: http://arxiv.org/abs/2111.06768v1
- Date: Fri, 12 Nov 2021 15:26:52 GMT
- Title: A Spiking Neuron Synaptic Plasticity Model Optimized for Unsupervised
Learning
- Authors: Mikhail Kiselev
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Spiking neural networks (SNN) are considered a promising basis for
performing all kinds of learning tasks - unsupervised, supervised and
reinforcement learning. Learning in SNN is implemented through synaptic
plasticity - the rules that determine the dynamics of synaptic weights,
usually depending on the activity of the pre- and post-synaptic neurons. The
diversity of learning regimes suggests that different forms of synaptic
plasticity may be most efficient for, for example, unsupervised versus
supervised learning, as observed in living neurons, which demonstrate many
kinds of deviations from the basic spike-timing-dependent plasticity (STDP)
model. In the present paper, we formulate the specific requirements that
unsupervised learning problems impose on plasticity rules and construct a
novel plasticity model that generalizes STDP and satisfies these requirements.
This plasticity model serves as the main logical component of the novel
unsupervised learning algorithm SCoBUL (Spike Correlation Based Unsupervised
Learning) proposed in this work. We also present the results of computer
simulation experiments confirming the efficiency of these synaptic plasticity
rules and of the SCoBUL algorithm.
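For reference, the basic pair-based STDP model that the proposed plasticity rule generalizes can be sketched as follows. This is only an illustrative sketch with hypothetical parameter values (a_plus, a_minus, tau_plus, tau_minus are not taken from the paper), not the SCoBUL rule itself:

```python
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                 tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair (times in ms).

    Pair-based STDP: if the presynaptic spike precedes the postsynaptic
    spike (dt >= 0), the synapse is potentiated; otherwise it is
    depressed. Both effects decay exponentially with |dt|.
    Parameter values here are illustrative, not from the paper.
    """
    dt = t_post - t_pre
    if dt >= 0:
        # Pre before post: potentiation, strongest for small dt
        return a_plus * math.exp(-dt / tau_plus)
    else:
        # Post before pre: depression, strongest for small |dt|
        return -a_minus * math.exp(dt / tau_minus)
```

For example, a pre-spike 10 ms before a post-spike yields a small positive weight change, while the reverse ordering yields a slightly larger negative one (since a_minus > a_plus in this sketch).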
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- On the Trade-off Between Efficiency and Precision of Neural Abstraction [62.046646433536104]
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z)
- Unsupervised Spiking Neural Network Model of Prefrontal Cortex to study Task Switching with Synaptic deficiency [0.0]
We build a computational model of the Prefrontal Cortex (PFC) using Spiking Neural Networks (SNNs).
In this study, we use SNNs with parameters close to biologically plausible values and train the model using the unsupervised Spike-Timing-Dependent Plasticity (STDP) learning rule.
arXiv Detail & Related papers (2023-05-23T05:59:54Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Spike-based local synaptic plasticity: A survey of computational models and neuromorphic circuits [1.8464222520424338]
We review historical, bottom-up, and top-down approaches to modeling synaptic plasticity.
We identify computational primitives that can support low-latency and low-power hardware implementations of spike-based learning rules.
arXiv Detail & Related papers (2022-09-30T15:35:04Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Constructing Neural Network-Based Models for Simulating Dynamical Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- SpikePropamine: Differentiable Plasticity in Spiking Neural Networks [0.0]
We introduce a framework for learning the dynamics of synaptic plasticity and neuromodulated synaptic plasticity in Spiking Neural Networks (SNNs).
We show that SNNs augmented with differentiable plasticity are sufficient for solving a set of challenging temporal learning tasks.
These networks are also shown to be capable of producing locomotion on a high-dimensional robotic learning task.
arXiv Detail & Related papers (2021-06-04T19:29:07Z)
- Unveiling the role of plasticity rules in reservoir computing [0.0]
Reservoir Computing (RC) is an appealing approach in Machine Learning.
We analyze the role that plasticity rules play in the changes that lead to better performance of RC.
arXiv Detail & Related papers (2021-01-14T19:55:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.