Training Spiking Neural Networks Using Lessons From Deep Learning
- URL: http://arxiv.org/abs/2109.12894v6
- Date: Sun, 13 Aug 2023 04:51:16 GMT
- Title: Training Spiking Neural Networks Using Lessons From Deep Learning
- Authors: Jason K. Eshraghian and Max Ward and Emre Neftci and Xinxin Wang and
Gregor Lenz and Girish Dwivedi and Mohammed Bennamoun and Doo Seok Jeong and
Wei D. Lu
- Abstract summary: The inner workings of our synapses and neurons provide a glimpse at what the future of deep learning might look like.
Some ideas are well accepted and commonly used amongst the neuromorphic engineering community, while others are presented or justified for the first time here.
A series of companion interactive tutorials complementary to this paper, using our Python package snnTorch, is also made available.
- Score: 28.827506468167652
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The brain is the perfect place to look for inspiration to develop more
efficient neural networks. The inner workings of our synapses and neurons
provide a glimpse at what the future of deep learning might look like. This
paper serves as a tutorial and perspective showing how to apply the lessons
learnt from several decades of research in deep learning, gradient descent,
backpropagation and neuroscience to biologically plausible spiking neural
networks.
We also explore the delicate interplay between encoding data as spikes and
the learning process; the challenges and solutions of applying gradient-based
learning to spiking neural networks (SNNs); the subtle link between temporal
backpropagation and spike timing dependent plasticity; and how deep learning
might move towards biologically plausible online learning. Some ideas are well
accepted and commonly used amongst the neuromorphic engineering community,
while others are presented or justified for the first time here.
The fields of deep learning and spiking neural networks evolve very rapidly.
We endeavour to treat this document as a 'dynamic' manuscript that will
continue to be updated as the common practices in training SNNs also change.
A series of companion interactive tutorials complementary to this paper,
using our Python package snnTorch, is also made available. See
https://snntorch.readthedocs.io/en/latest/tutorials/index.html .
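As a concrete taste of what the tutorials cover, the sketch below trains a two-layer spiking network with surrogate gradients in snnTorch; the architecture, hyperparameters, and random data are illustrative placeholders, not values prescribed by the paper.

```python
# Minimal surrogate-gradient SNN in the style of the snnTorch tutorials.
# Layer sizes, beta, and the random data are illustrative placeholders.
import torch
import torch.nn as nn
import snntorch as snn
from snntorch import surrogate

num_steps = 25  # number of simulation time steps

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        spike_grad = surrogate.fast_sigmoid()  # smooths the spike's gradient
        self.fc1 = nn.Linear(784, 100)
        self.lif1 = snn.Leaky(beta=0.9, spike_grad=spike_grad)
        self.fc2 = nn.Linear(100, 10)
        self.lif2 = snn.Leaky(beta=0.9, spike_grad=spike_grad)

    def forward(self, x):
        mem1 = self.lif1.init_leaky()
        mem2 = self.lif2.init_leaky()
        spikes = []
        for _ in range(num_steps):               # unroll the network in time
            spk1, mem1 = self.lif1(self.fc1(x), mem1)
            spk2, mem2 = self.lif2(self.fc2(spk1), mem2)
            spikes.append(spk2)
        return torch.stack(spikes)               # [time, batch, classes]

net = Net()
out = net(torch.rand(32, 784))
loss = nn.functional.cross_entropy(out.sum(0),  # spike counts as class scores
                                   torch.randint(0, 10, (32,)))
loss.backward()  # backpropagation through time via the surrogate gradients
```

The non-differentiable Heaviside spike is replaced by a smooth surrogate on the backward pass only, which is the central trick the paper discusses for applying gradient-based learning to SNNs.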
Related papers
- Hebbian Learning based Orthogonal Projection for Continual Learning of
Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method enables continual learning in spiking neural networks with nearly zero forgetting.
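For intuition about how Hebbian and anti-Hebbian terms can extract a principal subspace, here is a classical sketch using Oja's subspace rule; it is a textbook illustration, not the paper's orthogonal-projection method.

```python
# Oja's subspace rule: a Hebbian term (outer(y, x)) plus an anti-Hebbian
# decorrelating term (-outer(y, y) @ W). Classical illustration only; the
# paper's continual-learning method builds on, but differs from, this.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 8)) @ rng.normal(size=(8, 8))  # correlated inputs
W = 0.1 * rng.normal(size=(3, 8))   # 3 output neurons, 8 inputs
eta = 1e-3

for x in X:
    y = W @ x
    W += eta * (np.outer(y, x) - np.outer(y, y) @ W)

# Rows of W now approximately form an orthonormal basis of the
# top-3 principal subspace of the input covariance.
print(np.round(W @ W.T, 2))  # close to the 3x3 identity
```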
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
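One plausible realization of such a framework, sketched below with invented sizes and a toy sine target, is to read out the membrane potential of a non-firing leaky integrator and train it with a mean-squared error loss.

```python
# Hedged sketch of SNN regression: a spiking hidden layer feeds a leaky
# integrator readout whose membrane potential is the prediction; trained
# with MSE. Sizes and the toy sine target are illustrative assumptions.
import torch
import torch.nn as nn
import snntorch as snn
from snntorch import surrogate

fc1 = nn.Linear(1, 64)
lif1 = snn.Leaky(beta=0.9, spike_grad=surrogate.fast_sigmoid())
fc2 = nn.Linear(64, 1)
readout = snn.Leaky(beta=0.9, reset_mechanism="none")  # integrates, no reset

def predict(x_seq):                       # x_seq: [time, batch, 1]
    mem1, mem2 = lif1.init_leaky(), readout.init_leaky()
    preds = []
    for x in x_seq:
        spk, mem1 = lif1(fc1(x), mem1)
        _, mem2 = readout(fc2(spk), mem2)
        preds.append(mem2)                # membrane potential as output
    return torch.stack(preds)

t = torch.linspace(0, 1, 50).view(50, 1, 1)
loss = nn.functional.mse_loss(predict(t), torch.sin(6.28 * t))
loss.backward()  # gradients flow through membrane dynamics and spikes
```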
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Learning to learn online with neuromodulated synaptic plasticity in spiking neural networks [0.0]
We show that models of neuromodulated synaptic plasticity from neuroscience can be trained to learn through gradient descent.
This framework opens a new path toward developing neuroscience inspired online learning algorithms.
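The general recipe, in the spirit of differentiable plasticity, is to unroll a plasticity rule in an inner loop and backpropagate through it to train the rule's parameters; the toy sketch below assumes a three-factor (neuromodulated) Hebbian rule, with the task, shapes, and names invented for illustration.

```python
# Toy sketch: meta-learn a neuromodulated Hebbian rule by gradient descent.
# Inner loop applies the rule; outer loop trains its parameters. The task,
# shapes, and module names are invented placeholders, not the paper's model.
import torch

n_in, n_out = 4, 2
w = torch.nn.Parameter(0.1 * torch.randn(n_in, n_out))  # slow weights
alpha = torch.nn.Parameter(torch.zeros(n_in, n_out))    # per-synapse plasticity
mod = torch.nn.Linear(n_in, 1)                          # neuromodulatory signal
opt = torch.optim.Adam([w, alpha, *mod.parameters()], lr=1e-2)

for step in range(200):
    hebb = torch.zeros(n_in, n_out)      # fast weights reset each episode
    x_seq = torch.randn(10, n_in)
    for x in x_seq:                      # inner loop: run the plasticity rule
        y = x @ (w + alpha * hebb)       # slow weights + gated fast weights
        m = torch.sigmoid(mod(x))        # third factor: neuromodulation
        hebb = hebb + m * torch.outer(x, y)  # modulated Hebbian trace
    loss = ((y - x[:n_out]) ** 2).mean() # placeholder objective
    opt.zero_grad()
    loss.backward()                      # gradient descent *through* the rule
    opt.step()
```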
arXiv Detail & Related papers (2022-06-25T00:28:40Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine the interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- How and what to learn: The modes of machine learning [7.085027463060304]
We propose a new approach, namely weight pathway analysis (WPA), to study the mechanisms of multilayer neural networks.
WPA shows that a neural network stores and utilizes information in a "holographic" way, that is, the network encodes all training samples in a coherent structure.
It is found that hidden-layer neurons self-organize into different classes in the later stages of the learning process.
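The core object is easy to state: in a two-layer network, the pathway from input i through hidden unit j to output k carries weight W1[j, i] * W2[k, j], and summing pathways over j recovers the effective linear map. A minimal sketch follows; the paper's full analysis goes well beyond this.

```python
# Minimal "weight pathway" sketch for a two-layer network: each pathway
# i -> j -> k carries weight W1[j, i] * W2[k, j]; summing over the hidden
# index j recomposes the effective linear map. Toy illustration only.
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(5, 3))   # input (3) -> hidden (5)
W2 = rng.normal(size=(2, 5))   # hidden (5) -> output (2)

pathway = W2[:, :, None] * W1[None, :, :]   # pathway[k, j, i]
effective = pathway.sum(axis=1)             # sum over hidden neurons j

assert np.allclose(effective, W2 @ W1)      # pathways recompose the map
print(pathway[0, :, 0])  # how input 0 reaches output 0, per hidden neuron
```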
arXiv Detail & Related papers (2022-02-28T14:39:06Z)
- Ten Quick Tips for Deep Learning in Biology [116.78436313026478]
Machine learning is concerned with the development and applications of algorithms that can recognize patterns in data and use them for predictive modeling.
Deep learning has become its own subfield of machine learning.
In the context of biological research, deep learning has been increasingly used to derive novel insights from high-dimensional biological data.
arXiv Detail & Related papers (2021-05-29T21:02:44Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
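One simple way to endow a network with such variability is to train it under small random weight perturbations; the sketch below shows generic weight-noise injection and is only an assumption-laden stand-in for the paper's actual procedure.

```python
# Generic weight-noise injection as a stand-in for ANV-style variability:
# perturb the weights, take a gradient step at the perturbed point, then
# restore. The noise scale and the model are hypothetical placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
sigma = 0.01  # hypothetical noise scale

x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))
for _ in range(100):
    noise = []
    with torch.no_grad():                # perturb weights before the pass
        for p in model.parameters():
            n = sigma * torch.randn_like(p)
            p.add_(n)
            noise.append(n)
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()                      # gradients at the perturbed weights
    with torch.no_grad():                # restore the clean weights
        for p, n in zip(model.parameters(), noise):
            p.sub_(n)
    opt.step()
```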
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
- Applications of Deep Neural Networks with Keras [0.0]
Deep learning allows a neural network to learn hierarchies of information in a way that resembles the function of the human brain.
This course will introduce the student to classic neural network structures: Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Generative Adversarial Networks (GAN).
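For flavour, a minimal Keras model of the kind such a course starts with; the input shape, layer sizes, and optimizer settings are illustrative placeholders.

```python
# A minimal Keras CNN of the kind the course introduces first; the input
# shape, layer sizes, and optimizer settings are illustrative placeholders.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),          # e.g. grayscale digit images
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),  # 10-class output
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```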
arXiv Detail & Related papers (2020-09-11T22:09:10Z)
- Towards Understanding Hierarchical Learning: Benefits of Neural Representations [160.33479656108926]
In this work, we demonstrate that intermediate neural representations add more flexibility to neural networks.
We show that learned neural representations can achieve improved sample complexity compared with the raw input.
Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning.
arXiv Detail & Related papers (2020-06-24T02:44:54Z)
- Deep learning approaches for neural decoding: from CNNs to LSTMs and spikes to fMRI [2.0178765779788495]
Decoding behavior, perception, or cognitive state directly from neural signals has applications in brain-computer interface research.
In the last decade, deep learning has become the state-of-the-art method in many machine learning tasks.
Deep learning has been shown to be a useful tool for improving the accuracy and flexibility of neural decoding across a wide range of tasks.
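A representative decoder of the kind surveyed there maps binned spike counts to a behavioural variable with a recurrent network; the sketch below assumes an LSTM decoder with invented sizes and synthetic Poisson data.

```python
# Hedged sketch of a typical deep-learning neural decoder: an LSTM maps
# binned spike counts to a continuous behavioural variable. All sizes and
# the synthetic Poisson data are invented for illustration.
import torch
import torch.nn as nn

class Decoder(nn.Module):
    def __init__(self, n_neurons, hidden=64, out_dim=2):
        super().__init__()
        self.lstm = nn.LSTM(n_neurons, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, out_dim)

    def forward(self, spikes):            # spikes: [batch, time, neurons]
        h, _ = self.lstm(spikes)
        return self.readout(h[:, -1])     # decode from the last time bin

dec = Decoder(n_neurons=50)
rates = torch.full((8, 20, 50), 2.0)      # 8 trials, 20 bins, 50 neurons
spikes = torch.poisson(rates)             # synthetic spike counts
velocity = dec(spikes)                    # e.g. 2-D cursor velocity
```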
arXiv Detail & Related papers (2020-05-19T18:10:35Z)