Training Deep Spiking Auto-encoders without Bursting or Dying Neurons through Regularization
- URL: http://arxiv.org/abs/2109.11045v1
- Date: Wed, 22 Sep 2021 21:27:40 GMT
- Title: Training Deep Spiking Auto-encoders without Bursting or Dying Neurons through Regularization
- Authors: Justus F. Hübotter, Pablo Lanillos, Jakub M. Tomczak
- Abstract summary: Spiking neural networks are a promising approach towards next-generation models of the brain in computational neuroscience.
We apply end-to-end learning with membrane potential-based backpropagation to a spiking convolutional auto-encoder.
We show that applying regularization on membrane potential and spiking output successfully avoids both dead and bursting neurons.
- Score: 9.34612743192798
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking neural networks are a promising approach towards next-generation
models of the brain in computational neuroscience. Moreover, compared to
classic artificial neural networks, they could enable energy-efficient
deployment of AI through fast computation on specialized neuromorphic
hardware. However, training deep spiking neural networks, especially in an
unsupervised manner, is challenging and the performance of a spiking model is
significantly hindered by dead or bursting neurons. Here, we apply end-to-end
learning with membrane potential-based backpropagation to a spiking
convolutional auto-encoder with multiple trainable layers of leaky
integrate-and-fire neurons. We propose bio-inspired regularization methods to
control the spike density in latent representations. In the experiments, we
show that applying regularization on membrane potential and spiking output
successfully avoids both dead and bursting neurons and significantly decreases
the reconstruction error of the spiking auto-encoder. Training regularized
networks on the MNIST dataset yields image reconstruction quality comparable to
non-spiking baseline models (deterministic and variational auto-encoder) and
indicates improvement upon earlier approaches. Importantly, we show that,
unlike the variational auto-encoder, the spiking latent representations display
structure associated with the image class.
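To make the mechanism in the abstract concrete, here is a minimal, hypothetical sketch of a leaky integrate-and-fire (LIF) layer trained with a surrogate gradient for the spike non-linearity, together with regularization terms on the membrane potential and on the spike output. This is not the authors' code: PyTorch, the names (`SurrogateSpike`, `LIFLayer`, `regularization_loss`), the fast-sigmoid surrogate, and all hyperparameters (`beta`, `threshold`, `target_rate`, `lambda_spike`, `lambda_mem`) are assumptions chosen for illustration only.

```python
# Illustrative sketch only -- not the paper's implementation.
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0.0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative (a common choice; the paper's may differ).
        return grad_output / (1.0 + 10.0 * v.abs()) ** 2


class LIFLayer(nn.Module):
    """One trainable convolutional layer of LIF neurons, unrolled over time."""

    def __init__(self, in_ch, out_ch, beta=0.9, threshold=1.0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.beta, self.threshold = beta, threshold

    def forward(self, x_seq):                      # x_seq: (T, B, C, H, W)
        v = torch.zeros_like(self.conv(x_seq[0]))  # membrane potential
        spikes, potentials = [], []
        for x_t in x_seq:
            v = self.beta * v + self.conv(x_t)            # leaky integration of input
            s = SurrogateSpike.apply(v - self.threshold)  # spike where v crosses threshold
            v = v - s * self.threshold                    # soft reset after a spike
            spikes.append(s)
            potentials.append(v)
        return torch.stack(spikes), torch.stack(potentials)


def regularization_loss(spikes, potentials, target_rate=0.1,
                        lambda_spike=1e-3, lambda_mem=1e-3):
    """Penalize silent and bursting units (spike-rate term) and runaway potentials."""
    rate = spikes.mean(dim=0)                        # per-neuron firing rate over time
    spike_term = ((rate - target_rate) ** 2).mean()  # keep rates near a target density
    mem_term = (potentials ** 2).mean()              # discourage extreme potentials
    return lambda_spike * spike_term + lambda_mem * mem_term


# Example usage on toy spike-train input:
layer = LIFLayer(in_ch=1, out_ch=16)
x_seq = (torch.rand(20, 4, 1, 28, 28) < 0.2).float()   # 20 steps of Bernoulli spike input
spikes, potentials = layer(x_seq)
reg = regularization_loss(spikes, potentials)           # added to the reconstruction loss
reg.backward()
```

In a full spiking auto-encoder the regularization term would be added to the reconstruction loss; pushing firing rates toward a non-zero target and bounding potentials is one plausible way to keep latent units from going silent (dead) or saturating (bursting), in the spirit of the regularizers described above.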
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z)
- Toward Neuromic Computing: Neurons as Autoencoders [0.0]
This paper presents the idea that neural backpropagation is using dendritic processing to enable individual neurons to perform autoencoding.
Using a very simple connection weight search and artificial neural network model, the effects of interleaving autoencoding for each neuron in a hidden layer of a feedforward network are explored.
arXiv Detail & Related papers (2024-03-04T18:58:09Z)
- Expressivity of Spiking Neural Networks [15.181458163440634]
We study the capabilities of spiking neural networks where information is encoded in the firing time of neurons.
In contrast to ReLU networks, we prove that spiking neural networks can realize both continuous and discontinuous functions.
arXiv Detail & Related papers (2023-08-16T08:45:53Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Surrogate Gradient Spiking Neural Networks as Encoders for Large Vocabulary Continuous Speech Recognition [91.39701446828144]
We show that spiking neural networks can be trained like standard recurrent neural networks using the surrogate gradient method.
They have shown promising results on speech command recognition tasks.
In contrast to their recurrent non-spiking counterparts, they show robustness to exploding gradient problems without the need to use gates.
arXiv Detail & Related papers (2022-12-01T12:36:26Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV plays as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
- Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks [0.9790524827475205]
We show how a novel type of adaptive spiking recurrent neural network (SRNN) is able to achieve state-of-the-art performance.
We calculate a >100x energy improvement for our SRNNs over classical RNNs on the harder tasks.
arXiv Detail & Related papers (2020-05-24T01:04:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.