Multi-Sample Online Learning for Spiking Neural Networks based on Generalized Expectation Maximization
- URL: http://arxiv.org/abs/2102.03280v1
- Date: Fri, 5 Feb 2021 16:39:42 GMT
- Authors: Hyeryung Jang and Osvaldo Simeone
- Abstract summary: Spiking Neural Networks (SNNs) capture some of the efficiency of biological brains by processing through binary neural dynamic activations.
This paper proposes to leverage multiple compartments that sample independent spiking signals while sharing synaptic weights.
The key idea is to use these signals to obtain more accurate statistical estimates of the log-likelihood training criterion, as well as of its gradient.
- Score: 42.125394498649015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) offer a novel computational paradigm that
captures some of the efficiency of biological brains by processing through
binary neural dynamic activations. Probabilistic SNN models are typically
trained to maximize the likelihood of the desired outputs by using unbiased
estimates of the log-likelihood gradients. While prior work used single-sample
estimators obtained from a single run of the network, this paper proposes to
leverage multiple compartments that sample independent spiking signals while
sharing synaptic weights. The key idea is to use these signals to obtain more
accurate statistical estimates of the log-likelihood training criterion, as
well as of its gradient. The approach is based on generalized
expectation-maximization (GEM), which optimizes a tighter approximation of the
log-likelihood using importance sampling. The derived online learning algorithm
implements a three-factor rule with global per-compartment learning signals.
Experimental results on a classification task on the neuromorphic MNIST-DVS
data set demonstrate significant improvements in terms of log-likelihood,
accuracy, and calibration when increasing the number of compartments used for
training and inference.
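To make the multi-compartment idea concrete, here is a minimal NumPy sketch of the two quantities the GEM rule is built from: a log-mean-exp estimate of the log-likelihood over K compartments, and the softmax weights that serve as the global per-compartment learning signals. All names and values are illustrative assumptions, not the authors' code.

    import numpy as np

    def gem_log_likelihood_estimate(log_p_x_given_h):
        # log_p_x_given_h has shape (K,): the log-probability of the observed
        # output spikes x under the hidden sample h_k drawn by compartment k.
        # Returns log((1/K) * sum_k p(x | h_k)), a multi-sample estimate of
        # log p(x), computed with the log-sum-exp trick for stability.
        m = np.max(log_p_x_given_h)
        return m + np.log(np.mean(np.exp(log_p_x_given_h - m)))

    def compartment_learning_signals(log_p_x_given_h):
        # Softmax over compartments: the samples that best explain the
        # observed output receive the largest global learning signal.
        w = np.exp(log_p_x_given_h - np.max(log_p_x_given_h))
        return w / w.sum()

    # Dummy per-compartment log-likelihoods for K = 3 compartments.
    log_liks = np.array([-12.3, -10.1, -11.7])
    print(gem_log_likelihood_estimate(log_liks))
    print(compartment_learning_signals(log_liks))

In a three-factor reading, each synapse would then combine its local eligibility trace e_k in compartment k with the global signal w[k], giving an update proportional to sum_k w[k] * e_k.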
Related papers
- YOSO: You-Only-Sample-Once via Compressed Sensing for Graph Neural Network Training [9.02251811867533]
YOSO (You-Only-Sample-Once) is an algorithm designed to achieve efficient training while preserving prediction accuracy.
YOSO not only avoids costly computations in traditional compressed sensing (CS) methods, such as orthonormal basis calculations, but also ensures high-probability accuracy retention.
arXiv Detail & Related papers (2024-11-08T16:47:51Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
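As a quick, generic illustration of the snnTorch package mentioned in this entry (plain PyTorch here, not the IPU-optimized release; layer sizes, beta, and the number of time steps are arbitrary assumptions):

    import torch
    import snntorch as snn

    fc = torch.nn.Linear(784, 10)      # dense layer feeding the spiking neurons
    lif = snn.Leaky(beta=0.9)          # leaky integrate-and-fire, membrane decay 0.9

    mem = lif.init_leaky()             # initialize membrane potential
    x = torch.rand(32, 784)            # dummy batch of rate-coded inputs

    spk_rec = []
    for _ in range(25):                # unroll the network over 25 time steps
        spk, mem = lif(fc(x), mem)     # emitted spikes and updated membrane state
        spk_rec.append(spk)

    spike_counts = torch.stack(spk_rec).sum(dim=0)   # per-class spike counts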
- Censored Quantile Regression Neural Networks [24.118509578363593]
This paper considers quantile regression on censored data using neural networks (NNs).
We show how an algorithm popular in linear models can be applied to NNs.
Our major contribution is a novel algorithm that simultaneously optimises a grid of quantiles output by a single NN.
arXiv Detail & Related papers (2022-05-26T17:10:28Z)
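The paper's censoring-aware algorithm is not reproduced here, but the building block it extends, a single network that outputs a whole grid of quantiles trained with the pinball loss, can be sketched as follows (layer sizes, the quantile grid, and the random data are assumptions; censoring is omitted):

    import torch

    def pinball_loss(pred, target, taus):
        # pred: (batch, Q) predicted quantiles; target: (batch,); taus: (Q,).
        # The pinball loss penalizes under- and over-prediction asymmetrically,
        # which drives each output head toward its quantile level tau.
        diff = target.unsqueeze(1) - pred
        return torch.mean(torch.maximum(taus * diff, (taus - 1) * diff))

    taus = torch.linspace(0.05, 0.95, 19)            # grid of 19 quantile levels
    net = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, taus.numel()))
    x, y = torch.randn(128, 8), torch.randn(128)
    loss = pinball_loss(net(x), y, taus)             # one loss over all quantiles
    loss.backward()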
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
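The implicit (fixed-point) case analyzed in the paper is more involved, but the flavor of interval reachability is easy to see on a feedforward ReLU network. A minimal sketch, assuming an l-infinity input ball and arbitrary weights:

    import numpy as np

    def interval_affine(lo, hi, W, b):
        # Propagate the box [lo, hi] through x -> W @ x + b with interval
        # arithmetic: positive weights bind to like bounds, negative ones swap them.
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

    def interval_relu(lo, hi):
        # ReLU is monotone, so it maps interval bounds elementwise.
        return np.maximum(lo, 0), np.maximum(hi, 0)

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)
    W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)
    x, eps = rng.normal(size=4), 0.1
    lo, hi = interval_affine(x - eps, x + eps, W1, b1)
    lo, hi = interval_relu(lo, hi)
    lo, hi = interval_affine(lo, hi, W2, b2)   # certified bounds on every output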
- Spiking Generative Adversarial Networks With a Neural Network Discriminator: Local Training, Bayesian Models, and Continual Meta-Learning [31.78005607111787]
Training neural networks to reproduce spiking patterns is a central problem in neuromorphic computing.
This work proposes to train SNNs so as to match distributions of spiking signals rather than individual spiking signals.
arXiv Detail & Related papers (2021-11-02T17:20:54Z)
- PAC-Bayesian Learning of Aggregated Binary Activated Neural Networks with Probabilities over Representations [2.047424180164312]
We study the expectation of a probabilistic neural network as a predictor by itself, focusing on the aggregation of binary activated neural networks with normal distributions over real-valued weights.
We show that the exact computation remains tractable for deep but narrow neural networks, thanks to a dynamic programming approach.
arXiv Detail & Related papers (2021-10-28T14:11:07Z)
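A hint at why exact computation can remain tractable: for a single binary (sign) activated neuron with isotropic Gaussian weights, the expected activation has a closed form, since w @ x is itself Gaussian. A sketch of that base case (the paper's dynamic program over deep, narrow networks aggregates quantities like this; the function is illustrative, not the authors' code):

    import numpy as np
    from scipy.special import erf

    def expected_sign_activation(mu, sigma, x):
        # With w ~ N(mu, sigma^2 I), w @ x ~ N(mu @ x, sigma^2 ||x||^2), so
        # E[sign(w @ x)] = erf( (mu @ x) / (sigma * sqrt(2) * ||x||) ).
        return erf(mu @ x / (sigma * np.sqrt(2) * np.linalg.norm(x)))

    mu, x = np.array([0.5, -0.2, 0.1]), np.array([1.0, 2.0, -1.0])
    print(expected_sign_activation(mu, sigma=1.0, x=x))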
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Multi-Sample Online Learning for Probabilistic Spiking Neural Networks [43.8805663900608]
Spiking Neural Networks (SNNs) capture some of the efficiency of biological brains for inference and learning.
This paper introduces an online learning rule based on generalized expectation-maximization (GEM).
Experimental results on structured output memorization and classification on a standard neuromorphic data set demonstrate significant improvements in terms of log-likelihood, accuracy, and calibration.
arXiv Detail & Related papers (2020-07-23T10:03:58Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
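The BP-Layer runs truncated max-product on image grids with learned costs; its core message update is easiest to see in negative-log (min-sum) form on a chain. A generic Viterbi-style sketch, not the paper's layer:

    import numpy as np

    def max_product_chain(unary, pairwise):
        # Max-product BP on a chain MRF in negative-log (min-sum) form.
        # unary: (N, L) per-node label costs; pairwise: (L, L) transition costs.
        N, L = unary.shape
        msg = np.zeros(L)
        back = np.zeros((N, L), dtype=int)
        for i in range(1, N):
            cand = (unary[i - 1] + msg)[:, None] + pairwise   # (prev, cur)
            back[i] = np.argmin(cand, axis=0)                 # best predecessor
            msg = np.min(cand, axis=0)                        # message into node i
        labels = np.zeros(N, dtype=int)
        labels[-1] = np.argmin(unary[-1] + msg)
        for i in range(N - 1, 0, -1):                         # backtrack
            labels[i - 1] = back[i][labels[i]]
        return labels

    # Smoothness prior: cost grows with label difference (values assumed).
    pairwise = 0.5 * np.abs(np.arange(3)[:, None] - np.arange(3)[None, :])
    print(max_product_chain(np.random.rand(5, 3), pairwise))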