Multi-Sample Online Learning for Probabilistic Spiking Neural Networks
- URL: http://arxiv.org/abs/2007.11894v2
- Date: Tue, 5 Jan 2021 11:44:25 GMT
- Title: Multi-Sample Online Learning for Probabilistic Spiking Neural Networks
- Authors: Hyeryung Jang and Osvaldo Simeone
- Abstract summary: Spiking Neural Networks (SNNs) capture some of the efficiency of biological brains for inference and learning.
This paper introduces an online learning rule based on generalized expectation-maximization (GEM).
Experimental results on structured output memorization and classification on a standard neuromorphic data set demonstrate significant improvements in terms of log-likelihood, accuracy, and calibration.
- Score: 43.8805663900608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) capture some of the efficiency of biological
brains for inference and learning via the dynamic, online, event-driven
processing of binary time series. Most existing learning algorithms for SNNs
are based on deterministic neuronal models, such as leaky integrate-and-fire,
and rely on heuristic approximations of backpropagation through time that
enforce constraints such as locality. In contrast, probabilistic SNN models can
be trained directly via principled online, local, update rules that have proven
to be particularly effective for resource-constrained systems. This paper
investigates another advantage of probabilistic SNNs, namely their capacity to
generate independent outputs when queried over the same input. It is shown that
the multiple generated output samples can be used during inference to robustify
decisions and to quantify uncertainty -- a feature that deterministic SNN
models cannot provide. Furthermore, they can be leveraged for training in order
to obtain more accurate statistical estimates of the log-loss training
criterion, as well as of its gradient. Specifically, this paper introduces an
online learning rule based on generalized expectation-maximization (GEM) that
follows a three-factor form with global learning signals and is referred to as
GEM-SNN. Experimental results on structured output memorization and
classification on a standard neuromorphic data set demonstrate significant
improvements in terms of log-likelihood, accuracy, and calibration when
increasing the number of samples used for inference and training.
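To make the multi-sample idea concrete, here is a minimal sketch, not the paper's GEM-SNN implementation: it assumes a Bernoulli (GLM) readout layer with sigmoid firing probabilities, majority-votes K independent output samples to robustify the decision, scores uncertainty by predictive entropy, and estimates the log-loss with a numerically stable log-mean-exp over per-sample log-probabilities. All names and shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def firing_probs(x, W, b):
    """Sigmoid firing probability of each Bernoulli readout neuron (illustrative GLM)."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

def multi_sample_inference(x, W, b, K=20):
    """Draw K independent output samples, majority-vote the decision, and
    report the mean binary entropy of the firing frequencies as uncertainty."""
    p = firing_probs(x, W, b)              # (num_classes,)
    samples = rng.random((K, p.size)) < p  # K independent binary spike draws
    freq = samples.mean(axis=0)            # empirical firing frequency per class
    pred = int(freq.argmax())              # robustified decision
    eps = 1e-12                            # avoid log(0)
    entropy = -np.mean(freq * np.log(freq + eps)
                       + (1 - freq) * np.log(1 - freq + eps))
    return pred, entropy

def multi_sample_log_loss(logp_samples):
    """K-sample estimate of the log-loss, -log((1/K) * sum_k exp(logp_k)),
    where logp_k = log p(target | k-th sampled hidden spikes); computed stably."""
    logp = np.asarray(logp_samples)
    m = logp.max()
    return -(m + np.log(np.exp(logp - m).mean()))

# Toy usage with random weights (illustrative only)
W, b = rng.standard_normal((5, 20)), np.zeros(5)
pred, unc = multi_sample_inference(rng.standard_normal(20), W, b, K=50)
loss = multi_sample_log_loss([-0.3, -0.5, -0.4])
```

As K grows, the log-mean-exp estimate tightens toward the true marginal log-likelihood, which mirrors the abstract's claim that more samples yield more accurate statistical estimates of the log-loss criterion and of its gradient.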
Related papers
- Probabilistic Neural Networks (PNNs) for Modeling Aleatoric Uncertainty in Scientific Machine Learning [2.348041867134616]
This paper investigates the use of probabilistic neural networks (PNNs) to model aleatoric uncertainty.
PNNs generate probability distributions for the target variable, allowing the determination of both predicted means and intervals in regression scenarios.
In a real-world scientific machine learning context, PNNs yield remarkably accurate output mean estimates with R-squared scores approaching 0.97, and their predicted intervals exhibit a high correlation coefficient of nearly 0.80.
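As a concrete illustration of the mechanism summarized above, here is a minimal sketch of the common heteroscedastic-Gaussian formulation of aleatoric uncertainty (not necessarily the paper's exact model): the network predicts a mean and log-variance per input, is trained with the Gaussian negative log-likelihood, and prediction intervals follow from the predicted variance. All names are illustrative.

```python
import numpy as np

def gaussian_nll(y, mean, log_var):
    """Negative log-likelihood of y under N(mean, exp(log_var)); minimizing this
    trains the mean and variance heads of a PNN jointly."""
    return 0.5 * (np.log(2 * np.pi) + log_var + (y - mean) ** 2 / np.exp(log_var))

# A trained PNN emits (mean, log_var) per input; a 95% prediction interval
# follows directly from the predicted standard deviation:
mean, log_var, y = 1.2, -0.5, 1.0
sigma = np.exp(0.5 * log_var)
interval = (mean - 1.96 * sigma, mean + 1.96 * sigma)
loss = gaussian_nll(y, mean, log_var)
```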
arXiv Detail & Related papers (2024-02-21T17:15:47Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency when performing inference with deep learning workloads.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
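For intuition about the two ingredients named above, here is a minimal sketch of an implicit layer, a fixed point z = tanh(Wz + Ux + b) solved by iteration, together with a naive interval-arithmetic pass that bounds its output over an input box. This is illustrative only, not the paper's reachability method, and all names are assumptions.

```python
import numpy as np

def implicit_layer(W, U, b, x, iters=50):
    """Solve the fixed point z = tanh(W z + U x + b) by iteration (an INN 'layer')."""
    z = np.zeros(W.shape[0])
    for _ in range(iters):
        z = np.tanh(W @ z + U @ x + b)
    return z

def interval_matvec(W, lo, hi):
    """Sound elementwise bounds on W @ z when z lies in the box [lo, hi]."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi, Wp @ hi + Wn @ lo

def interval_fixed_point(W, U, b, x_lo, x_hi, iters=50):
    """Naive interval iteration: any fixed point for inputs in [x_lo, x_hi]
    stays inside the returned box (tanh keeps activations in [-1, 1])."""
    lo, hi = -np.ones(W.shape[0]), np.ones(W.shape[0])
    u_lo, u_hi = interval_matvec(U, x_lo, x_hi)
    for _ in range(iters):
        zl, zh = interval_matvec(W, lo, hi)
        lo, hi = np.tanh(zl + u_lo + b), np.tanh(zh + u_hi + b)
    return lo, hi
```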
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Spiking Generative Adversarial Networks With a Neural Network Discriminator: Local Training, Bayesian Models, and Continual Meta-Learning [31.78005607111787]
Training neural networks to reproduce spiking patterns is a central problem in neuromorphic computing.
This work proposes to train SNNs so as to match distributions of spiking signals rather than individual spiking signals.
arXiv Detail & Related papers (2021-11-02T17:20:54Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FF-NSL)
FF-NSL integrates state-of-the-art ILP systems based on the Answer Set semantics with neural networks in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
- Multi-Sample Online Learning for Spiking Neural Networks based on Generalized Expectation Maximization [42.125394498649015]
Spiking Neural Networks (SNNs) capture some of the efficiency of biological brains by processing information through dynamic, binary neural activations.
This paper proposes to leverage multiple compartments that sample independent spiking signals while sharing synaptic weights.
The key idea is to use these signals to obtain more accurate statistical estimates of the log-likelihood training criterion, as well as of its gradient.
arXiv Detail & Related papers (2021-02-05T16:39:42Z)
- Spiking Neural Networks -- Part II: Detecting Spatio-Temporal Patterns [38.518936229794214]
Spiking Neural Networks (SNNs) have the unique ability to detect information encoded in spatio-temporal signals.
We review models and training algorithms for the dominant approach that considers SNNs as Recurrent Neural Networks (RNNs).
We describe an alternative approach that relies on probabilistic models for spiking neurons, allowing the derivation of local learning rules via gradient estimates.
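To illustrate what such a local rule can look like, the following is a minimal sketch of a score-function (REINFORCE-style) three-factor update for a single Bernoulli GLM spiking neuron; the paper's exact rule may differ, and all names are illustrative. The synaptic update is the product of a local eligibility term (post-synaptic prediction error times pre-synaptic activity) and a global scalar learning signal.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def three_factor_update(w, pre, post_spike, global_signal, lr=0.01):
    """One local update for a Bernoulli GLM neuron with firing probability
    p = sigmoid(w @ pre). The score-function gradient of log p(post_spike)
    w.r.t. w is (post_spike - p) * pre, a purely local eligibility trace;
    the global learning signal modulates it (the 'third factor')."""
    p = sigmoid(w @ pre)
    eligibility = (post_spike - p) * pre
    return w + lr * global_signal * eligibility

# Toy usage: pre-synaptic spikes, observed post-synaptic spike, reward-like signal
w = np.zeros(4)
w = three_factor_update(w, pre=np.array([1.0, 0.0, 1.0, 0.0]),
                        post_spike=1.0, global_signal=0.5)
```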
arXiv Detail & Related papers (2020-10-27T11:47:42Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- VOWEL: A Local Online Learning Rule for Recurrent Networks of Probabilistic Spiking Winner-Take-All Circuits [38.518936229794214]
WTA-SNNs can detect information encoded in spatio-temporal multi-valued events.
Existing schemes for training WTA-SNNs are limited to rate-encoding solutions.
We develop a variational online local training rule for WTA-SNNs, referred to as VOWEL.
arXiv Detail & Related papers (2020-04-20T16:21:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.