Loss shaping enhances exact gradient learning with EventProp in Spiking Neural Networks
- URL: http://arxiv.org/abs/2212.01232v2
- Date: Sun, 2 Jun 2024 16:10:45 GMT
- Title: Loss shaping enhances exact gradient learning with EventProp in Spiking Neural Networks
- Authors: Thomas Nowotny, James P. Turner, James C. Knight
- Abstract summary: EventProp is an algorithm for gradient descent on exact gradients in spiking neural networks.
We implement EventProp in the GPU-enhanced Neuronal Networks (GeNN) framework.
Networks achieve state-of-the-art performance on Spiking Heidelberg Digits and good accuracy on Spiking Speech Commands.
- Score: 0.1350479308585481
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Event-based machine learning promises more energy-efficient AI on future neuromorphic hardware. Here, we investigate how the recently discovered EventProp algorithm for gradient descent on exact gradients in spiking neural networks can be scaled up to challenging keyword recognition benchmarks. We implemented EventProp in the GPU-enhanced Neuronal Networks (GeNN) framework and used it for training recurrent spiking neural networks on the Spiking Heidelberg Digits and Spiking Speech Commands datasets. We found that learning depended strongly on the loss function and extended EventProp to a wider class of loss functions to enable effective training. When combined with the right additional mechanisms from the machine learning toolbox, EventProp networks achieved state-of-the-art performance on Spiking Heidelberg Digits and good accuracy on Spiking Speech Commands. This work is a significant step towards a low-power neuromorphic alternative to current machine learning paradigms.
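The loss-shaping idea is easy to illustrate. The sketch below is not the paper's actual loss family; it shows one plausible member of the wider class the abstract alludes to: a cross-entropy on exponentially time-weighted output-neuron voltages (names such as `V_out` and `tau_shape` are illustrative):

```python
import numpy as np

def shaped_cross_entropy(V_out, label, dt=1e-3, tau_shape=0.1):
    """Illustrative shaped loss: cross-entropy on time-weighted voltages.

    V_out : (T, n_classes) membrane voltages of the output neurons over time.
    The exponential weight is one example of 'shaping' the loss; the paper
    explores a broader class of such voltage-dependent losses.
    """
    T = V_out.shape[0]
    w = np.exp(-np.arange(T) * dt / tau_shape)    # shaping weights over time
    s = (w[:, None] * V_out).sum(axis=0) * dt     # weighted voltage integral per class
    s -= s.max()                                  # stabilize the softmax
    log_p = s - np.log(np.exp(s).sum())           # log-softmax over classes
    return -log_p[label]

# Example: 100 time steps, 3 output classes, correct class 1
rng = np.random.default_rng(0)
print(shaped_cross_entropy(rng.normal(size=(100, 3)), label=1))
```

Changing the weighting function reshapes which parts of a trial dominate the gradient, which is the knob that loss shaping turns.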
Related papers
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
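As a rough illustration of inducing points living in a learned feature space, the following sketch uses a fixed random projection as a stand-in for the learned feature map; in IGN both the map and the points `Z` would be optimized jointly (all names here are illustrative, not the authors' code):

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """RBF kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))              # stand-in for a learned feature map
phi = lambda X: np.tanh(X @ W)           # phi: inputs -> feature space

X, y = rng.normal(size=(50, 5)), rng.normal(size=50)
Z = rng.normal(size=(8, 3))              # inducing points defined in feature space
                                         # (in IGN these are learned jointly with phi)
F = phi(X)
Kzz = rbf(Z, Z) + 1e-6 * np.eye(len(Z))
Kxz = rbf(F, Z)
# subset-of-regressors predictive mean through the inducing points
alpha = np.linalg.solve(Kxz.T @ Kxz + 1e-2 * Kzz, Kxz.T @ y)
y_hat = Kxz @ alpha
print(y_hat[:5])
```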
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- An error-propagation spiking neural network compatible with neuromorphic processors [2.432141667343098]
We present a spike-based learning method that approximates back-propagation using local weight update mechanisms.
We introduce a network architecture that enables synaptic weight update mechanisms to back-propagate error signals.
This work represents a first step towards the design of ultra-low power mixed-signal neuromorphic processing systems.
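The paper's rule is tied to a specific mixed-signal architecture, but its shared ingredient, a local delta-like update driven by an error signal available at the neuron, can be sketched generically (illustrative only, not the paper's exact rule):

```python
import numpy as np

def local_update(w, pre_trace, post_target, post_rate, lr=1e-3):
    """Generic local learning step for one layer of spiking neurons.

    pre_trace   : low-pass-filtered presynaptic spike trains (n_pre,)
    post_target : desired postsynaptic activity (n_post,)
    post_rate   : actual postsynaptic activity (n_post,)
    The error (target - rate) is available locally at each neuron,
    so the update needs no global backward pass.
    """
    err = post_target - post_rate               # local error signal
    return w + lr * np.outer(err, pre_trace)    # delta-rule-style update

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4, 10))
w = local_update(w, pre_trace=rng.random(10),
                 post_target=np.ones(4), post_rate=rng.random(4))
```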
arXiv Detail & Related papers (2021-04-12T07:21:08Z)
- Analytically Tractable Inference in Deep Neural Networks [0.0]
The Tractable Approximate Gaussian Inference (TAGI) algorithm was shown to be a viable and scalable alternative to backpropagation for shallow fully-connected neural networks.
We demonstrate that TAGI matches or exceeds the performance of backpropagation for training classic deep neural network architectures.
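TAGI itself propagates Gaussian moments analytically through the network layers. As loose intuition for why training without gradient descent is possible at all, here is the textbook conjugate Gaussian update for a single linear unit, which is plain Bayesian linear regression rather than TAGI:

```python
import numpy as np

def gaussian_posterior(X, y, prior_var=1.0, noise_var=0.1):
    """Closed-form posterior over weights of y = X @ w + noise.

    With a Gaussian prior and Gaussian likelihood, no gradient descent is
    needed: the posterior mean and covariance follow analytically.
    """
    d = X.shape[1]
    S_inv = np.eye(d) / prior_var + X.T @ X / noise_var
    S = np.linalg.inv(S_inv)          # posterior covariance
    m = S @ X.T @ y / noise_var       # posterior mean
    return m, S

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
m, S = gaussian_posterior(X, y)
print(m)  # close to the true weights
```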
arXiv Detail & Related papers (2021-03-09T14:51:34Z)
- Training Convolutional Neural Networks With Hebbian Principal Component Analysis [10.026753669198108]
Hebbian learning can be used for training the lower or the higher layers of a neural network.
We use a nonlinear Hebbian Principal Component Analysis (HPCA) learning rule in place of the Hebbian Winner-Takes-All (HWTA) strategy.
In particular, the HPCA rule is used to train Convolutional Neural Networks in order to extract relevant features from the CIFAR-10 image dataset.
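The classical linear form of such a rule is Sanger's generalized Hebbian algorithm; the paper's HPCA rule is a nonlinear variant, so treat the sketch below as the underlying principle rather than the exact rule used:

```python
import numpy as np

def gha_step(W, x, lr=1e-2):
    """One step of Sanger's generalized Hebbian algorithm (linear Hebbian PCA).

    W : (k, d) weight rows converge to the top-k principal components.
    The lower-triangular term decorrelates the outputs so each row learns a
    different component, without any backpropagated error.
    """
    y = W @ x
    return W + lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

rng = np.random.default_rng(0)
C = np.diag([5.0, 2.0, 0.5])                      # data covariance
X = rng.multivariate_normal(np.zeros(3), C, size=5000)
W = rng.normal(scale=0.1, size=(2, 3))
for x in X:
    W = gha_step(W, x)
print(W / np.linalg.norm(W, axis=1, keepdims=True))  # ~ top-2 eigenvectors
```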
arXiv Detail & Related papers (2020-12-22T18:17:46Z)
- Event-Based Backpropagation can compute Exact Gradients for Spiking Neural Networks [0.0]
Spiking neural networks combine analog computation with event-based communication using discrete spikes.
For the first time, this work derives the backpropagation algorithm for a continuous-time spiking neural network and a general loss function.
We use gradients computed via EventProp to train networks on the Yin-Yang and MNIST datasets using either a spike-time or voltage-based loss function and report competitive performance.
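A spike-time loss of the kind mentioned can be written down independently of the gradient machinery, e.g. a cross-entropy over negated first-spike times so the correct class is pushed to fire earliest (a sketch; variable names are illustrative):

```python
import numpy as np

def first_spike_loss(t_first, label, tau=1.0):
    """Cross-entropy over negated first-spike times (earliest spike wins).

    t_first : (n_classes,) first spike time of each output neuron.
    The loss is differentiable in the spike times, which is exactly what
    EventProp's exact gradients provide.
    """
    s = -t_first / tau
    s -= s.max()
    log_p = s - np.log(np.exp(s).sum())
    return -log_p[label]

print(first_spike_loss(np.array([0.03, 0.01, 0.05]), label=1))  # small: class 1 fires first
```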
arXiv Detail & Related papers (2020-09-17T15:45:00Z)
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
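As usually presented, a rectified-linear PSP kernel makes the membrane potential V(t) = sum_i w_i * (t - t_i) for t > t_i, so between input spikes V is linear in t and the output spike time has a closed form. A sketch under that assumption (not necessarily the paper's exact formulation):

```python
import numpy as np

def rel_psp_spike_time(t_in, w, theta=1.0):
    """First output spike time for a neuron with rectified-linear PSPs.

    Walks the input spikes in time order; within each interval V is linear,
    so the threshold crossing, if any, is solved in closed form.
    Returns np.inf if the threshold is never reached.
    """
    order = np.argsort(t_in)
    ws = wt = 0.0
    for k, i in enumerate(order):
        ws += w[i]                # running sum of active weights
        wt += w[i] * t_in[i]      # running sum of w_i * t_i
        if ws > 0:
            t_star = (theta + wt) / ws    # candidate crossing time
            t_next = t_in[order[k + 1]] if k + 1 < len(order) else np.inf
            if t_in[i] <= t_star <= t_next:
                return t_star
    return np.inf

print(rel_psp_spike_time(np.array([0.0, 0.1, 0.2]), np.array([2.0, 3.0, -1.0])))
```

Because the crossing time is an explicit function of the weights and input times, gradients for backpropagation follow directly.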
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
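The mechanism is concrete enough to sketch: convolve feature maps with a Gaussian low-pass kernel whose width is annealed toward zero over training, so early epochs see only coarse structure. A minimal 1-D illustration (the paper works with 2-D filters inside a CNN; parameters here are illustrative):

```python
import numpy as np

def smooth_feature_map(fmap, sigma):
    """Low-pass a 1-D feature map with a Gaussian kernel of width sigma."""
    if sigma <= 0:
        return fmap                               # curriculum finished
    r = int(3 * sigma) + 1
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    return np.convolve(fmap, k, mode="same")

fmap = np.random.default_rng(0).normal(size=64)
for epoch in range(5):
    sigma = 2.0 * (1 - epoch / 4)     # anneal smoothing toward zero
    out = smooth_feature_map(fmap, sigma)
    print(epoch, round(out.std(), 3))  # detail (variance) returns as sigma shrinks
```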
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
- A Deep Unsupervised Feature Learning Spiking Neural Network with Binarized Classification Layers for EMNIST Classification using SpykeFlow [0.0]
The unsupervised learning technique of spike-timing-dependent plasticity (STDP) with binary activations is used to extract features from spiking input data.
The accuracies obtained for the balanced EMNIST data set compare favorably with other approaches.
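The feature-extraction step can be illustrated with the standard pair-based STDP rule; this is a generic sketch, not SpykeFlow's exact implementation:

```python
import numpy as np

def stdp(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP on a single synapse (times in ms).

    Pre-before-post potentiates, post-before-pre depresses, with an
    exponentially decaying influence in the spike-time difference.
    """
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * np.exp(-dt / tau)    # causal pairing: strengthen
    else:
        w -= a_minus * np.exp(dt / tau)    # anti-causal pairing: weaken
    return np.clip(w, 0.0, 1.0)            # keep the weight bounded

print(stdp(0.5, t_pre=10.0, t_post=15.0))  # > 0.5
print(stdp(0.5, t_pre=15.0, t_post=10.0))  # < 0.5
```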
arXiv Detail & Related papers (2020-02-26T23:47:35Z) - Large-Scale Gradient-Free Deep Learning with Recursive Local
Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
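In the spirit of (though not identical to) the paper's recursive local representation alignment, the sketch below gives the hidden layer a target computed through fixed random feedback and updates each layer against a purely local mismatch:

```python
import numpy as np

rng = np.random.default_rng(0)
d = [8, 16, 4]
W = [rng.normal(scale=0.1, size=(d[i + 1], d[i])) for i in range(2)]  # forward weights
B = rng.normal(scale=0.1, size=(d[1], d[2]))                          # fixed random feedback

def step(x, y, lr=0.05):
    h1 = np.tanh(W[0] @ x)       # forward pass
    h2 = W[1] @ h1
    e2 = h2 - y                  # output error
    t1 = h1 - B @ e2             # local target for the hidden layer
    # purely local updates: each layer reduces only its own target mismatch
    W[1] -= lr * np.outer(e2, h1)
    W[0] -= lr * np.outer((h1 - t1) * (1 - h1**2), x)
    return float((e2**2).mean())

x, y = rng.normal(size=8), rng.normal(size=4)
for _ in range(200):
    loss = step(x, y)
print(round(loss, 6))   # local updates drive the output error down
```

No global backward pass is required: the updates for the two layers are independent given the locally computed targets, which is what makes such schemes easier to parallelize than backprop.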
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.