Backpropagated Neighborhood Aggregation for Accurate Training of Spiking Neural Networks
- URL: http://arxiv.org/abs/2107.06861v1
- Date: Tue, 22 Jun 2021 16:42:48 GMT
- Title: Backpropagated Neighborhood Aggregation for Accurate Training of Spiking Neural Networks
- Authors: Yukun Yang, Wenrui Zhang, Peng Li
- Abstract summary: We propose a novel BP-like method, called neighborhood aggregation (NA), which computes accurate error gradients guiding weight updates.
NA achieves this goal by aggregating finite differences of the loss over perturbed membrane potential waveforms in the neighborhood of the present membrane potential of each neuron.
Our experiments show that the proposed NA algorithm delivers state-of-the-art performance for SNN training on several datasets.
- Score: 14.630838296680025
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: While backpropagation (BP) has been applied to spiking neural networks (SNNs)
achieving encouraging results, a key challenge involved is to backpropagate a
continuous-valued loss over layers of spiking neurons exhibiting discontinuous
all-or-none firing activities. Existing methods deal with this difficulty by
introducing compromises that come with their own limitations, leading to
potential performance degradation. We propose a novel BP-like method, called
neighborhood aggregation (NA), which computes accurate error gradients guiding
weight updates that may lead to discontinuous modifications of firing
activities. NA achieves this goal by aggregating finite differences of the loss
over multiple perturbed membrane potential waveforms in the neighborhood of the
present membrane potential of each neuron while utilizing a new membrane
potential distance function. Our experiments show that the proposed NA
algorithm delivers state-of-the-art performance for SNN training on several
datasets.
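As an illustration of the mechanism the abstract describes, here is a minimal sketch: estimate a neuron's error gradient by aggregating finite differences of the loss over perturbed membrane-potential waveforms in the neighborhood of the current waveform. The thresholding spike function, the Euclidean stand-in for the paper's membrane potential distance function, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spike_train(u, theta=1.0):
    """All-or-none firing: a spike wherever the membrane potential crosses threshold."""
    return (u >= theta).astype(float)

def na_gradient(u, loss_fn, perturbations, dist_fn=None):
    """Illustrative neighborhood-aggregation gradient for one neuron's
    membrane-potential waveform u (shape: [T]).

    Aggregates finite differences of the loss over perturbed waveforms in the
    neighborhood of u. `perturbations` is a list of direction vectors; `dist_fn`
    is a stand-in for the paper's membrane potential distance function
    (Euclidean norm by default).
    """
    if dist_fn is None:
        dist_fn = lambda u1, u2: np.linalg.norm(u1 - u2)

    base_loss = loss_fn(spike_train(u))
    grad = np.zeros_like(u)
    for d in perturbations:
        u_pert = u + d
        delta = dist_fn(u_pert, u)
        if delta == 0.0:
            continue
        # Finite difference of the loss along direction d, scaled by the
        # membrane-potential distance rather than a naive step size.
        fd = (loss_fn(spike_train(u_pert)) - base_loss) / delta
        grad += fd * d / np.linalg.norm(d)
    return grad / len(perturbations)

# Toy usage: waveform of T=5 time steps, loss = squared error to a target spike train.
T = 5
u = np.array([0.2, 0.8, 1.1, 0.4, 0.95])
target = np.array([0.0, 1.0, 1.0, 0.0, 1.0])
loss_fn = lambda s: float(np.sum((s - target) ** 2))
dirs = [eps * np.eye(T)[t] for t in range(T) for eps in (+0.3, -0.3)]
print(na_gradient(u, loss_fn, dirs))  # surrogate gradient w.r.t. the waveform
```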
Related papers
- Online Pseudo-Zeroth-Order Training of Neuromorphic Spiking Neural Networks [69.2642802272367]
Brain-inspired neuromorphic computing with spiking neural networks (SNNs) is a promising energy-efficient computational approach.
Most recent methods leverage spatial and temporal backpropagation (BP), which does not adhere to neuromorphic properties.
We propose a novel method, online pseudo-zeroth-order (OPZO) training.
arXiv Detail & Related papers (2024-07-17T12:09:00Z)
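OPZO's one-pass construction is not detailed in this summary; as background, here is a minimal sketch of the standard zeroth-order estimator the name alludes to, which builds a gradient purely from loss evaluations. Function names and parameters are illustrative.

```python
import numpy as np

def zeroth_order_grad(loss_fn, theta, sigma=1e-2, n_samples=16, rng=None):
    """Estimate grad L(theta) from loss evaluations only (no backprop).

    Uses the forward-difference estimator
        g ~ E_u[ (L(theta + sigma*u) - L(theta)) / sigma * u ],  u ~ N(0, I),
    which needs one extra forward pass per sampled direction.
    """
    rng = rng or np.random.default_rng(0)
    base = loss_fn(theta)
    g = np.zeros_like(theta)
    for _ in range(n_samples):
        u = rng.standard_normal(theta.shape)
        g += (loss_fn(theta + sigma * u) - base) / sigma * u
    return g / n_samples

# Toy check against the true gradient of a quadratic.
theta = np.array([1.0, -2.0, 0.5])
loss = lambda t: float(np.sum(t ** 2))
print(zeroth_order_grad(loss, theta, n_samples=500))  # approx. 2 * theta
```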
- Scaling SNNs Trained Using Equilibrium Propagation to Convolutional Architectures [2.2146860305758485]
Equilibrium Propagation (EP) is a biologically plausible local learning algorithm initially developed for convergent recurrent neural networks (RNNs).
EP is a powerful candidate for training Spiking Neural Networks (SNNs), which are commonly trained by backpropagation through time (BPTT).
We provide a formulation for training convolutional spiking convergent RNNs using EP, bridging the gap between spiking and non-spiking convergent RNNs.
arXiv Detail & Related papers (2024-05-04T03:06:14Z)
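As background for this entry, a toy sketch of EP's two-phase contrastive update on a small convergent network, assuming a simple quadratic energy with symmetric weights and full-state nudging; this is not the paper's convolutional formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W = rng.normal(0, 0.1, (n_hid, n_hid))
W = 0.5 * (W + W.T)                      # EP assumes symmetric recurrent weights
np.fill_diagonal(W, 0.0)
V = rng.normal(0, 0.1, (n_hid, n_in))    # input weights (x is clamped)

def relax(x, y=None, beta=0.0, steps=200, lr=0.1):
    """Settle the state s by gradient descent on the (possibly nudged) energy
    E(s) = 0.5*||s||^2 - 0.5*s.T W s - s.T V x  (+ beta * 0.5*||s - y||^2)."""
    s = np.zeros(n_hid)
    for _ in range(steps):
        grad = s - W @ s - V @ x
        if beta != 0.0:
            grad = grad + beta * (s - y)  # weak clamping toward the target
        s = s - lr * grad
    return s

x = rng.normal(size=n_in)
y = rng.normal(size=n_hid)    # toy target over the state, for illustration
beta, eta = 0.1, 0.05

s_free = relax(x)                    # first phase: free equilibrium
s_nudge = relax(x, y, beta=beta)     # second phase: weakly nudged equilibrium

# Contrastive update: compare local correlations at the two equilibria.
W += eta / beta * (np.outer(s_nudge, s_nudge) - np.outer(s_free, s_free))
V += eta / beta * (np.outer(s_nudge, x) - np.outer(s_free, x))
```

The 1/beta factor turns the contrast between nudged and free correlations into a finite-difference estimate of the loss gradient, which is what keeps the update local.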
- Effective Learning with Node Perturbation in Multi-Layer Neural Networks [2.1168858935852013]
Node perturbation (NP) proposes learning by injecting noise into network activations.
NP is highly data inefficient and unstable due to its unguided noise-based search process.
We find that a closer alignment with directional derivatives together with input decorrelation at every layer strongly enhances the performance of NP learning.
arXiv Detail & Related papers (2023-10-02T08:12:51Z)
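For reference, a minimal sketch of classic node perturbation on a single layer: inject noise into the pre-activations, measure how the loss changes, and correlate that change with the noise. It omits the directional-derivative alignment and input decorrelation this paper contributes; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 4
W = rng.normal(0, 0.1, (n_out, n_in))

def forward(W, x, noise=None):
    a = W @ x                     # pre-activations ("node" values)
    if noise is not None:
        a = a + noise             # perturb the activations, not the weights
    return np.tanh(a)

def loss(y, target):
    return float(np.sum((y - target) ** 2))

x = rng.normal(size=n_in)
target = rng.normal(size=n_out)
sigma, eta = 0.05, 0.1

# Clean and noise-perturbed passes share the same input.
L_clean = loss(forward(W, x), target)
xi = sigma * rng.standard_normal(n_out)
L_noisy = loss(forward(W, x, noise=xi), target)

# NP update: the loss difference attributes credit to the injected noise,
# Delta W = -eta * ((L_noisy - L_clean) / sigma^2) * xi x^T.
W -= eta * (L_noisy - L_clean) / sigma**2 * np.outer(xi, x)
```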
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z)
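The spike-based specifics of SPIDE are not given in this summary; the sketch below shows the underlying technique, implicit differentiation at a network equilibrium, for a generic rate-based fixed point. All names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
W = rng.normal(0, 0.3, (n, n)) / np.sqrt(n)   # keep the map contractive
b = rng.normal(size=n)

f = lambda s: np.tanh(W @ s + b)

# Find the equilibrium s* = f(s*) by fixed-point iteration.
s = np.zeros(n)
for _ in range(100):
    s = f(s)

# Implicit differentiation at the equilibrium:
# s* = tanh(W s* + b)  =>  ds*/db = (I - D W)^{-1} D,
# where D = diag(1 - tanh^2(W s* + b)).
D = np.diag(1.0 - np.tanh(W @ s + b) ** 2)
ds_db = np.linalg.solve(np.eye(n) - D @ W, D)

# The gradient of a loss L(s*) w.r.t. b follows by the chain rule,
# with no backpropagation through the iteration itself.
target = rng.normal(size=n)
dL_ds = 2.0 * (s - target)        # L = ||s* - target||^2
print(ds_db.T @ dL_ds)
```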
- Multi-Level Firing with Spiking DS-ResNet: Enabling Better and Deeper Directly-Trained Spiking Neural Networks [19.490903216456758]
Spiking neural networks (SNNs) are neural networks with asynchronous, discrete, and sparse characteristics.
We propose a multi-level firing (MLF) method based on the existing spiking-suppressed residual network (spiking DS-ResNet).
arXiv Detail & Related papers (2022-10-12T16:39:46Z)
- Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to only activate and select sparse neurons for learning current and past tasks at any stage.
arXiv Detail & Related papers (2022-02-21T13:25:03Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- Temporal Spike Sequence Learning via Backpropagation for Deep Spiking Neural Networks [14.992756670960008]
Spiking neural networks (SNNs) are well suited for computation and implementations on energy-efficient event-driven neuromorphic processors.
We present a novel Temporal Spike Sequence Learning Backpropagation (TSSL-BP) method for training deep SNNs.
arXiv Detail & Related papers (2020-02-24T05:49:37Z)