Explicitly Trained Spiking Sparsity in Spiking Neural Networks with
Backpropagation
- URL: http://arxiv.org/abs/2003.01250v1
- Date: Mon, 2 Mar 2020 23:39:18 GMT
- Title: Explicitly Trained Spiking Sparsity in Spiking Neural Networks with
Backpropagation
- Authors: Jason M. Allred, Steven J. Spencer, Gopalakrishnan Srinivasan, Kaushik
Roy
- Abstract summary: Spiking Neural Networks (SNNs) are being explored for their potential energy efficiency resulting from sparse, event-driven computations.
We propose an explicit inclusion of spike counts in the loss function, along with a traditional error loss, to optimize weight parameters for both accuracy and spiking sparsity.
- Score: 7.952659059689134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) are being explored for their potential energy
efficiency resulting from sparse, event-driven computations. Many recent works
have demonstrated effective backpropagation for deep Spiking Neural Networks
(SNNs) by approximating gradients over discontinuous neuron spikes or firing
events. A beneficial side-effect of these surrogate gradient spiking
backpropagation algorithms is that the spikes, which trigger additional
computations, may now themselves be directly considered in the gradient
calculations. We propose an explicit inclusion of spike counts in the loss
function, along with a traditional error loss, causing the backpropagation
learning algorithms to optimize weight parameters for both accuracy and spiking
sparsity. As supported by existing theory of over-parameterized neural
networks, there are many solution states with effectively equivalent accuracy.
As such, appropriate weighting of the two loss goals during training in this
multi-objective optimization process can yield an improvement in spiking
sparsity without a significant loss of accuracy. We additionally explore a
simulated annealing-inspired loss weighting technique to increase the weighting
for sparsity as training time increases. Our preliminary results on the
Cifar-10 dataset show up to 70.1% reduction in spiking activity with
iso-accuracy compared to an equivalent SNN trained only for accuracy and up to
73.3% reduction in spiking activity if allowed a trade-off of 1% reduction in
classification accuracy.
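Concretely, the proposed objective combines a standard error loss with a weighted spike-count term, and the weight on the sparsity term is annealed upward as training proceeds. The sketch below is a minimal PyTorch-style illustration of that idea, not the authors' code: the linear schedule, the lambda_max value, the normalization of the spike term, and all function names are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def sparsity_weight(epoch, num_epochs, lambda_max=1e-4):
    # Simulated-annealing-inspired schedule (illustrative): the weight on the
    # spike-count term starts near zero and grows as training progresses, so
    # accuracy is established first and sparsity is pressed harder later.
    return lambda_max * epoch / max(1, num_epochs - 1)

def combined_loss(logits, targets, spike_counts, epoch, num_epochs):
    # Traditional error loss on the network output.
    error_loss = F.cross_entropy(logits, targets)
    # Spike-count penalty: average spike activity recorded during the forward
    # pass. With a surrogate-gradient SNN this term is differentiable with
    # respect to the weights, so backprop can trade it off against the error.
    spike_loss = spike_counts.sum() / spike_counts.numel()
    return error_loss + sparsity_weight(epoch, num_epochs) * spike_loss
```

Because over-parameterized networks admit many solutions of near-equivalent accuracy, a gradually increasing sparsity weight of this kind can steer training toward the sparser solutions without materially degrading the error term.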
Related papers
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z)
- CorrectNet: Robustness Enhancement of Analog In-Memory Computing for Neural Networks by Error Suppression and Compensation [4.570841222958966]
We propose a framework to enhance the robustness of neural networks under variations and noise.
We show that inference accuracy of neural networks can be recovered from as low as 1.69% under variations and noise.
arXiv Detail & Related papers (2022-11-27T19:13:33Z)
- Fast Exploration of the Impact of Precision Reduction on Spiking Neural Networks [63.614519238823206]
Spiking Neural Networks (SNNs) are a practical choice when the target hardware sits at the edge of computing.
We employ an Interval Arithmetic (IA) model to develop an exploration methodology that takes advantage of the capability of such a model to propagate the approximation error.
arXiv Detail & Related papers (2022-11-22T15:08:05Z)
- WeightMom: Learning Sparse Networks using Iterative Momentum-based pruning [0.0]
We propose a weight-based pruning approach in which weights are pruned gradually based on their momentum over previous iterations.
We evaluate our approach on networks such as AlexNet, VGG16 and ResNet50 with image classification datasets such as CIFAR-10 and CIFAR-100.
arXiv Detail & Related papers (2022-08-11T07:13:59Z)
- Efficient Training of Spiking Neural Networks with Temporally-Truncated Local Backpropagation through Time [1.926678651590519]
Training spiking neural networks (SNNs) has remained challenging due to complex neural dynamics and intrinsic non-differentiability in firing functions.
This work proposes an efficient and direct training algorithm for SNNs that integrates a locally-supervised training method with a temporally-truncated BPTT algorithm.
arXiv Detail & Related papers (2021-12-13T07:44:58Z)
- Analytically Tractable Inference in Deep Neural Networks [0.0]
The Tractable Approximate Gaussian Inference (TAGI) algorithm was shown to be a viable and scalable alternative to backpropagation for shallow fully-connected neural networks.
We demonstrate that TAGI matches or exceeds the performance of backpropagation for training classic deep neural network architectures.
arXiv Detail & Related papers (2021-03-09T14:51:34Z)
- A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a "slow start, fast decay" learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
arXiv Detail & Related papers (2020-12-25T20:50:15Z)
- Event-Based Backpropagation can compute Exact Gradients for Spiking Neural Networks [0.0]
Spiking neural networks combine analog computation with event-based communication using discrete spikes.
For the first time, this work derives the backpropagation algorithm for a continuous-time spiking neural network and a general loss function.
We use gradients computed via EventProp to train networks on the Yin-Yang and MNIST datasets using either a spike-time or a voltage-based loss function and report competitive performance.
arXiv Detail & Related papers (2020-09-17T15:45:00Z)
- Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias [65.13042449121411]
In practice, training a network with the gradient estimates provided by EP does not scale to visual tasks harder than MNIST.
We show that a bias in the gradient estimate of EP, inherent in the use of finite nudging, is responsible for this phenomenon.
We apply these techniques to train an architecture with asymmetric forward and backward connections, yielding a 13.2% test error.
arXiv Detail & Related papers (2020-06-06T09:36:07Z)
- The Break-Even Point on Optimization Trajectories of Deep Neural Networks [64.7563588124004]
We argue for the existence of the "break-even" point on this trajectory.
We show that using a large learning rate in the initial phase of training reduces the variance of the gradient.
We also show that using a low learning rate results in bad conditioning of the loss surface even for a neural network with batch normalization layers.
arXiv Detail & Related papers (2020-02-21T22:55:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.