Optimizing the Consumption of Spiking Neural Networks with Activity
Regularization
- URL: http://arxiv.org/abs/2204.01460v1
- Date: Mon, 4 Apr 2022 13:19:47 GMT
- Title: Optimizing the Consumption of Spiking Neural Networks with Activity
Regularization
- Authors: Simon Narduzzi, Siavash A. Bigdeli, Shih-Chii Liu, L. Andrea Dunbar
- Abstract summary: Spiking Neural Networks (SNNs) are a bio-inspired technique that can further save energy by using binary activations and by consuming no energy when not spiking.
In this work, we look into different techniques to enforce sparsity on the neural network activation maps and compare the effect of different training regularizers on the efficiency of the optimized DNNs and SNNs.
- Score: 15.317534913990633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reducing energy consumption is critical for neural network models
running on edge devices. In this regard, reducing the number of
multiply-accumulate (MAC) operations performed by Deep Neural Networks (DNNs) on
edge hardware accelerators lowers the energy consumed during inference.
Spiking Neural Networks (SNNs) are a bio-inspired technique that can further
save energy by using binary activations and by consuming no energy when not
spiking. SNNs can be configured for equivalent accuracy on a task through
DNN-to-SNN conversion frameworks, but because the conversion is based on rate
coding, the number of synaptic operations can be high. In this work, we look
into different techniques to enforce sparsity on the neural network activation
maps and compare the effect of different training regularizers on the
efficiency of the optimized DNNs and SNNs.
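Concretely, a common activity regularizer of the kind the abstract describes adds an L1 penalty on a layer's activation maps to the task loss, pushing activations (and, after rate-coded conversion, spike counts) toward zero. The following is a minimal PyTorch sketch of this idea; the model, the penalty weight `lambda_act`, and all names are illustrative assumptions, not the authors' exact training setup.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Toy two-layer network that also returns its hidden activations."""
    def __init__(self, in_dim=784, hidden=128, out_dim=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x):
        h = torch.relu(self.fc1(x))  # activation map we want to sparsify
        return self.fc2(h), h

model = MLP()
criterion = nn.CrossEntropyLoss()
lambda_act = 1e-4  # illustrative regularization strength

def loss_fn(x, y):
    logits, h = model(x)
    task_loss = criterion(logits, y)
    activity_loss = h.abs().mean()  # L1 penalty on activations
    return task_loss + lambda_act * activity_loss
```

Raising `lambda_act` trades accuracy for sparser activations; after DNN-to-SNN conversion, sparser activations translate into fewer synaptic operations.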
Related papers
- Exploiting Heterogeneity in Timescales for Sparse Recurrent Spiking Neural Networks for Energy-Efficient Edge Computing [16.60622265961373]
Spiking Neural Networks (SNNs) represent the forefront of neuromorphic computing.
This paper brings together three studies that exploit heterogeneity in neuronal timescales to improve the performance of sparse recurrent SNNs.
arXiv Detail & Related papers (2024-07-08T23:33:12Z)
- On Reducing Activity with Distillation and Regularization for Energy Efficient Spiking Neural Networks [0.19999259391104385]
Interest in spiking neural networks (SNNs) has been growing steadily, promising an energy-efficient alternative to formal neural networks (FNNs).
We propose to leverage Knowledge Distillation (KD) for SNN training with surrogate gradient descent in order to optimize the trade-off between performance and spiking activity.
arXiv Detail & Related papers (2024-06-26T13:51:57Z)
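The distillation-plus-regularization objective in the entry above can be sketched as one combined loss: the student SNN matches the teacher's soft targets while an extra term penalizes spiking activity. A hedged Python sketch follows; the temperature `T`, weights `alpha` and `beta`, and the assumption that the student exposes its spike tensor are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def kd_activity_loss(student_logits, teacher_logits, spikes, labels,
                     T=4.0, alpha=0.5, beta=1e-4):
    """Task loss + distillation loss + spiking-activity penalty.
    T, alpha, beta are illustrative hyperparameters."""
    task = F.cross_entropy(student_logits, labels)
    distill = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                       # standard temperature scaling
    activity = spikes.float().mean()  # average spike rate of the student
    return (1 - alpha) * task + alpha * distill + beta * activity
```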
- Energy-efficient Spiking Neural Network Equalization for IM/DD Systems with Optimized Neural Encoding [53.909333359654276]
We propose an energy-efficient equalizer for IM/DD systems based on spiking neural networks.
We optimize a neural spike encoding that boosts the equalizer's performance while decreasing energy consumption.
arXiv Detail & Related papers (2023-12-20T10:45:24Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
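For readers unfamiliar with snnTorch, the snippet below shows the package's basic simulation pattern: a leaky integrate-and-fire (LIF) layer driven over discrete time steps, with membrane state carried between steps. This is a minimal sketch based on snnTorch's documented `Leaky` API; the input tensor, step count, and decay value are illustrative assumptions.

```python
import torch
import snntorch as snn

num_steps = 25
lif = snn.Leaky(beta=0.9)  # LIF neuron with membrane decay beta
mem = lif.init_leaky()     # initial membrane potential

cur_in = torch.rand(num_steps, 1, 10)  # illustrative input current over time
spk_rec = []
for step in range(num_steps):
    spk, mem = lif(cur_in[step], mem)  # one simulation step
    spk_rec.append(spk)

spikes = torch.stack(spk_rec)  # [num_steps, batch, features] binary spikes
```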
- A Faster Approach to Spiking Deep Convolutional Neural Networks [0.0]
Spiking neural networks (SNNs) have closer dynamics to the brain than current deep neural networks.
We propose a network structure based on previous work to improve network runtime and accuracy.
arXiv Detail & Related papers (2022-10-31T16:13:15Z)
- Energy-Efficient Deployment of Machine Learning Workloads on Neuromorphic Hardware [0.11744028458220425]
Several edge deep learning hardware accelerators have been released that specifically focus on reducing the power and area consumed by deep neural networks (DNNs).
Spiking neural networks (SNNs), which operate on discrete time-series data, have been shown to achieve substantial power reductions when deployed on specialized neuromorphic event-based/asynchronous hardware.
In this work, we provide a general guide to converting pre-trained DNNs into SNNs while also presenting techniques to improve the deployment of converted SNNs on neuromorphic hardware.
arXiv Detail & Related papers (2022-10-10T20:27:19Z)
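Rate-coded DNN-to-SNN conversion, mentioned both in the abstract and in the deployment guide above, typically copies the trained weights and rescales each layer's firing threshold by the maximum activation observed on calibration data, so that firing rates approximate the original ReLU outputs. The sketch below shows only this data-based threshold-normalization step; the hook-based structure and names are illustrative assumptions, not any specific framework's API.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def layer_thresholds(model, calib_loader):
    """Estimate a per-layer firing threshold as the max ReLU activation
    seen on calibration data (data-based normalization)."""
    thresholds = {}
    def make_hook(name):
        def hook(_mod, _inp, out):
            peak = out.max().item()
            thresholds[name] = max(thresholds.get(name, 0.0), peak)
        return hook
    handles = [m.register_forward_hook(make_hook(n))
               for n, m in model.named_modules() if isinstance(m, nn.ReLU)]
    for x, _ in calib_loader:
        model(x)
    for h in handles:
        h.remove()
    return thresholds  # used to set per-layer IF-neuron thresholds
```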
- Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from converting trained deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z)
- ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training [68.63354877166756]
ActNN is a memory-efficient training framework that stores randomly quantized activations for backpropagation.
ActNN reduces the activation memory footprint by 12x and enables training with a 6.6x to 14x larger batch size.
arXiv Detail & Related papers (2021-04-29T05:50:54Z)
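The core trick of activation-compressed training can be illustrated with a custom autograd function that saves a 2-bit quantized copy of the activation for the backward pass and dequantizes it to compute gradients. This is a simplified per-tensor sketch of the idea, not ActNN's actual implementation, which uses randomized per-group quantization and bit-packs four 2-bit values per byte to realize the savings.

```python
import torch

class Compress2Bit(torch.autograd.Function):
    """ReLU whose saved activation is quantized to 2 bits (4 levels)."""
    @staticmethod
    def forward(ctx, x):
        y = x.clamp(min=0)                         # ReLU
        scale = max(y.max().item(), 1e-8) / 3.0    # map range onto levels 0..3
        q = torch.round(y / scale).to(torch.uint8) # 2-bit codes (stored in uint8 here)
        ctx.save_for_backward(q)
        ctx.scale = scale
        return y

    @staticmethod
    def backward(ctx, grad_out):
        (q,) = ctx.saved_tensors
        y = q.float() * ctx.scale              # approximate dequantized activation
        return grad_out * (y > 0).float()      # ReLU gradient mask (approximate)
```

Gradients flowing through elements that quantize to zero are dropped, which is the approximation such schemes accept in exchange for the smaller footprint.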
- ShiftAddNet: A Hardware-Inspired Deep Network [87.18216601210763]
ShiftAddNet is an energy-efficient multiplication-less deep neural network.
It leads to both energy-efficient inference and training, without compromising expressive capacity.
ShiftAddNet cuts the hardware-quantified energy cost of DNN training and inference by over 80%, while offering comparable or better accuracies.
arXiv Detail & Related papers (2020-10-24T05:09:14Z)
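ShiftAddNet's multiplication-less principle can be illustrated by constraining weights to signed powers of two, so that each multiply reduces to a bit shift plus a sign flip. Below is a toy sketch of that weight quantization; the nearest-exponent rounding shown here is a generic choice for illustration, not the paper's exact scheme.

```python
import torch

def to_power_of_two(w, min_exp=-8, max_exp=0):
    """Round each weight to the nearest signed power of two, so that
    w * x can be computed as a shift of x plus a sign flip."""
    sign = torch.sign(w)
    exp = torch.log2(w.abs().clamp(min=2.0 ** min_exp)).round()
    exp = exp.clamp(min_exp, max_exp)
    return sign * (2.0 ** exp)

w = torch.randn(4, 4) * 0.1
w_shift = to_power_of_two(w)  # multiplications become shift-and-add
```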
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks [0.9790524827475205]
We show how a novel type of adaptive spiking recurrent neural network (SRNN) is able to achieve state-of-the-art performance.
We calculate a >100x energy improvement for our SRNNs over classical RNNs on the harder tasks.
arXiv Detail & Related papers (2020-05-24T01:04:53Z)
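Several of the energy claims above, like the main paper's focus on MAC counts, come down to the same accounting: multiply-accumulates in a DNN versus spike-triggered accumulate-only synaptic operations in an SNN. The back-of-the-envelope sketch below makes the comparison explicit; the per-operation energies are illustrative 45nm-style constants often quoted in this literature, and the spike rate and network size are assumptions.

```python
# Back-of-the-envelope inference-energy comparison (illustrative numbers).
E_MAC = 4.6e-12  # J per 32-bit float multiply-accumulate (assumed constant)
E_AC = 0.9e-12   # J per 32-bit accumulate (assumed constant)

n_connections = 1_000_000  # synapses / weights in the layer stack
timesteps = 100            # SNN simulation length under rate coding
spike_rate = 0.05          # assumed average spikes per neuron per step

dnn_energy = n_connections * E_MAC
snn_energy = n_connections * timesteps * spike_rate * E_AC

print(f"DNN: {dnn_energy:.2e} J  SNN: {snn_energy:.2e} J")
# With rate coding and many timesteps, the SNN estimate can approach the
# DNN's, which is why the main paper targets sparser activity: lowering
# spike_rate directly lowers the SNN term.
```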
This list is automatically generated from the titles and abstracts of the papers in this site.