FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks
- URL: http://arxiv.org/abs/2007.08860v1
- Date: Fri, 17 Jul 2020 09:40:26 GMT
- Title: FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks
- Authors: Rachmad Vidya Wicaksana Putra, Muhammad Shafique
- Abstract summary: Spiking Neural Networks (SNNs) offer unsupervised learning capability due to the spike-timing-dependent plasticity (STDP) rule.
However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy.
We propose FSpiNN, an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference processing.
- Score: 14.916996986290902
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) are gaining interest due to their event-driven
processing, which potentially enables low-power/energy computation on hardware
platforms, while offering unsupervised learning capability through the
spike-timing-dependent plasticity (STDP) rule. However, state-of-the-art SNNs
require a large memory footprint to achieve high accuracy, which makes them
difficult to deploy on embedded systems such as battery-powered
mobile devices and IoT Edge nodes. Towards this, we propose FSpiNN, an
optimization framework for obtaining memory- and energy-efficient SNNs for
training and inference processing, with unsupervised learning capability while
maintaining accuracy. It is achieved by (1) reducing the computational
requirements of neuronal and STDP operations, (2) improving the accuracy of
STDP-based learning, (3) compressing the SNN through a fixed-point
quantization, and (4) incorporating the memory and energy requirements in the
optimization process. FSpiNN reduces the computational requirements by reducing
the number of neuronal operations, the STDP-based synaptic weight updates, and
the STDP complexity. To improve the accuracy of learning, FSpiNN employs
timestep-based synaptic weight updates, and adaptively determines the STDP
potentiation factor and the effective inhibition strength. The experimental
results show that, as compared to the state-of-the-art work, FSpiNN achieves
7.5x memory saving, and improves the energy-efficiency by 3.5x on average for
training and by 1.8x on average for inference, across MNIST and Fashion MNIST
datasets, with no accuracy loss for a network with 4900 excitatory neurons,
thereby enabling energy-efficient SNNs for edge devices/embedded systems.
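As a rough illustration of two of these mechanisms, a trace-based STDP weight update applied per timestep and fixed-point compression of the learned weights, consider the sketch below. The parameter names (`a_plus`, `a_minus`, `frac_bits`) and the exact update form are illustrative assumptions, not FSpiNN's actual implementation:

```python
import numpy as np

def stdp_update(w, pre_trace, post_trace, pre_spikes, post_spikes,
                a_plus=0.01, a_minus=0.01):
    """One timestep of a pair-based STDP update (illustrative only).

    pre_trace/post_trace are exponentially decaying spike traces;
    a_plus is the potentiation factor, which FSpiNN determines adaptively.
    """
    # Potentiate synapses whose post-neuron just fired.
    w += a_plus * np.outer(pre_trace, post_spikes)
    # Depress synapses whose pre-neuron just fired.
    w -= a_minus * np.outer(pre_spikes, post_trace)
    return np.clip(w, 0.0, 1.0)

def to_fixed_point(w, frac_bits=8):
    """Quantize weights onto a fixed-point grid with `frac_bits` fractional bits."""
    scale = 1 << frac_bits
    return np.round(w * scale) / scale
```

Fewer fractional bits shrink the memory footprint at the cost of weight resolution, which is the trade-off the framework folds into its optimization process.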
Related papers
- TopSpark: A Timestep Optimization Methodology for Energy-Efficient Spiking Neural Networks on Autonomous Mobile Agents [14.916996986290902]
Spiking Neural Networks (SNNs) offer low power/energy processing due to sparse computations and efficient online learning.
TopSpark is a novel methodology that leverages adaptive timestep reduction to enable energy-efficient SNN processing in both training and inference.
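A minimal sketch of the timestep-reduction idea, assuming a user-supplied `evaluate(T)` that runs the SNN for `T` timesteps and returns accuracy (TopSpark's actual policy is adaptive and more involved):

```python
def smallest_viable_timesteps(evaluate, candidates=(256, 128, 64, 32, 16), tol=0.01):
    """Return the smallest timestep count whose accuracy stays within
    `tol` of the largest-timestep baseline (illustrative search only)."""
    baseline = evaluate(max(candidates))
    for T in sorted(candidates):      # try the cheapest settings first
        if evaluate(T) >= baseline - tol:
            return T                  # fewer timesteps -> fewer neuron updates -> less energy
    return max(candidates)
```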
arXiv Detail & Related papers (2023-03-03T10:20:45Z)
- The Hardware Impact of Quantization and Pruning for Weights in Spiking Neural Networks [0.368986335765876]
Quantization and pruning of parameters can compress the model, reduce memory footprints, and facilitate low-latency execution.
We apply various combinations of pruning and quantization in isolation, cumulatively, and simultaneously to a state-of-the-art SNN targeting gesture recognition.
We show that this state-of-the-art model is amenable to aggressive parameter quantization, suffering no loss in accuracy down to ternary weights.
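Magnitude pruning and ternarization, the two compression knobs the paper combines, can be sketched as follows; the 0.7-mean threshold is a common ternary-weight heuristic, not necessarily the paper's exact scheme:

```python
import numpy as np

def prune(w, sparsity=0.8):
    """Magnitude pruning: zero out the smallest `sparsity` fraction of weights."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

def ternarize(w):
    """Map weights to {-d, 0, +d} using a TWN-style threshold heuristic."""
    delta = 0.7 * np.abs(w).mean()
    q = np.sign(w) * (np.abs(w) > delta)
    d = np.abs(w[q != 0]).mean() if np.any(q != 0) else 0.0
    return q * d
```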
arXiv Detail & Related papers (2023-02-08T16:25:20Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
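Based on snnTorch's documented Leaky neuron API (the IPU-specific build and batching are omitted here), the basic simulation loop looks like:

```python
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)        # leaky integrate-and-fire neuron with decay rate beta
mem = lif.init_leaky()           # initialize the membrane potential state

inputs = torch.rand(100, 1)      # 100 timesteps of toy input current
spikes = []
for step in range(inputs.shape[0]):
    spk, mem = lif(inputs[step], mem)   # one forward step through time
    spikes.append(spk)
```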
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Online Training Through Time for Spiking Neural Networks [66.7744060103562]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency.
We propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning.
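Schematically, forward-in-time learning replaces BPTT's stored spike history with a decaying presynaptic trace and a per-timestep weight update; the sketch below conveys the memory argument only and is not the paper's exact gradient:

```python
import numpy as np

def ottt_step(w, trace, pre_spk, post_err, lr=1e-3, decay=0.9):
    """One forward-in-time update: O(1) memory in the number of timesteps,
    versus BPTT's O(T) stored activations (illustrative only)."""
    trace = decay * trace + pre_spk        # eligibility-like input trace
    w -= lr * np.outer(trace, post_err)    # update immediately, no unrolling
    return w, trace
```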
arXiv Detail & Related papers (2022-10-09T07:47:56Z)
- Adaptive-SpikeNet: Event-based Optical Flow Estimation using Spiking Neural Networks with Learnable Neuronal Dynamics [6.309365332210523]
Spiking Neural Networks (SNNs) with their neuro-inspired event-driven processing can efficiently handle asynchronous data.
We propose an adaptive fully-spiking framework with learnable neuronal dynamics to alleviate the spike vanishing problem.
Our experiments show an average reduction of 13% in average endpoint error (AEE) compared to state-of-the-art ANNs.
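A simplified take on learnable neuronal dynamics is a LIF neuron whose leak (decay) factor is a trainable parameter optimized alongside the weights; this is an illustrative module, not Adaptive-SpikeNet's exact model:

```python
import torch
import torch.nn as nn

class LearnableLIF(nn.Module):
    """LIF neuron with a learnable per-channel leak (illustrative)."""
    def __init__(self, size, threshold=1.0):
        super().__init__()
        self.leak = nn.Parameter(torch.full((size,), 0.9))  # trained with the weights
        self.threshold = threshold

    def forward(self, x, mem):
        mem = torch.sigmoid(self.leak) * mem + x  # sigmoid keeps the decay in (0, 1)
        spk = (mem >= self.threshold).float()     # in practice a surrogate gradient
                                                  # replaces this hard threshold
        mem = mem - spk * self.threshold          # soft reset
        return spk, mem
```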
arXiv Detail & Related papers (2022-09-21T21:17:56Z)
- tinySNN: Towards Memory- and Energy-Efficient Spiking Neural Networks [14.916996986290902]
Spiking Neural Network (SNN) models are typically favorable as they can offer higher accuracy.
However, employing such models on resource- and energy-constrained embedded platforms is inefficient.
We present tinySNN, a framework that optimizes the memory and energy requirements of SNN processing.
arXiv Detail & Related papers (2022-06-17T09:40:40Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA and uses around 40% of the available hardware resources in total.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z)
- Low-Precision Training in Logarithmic Number System using Multiplicative Weight Update [49.948082497688404]
Training large-scale deep neural networks (DNNs) currently requires a significant amount of energy, leading to serious environmental impacts.
One promising approach to reduce the energy costs is representing DNNs with low-precision numbers.
We jointly design a low-precision training framework involving a logarithmic number system (LNS) and a multiplicative weight update training method, termed LNS-Madam.
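The appeal of pairing LNS with a multiplicative update is that a multiplicative step on w becomes an additive step on log2|w|, so training needs no hardware multipliers. A simplified illustration of that idea, not LNS-Madam's exact rule:

```python
import numpy as np

def multiplicative_update(log_w, sign_w, grad, lr=0.01):
    """w <- w * 2^(-lr * sign(w) * g_hat)  ==  log2|w| <- log2|w| - lr * sign(w) * g_hat."""
    g_hat = grad / (np.abs(grad).max() + 1e-12)  # normalized gradient
    return log_w - lr * sign_w * g_hat           # purely additive in log space

# real-valued weights are recovered as w = sign_w * 2.0 ** log_w
```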
arXiv Detail & Related papers (2021-06-26T00:32:17Z)
- SpikeDyn: A Framework for Energy-Efficient Spiking Neural Networks with Continual and Unsupervised Learning Capabilities in Dynamic Environments [14.727296040550392]
Spiking Neural Networks (SNNs) have the potential for efficient unsupervised and continual learning owing to their biological plausibility.
We propose SpikeDyn, a framework for energy-efficient SNNs with continual and unsupervised learning capabilities in dynamic environments.
arXiv Detail & Related papers (2021-02-28T08:26:23Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
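Such conversion frameworks build on the standard rate-based recipe: reuse the trained ANN weights, normalize them by each layer's maximum ANN activation, and read outputs as integrate-and-fire (IF) firing rates. The sketch below shows that baseline recipe only; the paper's layer-wise tandem fine-tuning is omitted:

```python
import numpy as np

def normalize_layer(w, b, max_act):
    """Threshold balancing: scale weights/biases by the max ANN activation."""
    return w / max_act, b / max_act

def if_layer_rate(w, b, in_rates, timesteps=100, threshold=1.0):
    """Run an IF layer on Bernoulli spike trains; output rates approximate ReLU."""
    mem = np.zeros_like(b)
    spikes = np.zeros_like(b)
    for _ in range(timesteps):
        inp = (np.random.rand(*in_rates.shape) < in_rates).astype(float)
        mem += w.T @ inp + b
        fired = mem >= threshold
        spikes += fired
        mem[fired] -= threshold
    return spikes / timesteps
```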
arXiv Detail & Related papers (2020-07-02T15:38:44Z)