TopSpark: A Timestep Optimization Methodology for Energy-Efficient
Spiking Neural Networks on Autonomous Mobile Agents
- URL: http://arxiv.org/abs/2303.01826v2
- Date: Sat, 29 Jul 2023 02:55:51 GMT
- Title: TopSpark: A Timestep Optimization Methodology for Energy-Efficient
Spiking Neural Networks on Autonomous Mobile Agents
- Authors: Rachmad Vidya Wicaksana Putra, Muhammad Shafique
- Abstract summary: Spiking Neural Networks (SNNs) offer low power/energy processing due to sparse computations and efficient online learning.
TopSpark is a novel methodology that leverages adaptive timestep reduction to enable energy-efficient SNN processing in both training and inference.
- Score: 14.916996986290902
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous mobile agents require low-power/energy-efficient machine learning
(ML) algorithms to complete their ML-based tasks while adapting to diverse
environments, as mobile agents are usually powered by batteries. These
requirements can be fulfilled by Spiking Neural Networks (SNNs) as they offer
low power/energy processing due to their sparse computations and efficient
online learning with bio-inspired learning mechanisms for adapting to different
environments. Recent works have shown that the energy consumption of SNNs can be
optimized by reducing the computation time of each neuron for processing a
sequence of spikes (timestep). However, state-of-the-art techniques rely on
intensive design searches to determine fixed timestep settings for only
inference, thereby hindering the SNNs from achieving further energy efficiency
gains in both training and inference. These techniques also restrict the SNNs
from performing efficient online learning at run time. Toward this end, we propose
TopSpark, a novel methodology that leverages adaptive timestep reduction to
enable energy-efficient SNN processing in both training and inference, while
keeping its accuracy close to the accuracy of SNNs without timestep reduction.
The ideas of TopSpark include: analyzing the impact of different timesteps on
the accuracy; identifying neuron parameters that have a significant impact on
accuracy in different timesteps; employing parameter enhancements that make
SNNs effectively perform learning and inference using less spiking activity;
and developing a strategy to trade off accuracy, latency, and energy to meet
the design requirements. The results show that TopSpark reduces SNN latency
by 3.9x and energy consumption by 3.5x (training) and 3.3x (inference)
on average, across different network sizes, learning rules, and workloads,
while maintaining the accuracy within 2% of SNNs without timestep reduction.
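To make the timestep trade-off concrete, the sketch below simulates a single leaky integrate-and-fire (LIF) layer on rate-coded inputs for several timestep counts and picks the smallest count whose output stays within a tolerance of the full-timestep baseline. This is a minimal illustration of the general idea only, not the TopSpark algorithm; the LIF parameters, the spike-count energy proxy, and the quality metric are all hypothetical assumptions.
```python
# Minimal, illustrative sketch only -- NOT the TopSpark implementation.
# It shows the general idea from the abstract: energy (approximated here by
# the total spike count) grows with the number of timesteps T, so a simple
# search can pick the smallest T whose output quality stays within a
# tolerance of the full-T baseline. All parameters, the LIF model details,
# and the quality/energy proxies below are hypothetical assumptions.
import numpy as np

rng = np.random.default_rng(0)


def lif_layer(rates, weights, timesteps, v_th=1.0, leak=0.9):
    """Simulate one LIF layer on Poisson rate-coded inputs.

    Returns (output firing rates, total spike count as an energy proxy)."""
    v = np.zeros(weights.shape[0])            # membrane potentials
    out_spikes = np.zeros(weights.shape[0])
    total_spikes = 0.0
    for _ in range(timesteps):
        in_spikes = (rng.random(rates.shape) < rates).astype(float)
        v = leak * v + weights @ in_spikes    # leaky integration of input
        fired = v >= v_th
        out_spikes += fired
        v[fired] = 0.0                        # reset fired neurons
        total_spikes += in_spikes.sum() + fired.sum()
    return out_spikes / timesteps, total_spikes


def choose_timestep(candidates, evaluate, tolerance=0.02):
    """Pick the smallest timestep count whose quality is within `tolerance`
    of the quality obtained with the largest (baseline) candidate."""
    t_max = max(candidates)
    base_quality, base_energy = evaluate(t_max)
    for t in sorted(candidates):
        quality, energy = evaluate(t)
        if base_quality - quality <= tolerance:
            return t, quality, energy
    return t_max, base_quality, base_energy


# Toy setup: random input rates and weights; "quality" is how closely the
# reduced-T output rates match the rates from a long (256-timestep) run.
rates = rng.random(64)
weights = rng.normal(0.0, 0.2, size=(10, 64))
baseline_rates, _ = lif_layer(rates, weights, timesteps=256)


def evaluate(t):
    out, spikes = lif_layer(rates, weights, timesteps=t)
    return 1.0 - np.abs(out - baseline_rates).mean(), spikes


best_t, quality, energy = choose_timestep([8, 16, 32, 64, 128, 256], evaluate)
print(f"Selected T={best_t}, quality proxy={quality:.3f}, spikes={energy:.0f}")
```
TopSpark itself goes further than this toy search: per the abstract, it also adjusts neuron parameters so that learning and inference remain effective at the reduced spiking activity, and it trades off accuracy, latency, and energy against the design requirements.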
Related papers
- SNN4Agents: A Framework for Developing Energy-Efficient Embodied Spiking Neural Networks for Autonomous Agents [6.110543738208028]
Spiking Neural Networks (SNNs) use spikes from event-based cameras or data conversion pre-processing to perform sparse computations efficiently.
We propose a novel framework called SNN4Agents that consists of a set of optimization techniques for designing energy-efficient embodied SNNs.
Our framework can maintain high accuracy (84.12%) with 68.75% memory saving, 3.58x speed-up, and 4.03x energy efficiency improvement.
arXiv Detail & Related papers (2024-04-14T19:06:00Z)
- LitE-SNN: Designing Lightweight and Efficient Spiking Neural Network through Spatial-Temporal Compressive Network Search and Joint Optimization [48.41286573672824]
Spiking Neural Networks (SNNs) mimic the information-processing mechanisms of the human brain and are highly energy-efficient.
We propose a new approach named LitE-SNN that incorporates both spatial and temporal compression into the automated network design process.
arXiv Detail & Related papers (2024-01-26T05:23:11Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Online Training Through Time for Spiking Neural Networks [66.7744060103562]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency.
We propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning.
arXiv Detail & Related papers (2022-10-09T07:47:56Z)
- lpSpikeCon: Enabling Low-Precision Spiking Neural Network Processing for Efficient Unsupervised Continual Learning on Autonomous Agents [14.916996986290902]
We propose lpSpikeCon, a novel methodology to enable low-precision SNN processing for efficient unsupervised continual learning.
Our lpSpikeCon can reduce weight memory of the SNN model by 8x (i.e., by judiciously employing 4-bit weights) for performing online training with unsupervised continual learning.
arXiv Detail & Related papers (2022-05-24T18:08:16Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around the 40% of the available hardware resources.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Low-Precision Training in Logarithmic Number System using Multiplicative Weight Update [49.948082497688404]
Training large-scale deep neural networks (DNNs) currently requires a significant amount of energy, leading to serious environmental impacts.
One promising approach to reduce the energy costs is representing DNNs with low-precision numbers.
We jointly design a low-precision training framework involving a logarithmic number system (LNS) and a multiplicative weight update training method, termed LNS-Madam.
arXiv Detail & Related papers (2021-06-26T00:32:17Z)
- SpikeDyn: A Framework for Energy-Efficient Spiking Neural Networks with Continual and Unsupervised Learning Capabilities in Dynamic Environments [14.727296040550392]
Spiking Neural Networks (SNNs) have the potential for efficient unsupervised and continual learning because of their biological plausibility.
We propose SpikeDyn, a framework for energy-efficient SNNs with continual and unsupervised learning capabilities in dynamic environments.
arXiv Detail & Related papers (2021-02-28T08:26:23Z)
- Dynamic Hard Pruning of Neural Networks at the Edge of the Internet [11.605253906375424]
The Dynamic Hard Pruning (DynHP) technique incrementally prunes the network during training.
DynHP enables a tunable size reduction of the final neural network and reduces the NN memory occupancy during training.
Freed memory is reused by a dynamic batch sizing approach to counterbalance the accuracy degradation caused by the hard pruning strategy.
arXiv Detail & Related papers (2020-11-17T10:23:28Z)
- FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks [14.916996986290902]
Spiking Neural Networks (SNNs) offer unsupervised learning capability due to the spike-timing-dependent plasticity (STDP) rule.
However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy.
We propose FSpiNN, an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference processing.
arXiv Detail & Related papers (2020-07-17T09:40:26Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.