SpikeDyn: A Framework for Energy-Efficient Spiking Neural Networks with
Continual and Unsupervised Learning Capabilities in Dynamic Environments
- URL: http://arxiv.org/abs/2103.00424v1
- Date: Sun, 28 Feb 2021 08:26:23 GMT
- Title: SpikeDyn: A Framework for Energy-Efficient Spiking Neural Networks with
Continual and Unsupervised Learning Capabilities in Dynamic Environments
- Authors: Rachmad Vidya Wicaksana Putra, Muhammad Shafique
- Abstract summary: Spiking Neural Networks (SNNs) bear the potential of efficient unsupervised and continual learning capabilities because of their biological plausibility.
We propose SpikeDyn, a framework for energy-efficient SNNs with continual and unsupervised learning capabilities in dynamic environments.
- Score: 14.727296040550392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) bear the potential of efficient unsupervised
and continual learning capabilities because of their biological plausibility,
but their complexity still poses a serious research challenge to enable their
energy-efficient design for resource-constrained scenarios (like embedded
systems, IoT-Edge, etc.). We propose SpikeDyn, a comprehensive framework for
energy-efficient SNNs with continual and unsupervised learning capabilities in
dynamic environments, covering both the training and inference phases. This is
achieved through three complementary mechanisms: 1) reduction of
neuronal operations, by replacing the inhibitory neurons with direct lateral
inhibitions; 2) a memory- and energy-constrained SNN model search algorithm
that employs analytical models to estimate the memory footprint and energy
consumption of different candidate SNN models and selects a Pareto-optimal SNN
model; and 3) a lightweight continual and unsupervised learning algorithm that
employs adaptive learning rates, adaptive membrane threshold potential, weight
decay, and reduction of spurious updates. Our experimental results show that,
for a network with 400 excitatory neurons, our SpikeDyn reduces the energy
consumption on average by 51% for training and by 37% for inference, as
compared to the state-of-the-art. Due to the improved learning algorithm,
SpikeDyn also improves accuracy over the state-of-the-art by 21% on average
when classifying the most recently learned task, and by 8% on average for
previously learned tasks.
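To make mechanisms 1) and 3) concrete, here is a minimal NumPy sketch of a LIF excitatory layer with direct lateral inhibition (no separate inhibitory population), an adaptive membrane threshold, an adaptive learning rate, weight decay, and suppression of updates for neurons that did not fire. All constants and update forms are illustrative assumptions, not SpikeDyn's actual implementation:

```python
# Illustrative sketch of SpikeDyn-style mechanisms 1) and 3); not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_exc, T = 784, 400, 100           # input size, excitatory neurons, timesteps

W = rng.random((n_exc, n_in)) * 0.3      # input -> excitatory weights
v = np.zeros(n_exc)                      # membrane potentials
theta = np.zeros(n_exc)                  # adaptive threshold offsets (homeostasis)
V_TH, V_RESET, DECAY = 1.0, 0.0, 0.95    # assumed LIF constants
INHIBIT, ETA0, WDECAY = 0.1, 0.01, 1e-4  # inhibition strength, base LR, weight decay

def step(x_spikes, eta):
    """One simulation timestep over a binary input spike vector."""
    global v
    v = DECAY * v + W @ x_spikes
    fired = v >= (V_TH + theta)
    if fired.any():
        # Direct lateral inhibition: winners suppress all other membranes
        # (replaces an explicit inhibitory-neuron population).
        v[~fired] -= INHIBIT * fired.sum()
        v[fired] = V_RESET
        theta[fired] += 0.05             # adaptive threshold: harder to fire again
        # STDP-like potentiation toward active inputs, with weight decay;
        # "spurious" updates for non-firing neurons are skipped entirely.
        W[fired] += eta * (x_spikes - W[fired]) - WDECAY * W[fired]
    theta *= 0.999                       # slow threshold relaxation
    return fired

# Adaptive learning rate: decay it as training progresses (illustrative schedule).
for t in range(T):
    x = (rng.random(n_in) < 0.05).astype(float)   # random Poisson-like input spikes
    step(x, eta=ETA0 / (1.0 + 0.01 * t))
```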
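Mechanism 2) can likewise be sketched: score candidate network sizes with simple analytical memory and energy models, filter by the budgets, and keep the non-dominated configurations. The cost models and candidates below are placeholders, not the paper's:

```python
# Sketch of a memory- and energy-constrained Pareto search over SNN sizes.
from dataclasses import dataclass

@dataclass
class Candidate:
    n_exc: int        # number of excitatory neurons
    accuracy: float   # estimated accuracy of this configuration

N_IN = 784            # input dimension (e.g. MNIST pixels)

def memory_bytes(c: Candidate, bits: int = 32) -> int:
    # Analytical footprint: one weight per input-to-excitatory synapse.
    return c.n_exc * N_IN * bits // 8

def energy_cost(c: Candidate, e_per_op: float = 1e-9) -> float:
    # Analytical energy: proportional to synaptic operations per inference.
    return c.n_exc * N_IN * e_per_op

def dominates(a: Candidate, b: Candidate) -> bool:
    return (a.accuracy >= b.accuracy and energy_cost(a) <= energy_cost(b)
            and (a.accuracy > b.accuracy or energy_cost(a) < energy_cost(b)))

def pareto_front(cands, mem_budget, energy_budget):
    feasible = [c for c in cands
                if memory_bytes(c) <= mem_budget and energy_cost(c) <= energy_budget]
    return [c for c in feasible if not any(dominates(o, c) for o in feasible)]

cands = [Candidate(100, 0.80), Candidate(200, 0.84), Candidate(400, 0.87)]
print(pareto_front(cands, mem_budget=1_500_000, energy_budget=1.0))
```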
Related papers
- Enabling Energy-Efficient Object Detection with Surrogate Gradient Descent in Spiking Neural Networks [0.40054215937601956]
Spiking Neural Networks (SNNs) are a biologically plausible neural network model with significant advantages in both event-driven processing and temporal information processing.
In this study, we introduce the Current Mean Decoding (CMD) method, which solves the regression problem to facilitate the training of deep SNNs for object detection tasks.
Based on the surrogate gradient and CMD, we propose the SNN-YOLOv3 model for object detection.
arXiv Detail & Related papers (2023-09-07T15:48:00Z)
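The paper above trains SNNs with surrogate gradient descent. A common construction (not necessarily the exact surrogate used there, and independent of the CMD decoder) keeps the non-differentiable Heaviside spike in the forward pass and substitutes a smooth derivative in the backward pass. A PyTorch sketch:

```python
# Generic surrogate-gradient spike function (SuperSpike-style fast sigmoid).
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike forward, fast-sigmoid surrogate derivative backward."""
    SCALE = 10.0  # surrogate steepness (hyperparameter)

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()            # non-differentiable spike

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # d(spike)/dv is replaced by 1 / (1 + scale*|v|)^2.
        surrogate = 1.0 / (1.0 + SpikeFn.SCALE * v.abs()) ** 2
        return grad_out * surrogate

v = torch.randn(8, requires_grad=True)   # membrane potential minus threshold
spikes = SpikeFn.apply(v)
spikes.sum().backward()                  # gradients flow through the surrogate
print(v.grad)
```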
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
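For context on heterogeneous coding, here are two standard spike-coding schemes such an architecture might combine. Both encoders are generic illustrations, not the paper's design:

```python
# Rate coding vs. time-to-first-spike (TTFS) coding for inputs in [0, 1].
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(x, T=20):
    """Rate coding: per-timestep spike probability proportional to intensity."""
    return (rng.random((T, *x.shape)) < x).astype(np.uint8)

def ttfs_encode(x, T=20):
    """TTFS coding: stronger inputs spike earlier; one spike per input."""
    t_fire = np.round((1.0 - x) * (T - 1)).astype(int)
    spikes = np.zeros((T, *x.shape), dtype=np.uint8)
    for idx, t in np.ndenumerate(t_fire):
        spikes[(t, *idx)] = 1
    return spikes

x = np.array([0.1, 0.5, 0.9])
print(rate_encode(x).sum(axis=0))     # spike counts scale with intensity
print(ttfs_encode(x).argmax(axis=0))  # first-spike times: strong input -> early
```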
- TopSpark: A Timestep Optimization Methodology for Energy-Efficient Spiking Neural Networks on Autonomous Mobile Agents [14.916996986290902]
Spiking Neural Networks (SNNs) offer low power/energy processing due to sparse computations and efficient online learning.
TopSpark is a novel methodology that leverages adaptive timestep reduction to enable energy-efficient SNN processing in both training and inference.
arXiv Detail & Related papers (2023-03-03T10:20:45Z)
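The underlying idea of timestep reduction can be sketched generically: find the smallest simulation length whose accuracy stays within a tolerance of the full-length run. TopSpark's actual policy is adaptive and more refined; this is only a coarse search with an assumed evaluation interface:

```python
# Coarse search for the smallest viable number of SNN simulation timesteps.

def smallest_viable_timesteps(evaluate, t_full=100, tol=0.01):
    """evaluate(T) -> accuracy of the SNN when simulated for T timesteps."""
    acc_full = evaluate(t_full)
    best = t_full
    for T in range(t_full - 10, 0, -10):   # illustrative stride
        if evaluate(T) >= acc_full - tol:
            best = T
        else:
            break                          # accuracy dropped too far; stop
    return best

# Toy accuracy model that saturates with more timesteps (stand-in for a real run).
toy = lambda T: 0.90 * (1 - 2.0 ** (-T / 10))
print(smallest_viable_timesteps(toy))      # -> 70 for this toy curve
```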
- SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z)
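The equilibrium-state idea behind SPIDE comes from implicit differentiation, as in deep equilibrium models: iterate the forward pass to a fixed point, then obtain gradients from the implicit function theorem instead of backpropagating through the iterations. A dense, non-spiking NumPy sketch of that general mechanism:

```python
# Implicit differentiation at a fixed point z* = f(z*, x) (non-spiking version).
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3
W = rng.normal(size=(n, n)) * 0.3     # kept small so the map is contractive
U = rng.normal(size=(n, m))
x = rng.normal(size=m)

f = lambda z: np.tanh(W @ z + U @ x)

# Forward: iterate to the equilibrium.
z = np.zeros(n)
for _ in range(200):
    z = f(z)

# Backward: with J = df/dz at z*, the implicit function theorem gives
#   dL/dx = (df/dx)^T (I - J)^{-T} dL/dz  -- no backprop through the loop.
dL_dz = np.ones(n)                    # stand-in upstream gradient
s = 1 - z ** 2                        # tanh' at the equilibrium (since z = tanh(...))
J = s[:, None] * W                    # df/dz
dfdx = s[:, None] * U                 # df/dx
adjoint = np.linalg.solve((np.eye(n) - J).T, dL_dz)
print(dfdx.T @ adjoint)               # dL/dx
```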
- lpSpikeCon: Enabling Low-Precision Spiking Neural Network Processing for Efficient Unsupervised Continual Learning on Autonomous Agents [14.916996986290902]
We propose lpSpikeCon, a novel methodology to enable low-precision SNN processing for efficient unsupervised continual learning.
Our lpSpikeCon can reduce the weight memory of the SNN model by 8x (i.e., by judiciously employing 4-bit weights) for online training with unsupervised continual learning.
arXiv Detail & Related papers (2022-05-24T18:08:16Z)
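A plausible reading of the 8x weight-memory reduction is moving from 32-bit floats to 4-bit integers. The symmetric uniform quantizer below is a generic sketch, not necessarily lpSpikeCon's exact scheme:

```python
# Generic symmetric uniform quantization to 4-bit integer weight codes.
import numpy as np

def quantize_uniform(w, bits=4):
    """Returns integer codes and the scale for dequantization."""
    qmax = 2 ** (bits - 1) - 1                          # 7 for 4-bit signed
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(400, 784)).astype(np.float32)
q, s = quantize_uniform(w)
print("max abs error:", np.abs(w - dequantize(q, s)).max())
print("memory ratio: 32-bit -> 4-bit =", 32 // 4, "x smaller")
```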
- Deep Reinforcement Learning with Spiking Q-learning [51.386945803485084]
Spiking neural networks (SNNs) are expected to realize artificial intelligence (AI) with less energy consumption.
Combining SNNs with deep reinforcement learning (RL) provides a promising, energy-efficient way to tackle realistic control tasks.
arXiv Detail & Related papers (2022-01-21T16:42:11Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
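One simple way to quantify hidden-neuron diversity is the mean pairwise cosine similarity of incoming weight vectors. This metric is an illustrative assumption of ours; the paper's exact measure may differ:

```python
# Hypothetical diversity metric over a hidden layer's weight rows.
import numpy as np

def neuron_diversity(W):
    """W: (n_hidden, n_in) weights. Lower mean similarity = higher diversity."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    sim = Wn @ Wn.T                           # pairwise cosine similarities
    n = len(W)
    mean_offdiag = (sim.sum() - n) / (n * (n - 1))
    return 1.0 - mean_offdiag                 # higher is more diverse

rng = np.random.default_rng(0)
print(neuron_diversity(rng.normal(size=(64, 100))))  # random: fairly diverse
print(neuron_diversity(np.ones((64, 100))))          # identical neurons: 0.0
```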
- Dynamic Hard Pruning of Neural Networks at the Edge of the Internet [11.605253906375424]
The Dynamic Hard Pruning (DynHP) technique incrementally prunes the network during training.
DynHP enables a tunable size reduction of the final neural network and reduces the NN memory occupancy during training.
Freed memory is reused by a dynamic batch sizing approach to counterbalance the accuracy degradation caused by the hard pruning strategy.
arXiv Detail & Related papers (2020-11-17T10:23:28Z)
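A generic sketch of the two ideas: permanently zero the smallest-magnitude weights on a schedule during training ("hard" pruning, since they never return), and grow the batch size as the mask frees memory. The schedules and fractions are illustrative, not DynHP's:

```python
# Incremental hard pruning with a memory-driven batch-size schedule.
import numpy as np

def prune_smallest(w, mask, frac):
    """Permanently zero the smallest-magnitude fraction of still-alive weights."""
    alive = np.flatnonzero(mask)
    k = int(frac * alive.size)
    if k > 0:
        drop = alive[np.argsort(np.abs(w[alive]))[:k]]
        mask[drop] = False
    return mask

rng = np.random.default_rng(0)
w = rng.normal(size=10_000)
mask = np.ones_like(w, dtype=bool)
batch = 32

for epoch in range(5):
    # ... training steps would go here, always applying w * mask ...
    mask = prune_smallest(w, mask, frac=0.2)   # hard prune: weights stay zero
    freed = (~mask).mean()
    batch = int(32 * (1 + 2 * freed))          # grow batch as memory frees up
    print(f"epoch {epoch}: {mask.mean():.0%} weights alive, batch={batch}")
```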
- FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks [14.916996986290902]
Spiking Neural Networks (SNNs) offer unsupervised learning capability due to the spike-timing-dependent plasticity (STDP) rule.
However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy.
We propose FSpiNN, an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference processing.
arXiv Detail & Related papers (2020-07-17T09:40:26Z)
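The STDP rule that FSpiNN builds on can be stated compactly: potentiate a synapse when the presynaptic spike precedes the postsynaptic one, depress it otherwise, with exponentially decaying magnitude in the timing difference. The amplitudes and time constant below are illustrative:

```python
# Classic pair-based spike-timing-dependent plasticity (STDP).
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012   # learning-rate amplitudes (illustrative)
TAU = 20.0                      # STDP time constant in ms

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt >= 0:                 # pre before post: causal -> potentiation
        return A_PLUS * np.exp(-dt / TAU)
    else:                       # post before pre: anti-causal -> depression
        return -A_MINUS * np.exp(dt / TAU)

for dt in (-40, -10, 0, 10, 40):
    print(f"dt={dt:+d} ms -> dw={stdp_dw(0, dt):+.4f}")
```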
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
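A core ingredient of many ANN-to-SNN conversion pipelines is data-based weight normalization, which rescales each layer so ReLU activations map onto firing rates at or below the spiking threshold. The sketch below shows only this generic step; the paper's progressive tandem scheme adds layer-wise learning on top:

```python
# Data-based weight normalization for ANN-to-SNN conversion (generic step).
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# Toy 2-layer ReLU ANN plus calibration inputs in [0, 1].
W1 = rng.normal(size=(64, 100)) * 0.1
W2 = rng.normal(size=(10, 64)) * 0.1
X = rng.random((256, 100))

# lambda_l = max activation of layer l; scaling W_l by lambda_{l-1}/lambda_l
# keeps converted firing rates in [0, 1].
a1 = relu(X @ W1.T); lam1 = a1.max()
a2 = relu(a1 @ W2.T); lam2 = a2.max()
W1_snn = W1 / lam1              # lambda_0 = 1 since inputs are already in [0, 1]
W2_snn = W2 * lam1 / lam2

out = relu(relu(X @ W1_snn.T) @ W2_snn.T)
print("max normalized activation:", out.max())   # ~1.0 by construction
```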
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
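Reading the title literally, a rectified linear PSP kernel makes each input spike at time t_i contribute w * max(0, t - t_i) to the membrane potential, which is piecewise linear in the spike times and hence convenient for backpropagation. The sketch below encodes that assumed kernel, not the paper's full model:

```python
# Membrane potential under an assumed rectified-linear PSP kernel.
import numpy as np

def membrane_potential(t, spike_times, weights):
    """Potential at time t: each spike contributes a weighted linear ramp."""
    contrib = np.maximum(0.0, t - np.asarray(spike_times))
    return float(np.dot(weights, contrib))

spike_times = [1.0, 3.0, 4.0]
weights = [0.5, 0.8, -0.3]
for t in (0.0, 2.0, 5.0):
    print(f"t={t}: V={membrane_potential(t, spike_times, weights):+.2f}")
# Where t > t_i, dV/dt_i = -w_i: the well-defined derivative that makes
# backpropagation through spike times tractable.
```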