lpSpikeCon: Enabling Low-Precision Spiking Neural Network Processing for
Efficient Unsupervised Continual Learning on Autonomous Agents
- URL: http://arxiv.org/abs/2205.12295v1
- Date: Tue, 24 May 2022 18:08:16 GMT
- Title: lpSpikeCon: Enabling Low-Precision Spiking Neural Network Processing for
Efficient Unsupervised Continual Learning on Autonomous Agents
- Authors: Rachmad Vidya Wicaksana Putra, Muhammad Shafique
- Abstract summary: We propose lpSpikeCon, a novel methodology to enable low-precision SNN processing for efficient unsupervised continual learning.
Our lpSpikeCon can reduce the weight memory of the SNN model by 8x (i.e., by judiciously employing 4-bit weights) for online training with unsupervised continual learning.
- Score: 14.916996986290902
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances have shown that SNN-based systems can efficiently perform
unsupervised continual learning due to their bio-plausible learning rule, e.g.,
Spike-Timing-Dependent Plasticity (STDP). Such learning capabilities are
especially beneficial for use cases like autonomous agents (e.g., robots and
UAVs) that need to continuously adapt to dynamically changing
scenarios/environments, where new data gathered directly from the environment
may have novel features that should be learned online. Current state-of-the-art
works employ high-precision (i.e., 32-bit) weights for both the training and
inference phases, which poses high memory and energy costs, thereby hindering
efficient embedded implementations of such systems for battery-driven mobile
autonomous systems. On the other hand, precision reduction may jeopardize the
quality of unsupervised continual learning due to information loss. To address
this, we propose lpSpikeCon, a novel methodology that enables low-precision SNN
processing for efficient unsupervised continual learning on
resource-constrained autonomous agents/systems. Our lpSpikeCon methodology
employs the following key steps: (1) analyzing the impacts of training the SNN
model under unsupervised continual learning settings with reduced weight
precision on the inference accuracy; (2) leveraging this study to identify SNN
parameters that have a significant impact on the inference accuracy; and (3)
developing an algorithm that searches for the SNN parameter values that
improve the quality of unsupervised continual learning. The experimental
results show that lpSpikeCon can reduce the weight memory of the SNN model by
8x (i.e., by judiciously employing 4-bit weights) for online training with
unsupervised continual learning, while incurring no accuracy loss in the
inference phase compared to the baseline model with 32-bit weights across
different network sizes.
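To make the core idea concrete, the sketch below shows one plausible reading of it: pair-based STDP training in which the weights are re-quantized onto a 4-bit uniform grid after every update. All function names, the specific STDP rule, and the constants are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: online STDP training with 4-bit weights (NumPy).
import numpy as np

def quantize(w, n_bits=4, w_max=1.0):
    # Uniformly quantize weights to 2**n_bits levels in [0, w_max].
    levels = (1 << n_bits) - 1
    return np.round(np.clip(w, 0.0, w_max) / w_max * levels) * (w_max / levels)

def stdp_step(w, pre_trace, post_spikes, lr=0.01, w_max=1.0):
    # Potentiate in proportion to the presynaptic trace when the
    # postsynaptic neuron spikes; apply a small uniform depression.
    dw = lr * np.outer(post_spikes, pre_trace) - lr * 0.1 * post_spikes[:, None]
    # Re-quantize so stored weights never leave the 4-bit grid, which is
    # what cuts the weight memory by 8x versus 32-bit floats.
    return quantize(w + dw, w_max=w_max)

rng = np.random.default_rng(0)
w = quantize(rng.uniform(0.0, 1.0, size=(10, 784)))  # 10 neurons, 784 inputs
for _ in range(100):  # toy online-training loop on random spike activity
    pre_trace = rng.uniform(0.0, 1.0, size=784)
    post_spikes = (rng.uniform(size=10) < 0.1).astype(float)
    w = stdp_step(w, pre_trace, post_spikes)
```

Under this reading, step (3) of the methodology would amount to searching over parameters such as the learning rate or spiking threshold while keeping the quantizer fixed, and picking the values that recover the accuracy lost to quantization.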
Related papers
- A Methodology to Study the Impact of Spiking Neural Network Parameters considering Event-Based Automotive Data [5.59354286094951]
We propose a novel methodology to systematically study and analyze the impact of SNN parameters considering event-based automotive data.
We show that our methodology can improve SNN models for Autonomous Driving (AD) systems over the state-of-the-art.
Our research provides a set of guidelines for SNN parameter enhancements, thereby enabling practical development of SNN-based AD systems.
arXiv Detail & Related papers (2024-04-04T14:48:26Z) - Training Spiking Neural Networks with Local Tandem Learning [96.32026780517097]
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy-efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL).
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity.
arXiv Detail & Related papers (2022-10-10T10:05:00Z) - Online Training Through Time for Spiking Neural Networks [66.7744060103562]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency.
We propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning.
arXiv Detail & Related papers (2022-10-09T07:47:56Z) - An Unsupervised STDP-based Spiking Neural Network Inspired By
Biologically Plausible Learning Rules and Connections [10.188771327458651]
Spike-timing-dependent plasticity (STDP) is a general learning rule in the brain, but spiking neural networks (SNNs) trained with STDP alone are inefficient and perform poorly.
We design an adaptive synaptic filter and introduce an adaptive spiking threshold to enrich the representation ability of SNNs.
Our model achieves the current state-of-the-art performance of unsupervised STDP-based SNNs on the MNIST and Fashion-MNIST datasets.
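As a concrete illustration of the adaptive-threshold idea, the sketch below implements a common homeostatic rule for leaky integrate-and-fire neurons: the effective threshold jumps after each spike and slowly decays back, which stops any single neuron from dominating the activity. The exact filter and threshold dynamics in the cited paper may differ; all constants here are illustrative.

```python
import numpy as np

def lif_adaptive_step(v, theta, i_in, v_th=1.0, tau_v=20.0,
                      tau_theta=100.0, theta_plus=0.05):
    # Leaky integrate-and-fire membrane update.
    v = v + (i_in - v) / tau_v
    spikes = v >= (v_th + theta)   # effective threshold = base + adaptation
    v = np.where(spikes, 0.0, v)   # reset membrane on spike
    # Homeostasis: raise the threshold of neurons that just spiked,
    # and let all thresholds decay back toward the base value.
    theta = theta * (1.0 - 1.0 / tau_theta) + theta_plus * spikes
    return v, spikes.astype(float), theta

v, theta = np.zeros(10), np.zeros(10)
for _ in range(200):  # drive the neurons with random input current
    v, spikes, theta = lif_adaptive_step(v, theta, np.random.uniform(0.0, 2.0, 10))
```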
arXiv Detail & Related papers (2022-07-06T14:53:32Z) - Recursive Least-Squares Estimator-Aided Online Learning for Visual
Tracking [58.14267480293575]
We propose a simple yet effective online learning approach for few-shot online adaptation without requiring offline training.
It has a built-in memory retention mechanism that allows the model to remember knowledge about previously seen objects.
We evaluate our approach on two networks from the online-learning family of trackers, i.e., multi-layer perceptrons in RT-MDNet and convolutional neural networks in DiMP.
arXiv Detail & Related papers (2021-12-28T06:51:18Z) - Low-Precision Training in Logarithmic Number System using Multiplicative
Weight Update [49.948082497688404]
Training large-scale deep neural networks (DNNs) currently requires a significant amount of energy, leading to serious environmental impacts.
One promising approach to reduce the energy costs is representing DNNs with low-precision numbers.
We jointly design a low-precision training framework involving a logarithmic number system (LNS) and a multiplicative weight update training method, termed LNS-Madam.
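The pairing is natural because a multiplicative update on a weight becomes an additive update on its exponent when weights are stored in a logarithmic number system. The sketch below shows a simplified signed multiplicative step in the log domain; it illustrates this pairing only and is not the exact LNS-Madam update rule.

```python
import numpy as np

def lns_multiplicative_step(sign, log2_mag, grad, lr=0.01, frac_bits=8):
    # Weights are stored as w = sign * 2**log2_mag. A multiplicative
    # update (grow or shrink |w|) is just addition on log2_mag.
    step = lr * np.sign(grad) * sign   # same signs -> shrink |w|, else grow
    log2_mag = log2_mag - step
    # Keep the exponent on a fixed-point grid, as an LNS datapath would.
    scale = 2.0 ** frac_bits
    return sign, np.round(log2_mag * scale) / scale

sign = np.sign(np.random.randn(16))
log2_mag = np.full(16, -3.0)   # all weights start at magnitude 1/8
grad = np.random.randn(16)     # stand-in gradients
sign, log2_mag = lns_multiplicative_step(sign, log2_mag, grad)
```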
arXiv Detail & Related papers (2021-06-26T00:32:17Z) - SpikeDyn: A Framework for Energy-Efficient Spiking Neural Networks with
Continual and Unsupervised Learning Capabilities in Dynamic Environments [14.727296040550392]
Spiking Neural Networks (SNNs) have the potential for efficient unsupervised and continual learning because of their biological plausibility.
We propose SpikeDyn, a framework for energy-efficient SNNs with continual and unsupervised learning capabilities in dynamic environments.
arXiv Detail & Related papers (2021-02-28T08:26:23Z) - S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural
Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills the final prediction distribution of real-valued networks into binary networks.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z) - Dynamic Hard Pruning of Neural Networks at the Edge of the Internet [11.605253906375424]
The Dynamic Hard Pruning (DynHP) technique incrementally prunes the network during training.
DynHP enables a tunable size reduction of the final neural network and reduces the NN memory occupancy during training.
Freed memory is reused by a dynamic batch sizing approach to counterbalance the accuracy degradation caused by the hard pruning strategy.
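For intuition, the sketch below shows one generic form of hard pruning during training: weights below a magnitude threshold are zeroed in place, so the freed parameter budget can be traded for larger batches. DynHP's actual pruning criterion and schedule may differ; this is only an illustration.

```python
import numpy as np

def hard_prune(w, keep_frac):
    # Keep only the top keep_frac of weights by magnitude; zero the rest.
    k = max(1, int(w.size * keep_frac))
    thresh = np.partition(np.abs(w).ravel(), -k)[-k]
    return np.where(np.abs(w) >= thresh, w, 0.0)

w = np.random.randn(256, 256)
for epoch in range(10):
    # ... one training epoch would go here ...
    w = hard_prune(w, keep_frac=1.0 - 0.05 * (epoch + 1))  # prune harder over time
```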
arXiv Detail & Related papers (2020-11-17T10:23:28Z) - FSpiNN: An Optimization Framework for Memory- and Energy-Efficient
Spiking Neural Networks [14.916996986290902]
Spiking Neural Networks (SNNs) offer unsupervised learning capability due to the spike-timing-dependent plasticity (STDP) rule.
However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy.
We propose FSpiNN, an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference processing.
arXiv Detail & Related papers (2020-07-17T09:40:26Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in terms of low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.