ReSpawn: Energy-Efficient Fault-Tolerance for Spiking Neural Networks
considering Unreliable Memories
- URL: http://arxiv.org/abs/2108.10271v1
- Date: Mon, 23 Aug 2021 16:17:33 GMT
- Title: ReSpawn: Energy-Efficient Fault-Tolerance for Spiking Neural Networks
considering Unreliable Memories
- Authors: Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad
Shafique
- Abstract summary: Spiking neural networks (SNNs) have shown potential for low-energy operation with unsupervised learning capabilities.
However, they may suffer from accuracy degradation when processed in the presence of hardware-induced memory faults.
We propose ReSpawn, a novel framework for mitigating the negative impacts of faults in both the off-chip and on-chip memories.
- Score: 14.933137030206286
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking neural networks (SNNs) have shown potential for low-energy
operation with unsupervised learning capabilities, owing to their
biologically inspired computation. However, they may suffer from accuracy
degradation when their processing is performed in the presence of
hardware-induced faults in memories, which can arise from manufacturing
defects or voltage-induced approximation errors. Since recent works still focus on fault modeling and
random fault injection in SNNs, the impact of memory faults in SNN hardware
architectures on accuracy and the respective fault-mitigation techniques are
not thoroughly explored. Toward this, we propose ReSpawn, a novel framework for
mitigating the negative impacts of faults in both the off-chip and on-chip
memories for resilient and energy-efficient SNNs. The key mechanisms of ReSpawn
are: (1) analyzing the fault tolerance of SNNs; and (2) improving the SNN fault
tolerance through (a) fault-aware mapping (FAM) in memories, and (b)
fault-aware training-and-mapping (FATM). If the training dataset is not fully
available, FAM is employed through efficient bit-shuffling techniques that
place the significant bits on the non-faulty memory cells and the insignificant
bits on the faulty ones, while minimizing the memory access energy. Meanwhile,
if the training dataset is fully available, FATM is employed by considering the
faulty memory cells in the data mapping and training processes. The
experimental results show that, compared to the baseline SNN without
fault-mitigation techniques, ReSpawn with a fault-aware mapping scheme improves
the accuracy by up to 70% for a network with 900 neurons without retraining.
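The fault-aware mapping (FAM) idea from the abstract, placing significant bits on non-faulty memory cells and insignificant bits on faulty ones, can be illustrated with a minimal sketch. The function names, the fault-map format (a set of faulty bit positions per memory word), and the permutation scheme below are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of fault-aware mapping (FAM) via bit-shuffling, assuming a
# per-word fault map given as a set of faulty cell positions. Significant
# bits of a quantized weight are steered onto non-faulty cells, so a faulty
# cell can only corrupt a low-significance bit.

def fam_shuffle(weight, faulty_bits, width=8):
    """Permute the bits of `weight` so the most significant bits occupy
    non-faulty cell positions. Returns the word as stored in memory."""
    healthy = [p for p in range(width - 1, -1, -1) if p not in faulty_bits]
    faulty = [p for p in range(width - 1, -1, -1) if p in faulty_bits]
    order = healthy + faulty  # cells for logical bits MSB -> LSB
    stored = 0
    for logical_pos, cell_pos in zip(range(width - 1, -1, -1), order):
        bit = (weight >> logical_pos) & 1
        stored |= bit << cell_pos
    return stored

def fam_unshuffle(stored, faulty_bits, width=8):
    """Inverse permutation, applied when reading the weight back."""
    healthy = [p for p in range(width - 1, -1, -1) if p not in faulty_bits]
    faulty = [p for p in range(width - 1, -1, -1) if p in faulty_bits]
    order = healthy + faulty
    weight = 0
    for logical_pos, cell_pos in zip(range(width - 1, -1, -1), order):
        bit = (stored >> cell_pos) & 1
        weight |= bit << logical_pos
    return weight
```

With this permutation, a stuck-at fault in, say, cell 7 of an 8-bit word corrupts only the logical LSB, bounding the weight error to 1 instead of 128.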
Related papers
- NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural
Network Inference in Low-Voltage Regimes [52.51014498593644]
Deep neural networks (DNNs) have become ubiquitous in machine learning, but their energy consumption remains a notable issue.
We introduce NeuralFuse, a novel add-on module that addresses the accuracy-energy tradeoff in low-voltage regimes.
At a 1% bit error rate, NeuralFuse can reduce memory access energy by up to 24% while recovering accuracy by up to 57%.
arXiv Detail & Related papers (2023-06-29T11:38:22Z)
- RescueSNN: Enabling Reliable Executions on Spiking Neural Network
Accelerators under Permanent Faults [15.115813664357436]
RescueSNN is a novel methodology to mitigate permanent faults in the compute engine of SNN chips.
RescueSNN improves accuracy by up to 80% while keeping the throughput reduction below 25% at high fault rates.
arXiv Detail & Related papers (2023-04-08T15:24:57Z)
- CRAFT: Criticality-Aware Fault-Tolerance Enhancement Techniques for
Emerging Memories-Based Deep Neural Networks [7.566423455230909]
Deep Neural Networks (DNNs) have emerged as the most effective programming paradigm for computer vision and natural language processing applications.
This paper proposes CRAFT, i.e., Criticality-Aware Fault-Tolerance Enhancement Techniques to enhance the reliability of NVM-based DNNs.
arXiv Detail & Related papers (2023-02-08T03:39:11Z)
- CorrectNet: Robustness Enhancement of Analog In-Memory Computing for
Neural Networks by Error Suppression and Compensation [4.570841222958966]
We propose a framework to enhance the robustness of neural networks under variations and noise.
We show that the inference accuracy of neural networks can be recovered from as low as 1.69% under variations and noise.
arXiv Detail & Related papers (2022-11-27T19:13:33Z)
- Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
Efficiently training SNNs is challenging due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which can achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with
Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Distribution-sensitive Information Retention for Accurate Binary Neural
Network [49.971345958676196]
We present a novel Distribution-sensitive Information Retention Network (DIR-Net) to retain the information of the forward activations and backward gradients.
Our DIR-Net consistently outperforms the SOTA binarization approaches under mainstream and compact architectures.
We deploy our DIR-Net on real-world resource-limited devices, achieving 11.1x storage saving and 5.4x speedup.
arXiv Detail & Related papers (2021-09-25T10:59:39Z)
- FAT: Training Neural Networks for Reliable Inference Under Hardware
Faults [3.191587417198382]
We present a novel methodology called fault-aware training (FAT), which includes error modeling during neural network (NN) training, to make QNNs resilient to specific fault models on the device.
FAT has been validated for numerous classification tasks including CIFAR10, GTSRB, SVHN and ImageNet.
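The core mechanism of fault-aware training, modeling hardware bit errors during training so the network learns to tolerate them, can be sketched briefly. The function name, the bit-error-rate parameter, and the list-of-integer-weights representation below are illustrative assumptions, not the FAT paper's actual implementation.

```python
# Hypothetical sketch of error injection for fault-aware training: each
# stored bit of each quantized weight is flipped with probability `ber`
# (bit error rate), mimicking an unreliable memory read. During training,
# each forward pass would read weights through this corruption model.
import random

def inject_bit_errors(weights, ber, width=8, rng=None):
    """Return a copy of `weights` with each bit flipped with probability `ber`."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    out = []
    for w in weights:
        for b in range(width):
            if rng.random() < ber:
                w ^= 1 << b  # flip bit b
        out.append(w)
    return out
```

Training against such injected errors encourages the network to settle on weights whose function is insensitive to the modeled fault pattern.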
arXiv Detail & Related papers (2020-11-11T16:09:39Z)
- Towards Explainable Bit Error Tolerance of Resistive RAM-Based Binarized
Neural Networks [7.349786872131006]
Non-volatile memory, such as resistive RAM (RRAM), is an emerging energy-efficient storage.
Binary neural networks (BNNs) can tolerate a certain percentage of errors without a loss in accuracy.
The bit error tolerance (BET) in BNNs can be achieved by flipping the weight signs during training.
arXiv Detail & Related papers (2020-02-03T17:38:45Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in the original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.