Enhancing Fault Resilience of QNNs by Selective Neuron Splitting
- URL: http://arxiv.org/abs/2306.09973v1
- Date: Fri, 16 Jun 2023 17:11:55 GMT
- Title: Enhancing Fault Resilience of QNNs by Selective Neuron Splitting
- Authors: Mohammad Hasan Ahmadilivani, Mahdi Taheri, Jaan Raik, Masoud
Daneshtalab, and Maksim Jenihhin
- Abstract summary: Quantized Neural Networks (QNNs) have emerged to tackle the complexity of Deep Neural Networks (DNNs).
In this paper, a recent analytical resilience assessment method is adapted for QNNs to identify critical neurons based on a Neuron Vulnerability Factor (NVF).
A novel method for splitting the critical neurons is proposed that enables the design of a Lightweight Correction Unit (LCU) in the accelerator without redesigning its computational part.
- Score: 1.1091582432763736
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The superior performance of Deep Neural Networks (DNNs) has led to their
application in various aspects of human life. Safety-critical applications are
no exception and impose rigorous reliability requirements on DNNs. Quantized
Neural Networks (QNNs) have emerged to tackle the complexity of DNN
accelerators; however, they are more prone to reliability issues.
In this paper, a recent analytical resilience assessment method is adapted
for QNNs to identify critical neurons based on a Neuron Vulnerability Factor
(NVF). Thereafter, a novel method for splitting the critical neurons is
proposed that enables the design of a Lightweight Correction Unit (LCU) in the
accelerator without redesigning its computational part.
The method is validated by experiments on different QNNs and datasets. The
results demonstrate that the proposed fault-correction method incurs half
the overhead of selective Triple Modular Redundancy (TMR) while achieving a
similar level of fault resiliency.
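As a rough, hypothetical illustration of the pipeline (the NVF proxy, the toy layer sizes, and all names below are assumptions of this sketch, not the authors' implementation), one can estimate per-neuron vulnerability by fault injection and then split the most critical neurons into two half-weight copies whose sum reproduces the original activation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully-connected layer pair: x -> h (ReLU) -> logits.
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(10, 16))

def forward(x, w1, w2, flip_neuron=None):
    h = np.maximum(w1 @ x, 0.0)
    if flip_neuron is not None:          # crude fault model: corrupt one activation
        h[flip_neuron] += 2.0 ** rng.integers(0, 4)
    return w2 @ h

def nvf(neuron, samples):
    """Hypothetical NVF proxy: fraction of inputs whose prediction
    changes when this neuron's output is corrupted."""
    flips = 0
    for x in samples:
        clean = np.argmax(forward(x, W1, W2))
        faulty = np.argmax(forward(x, W1, W2, flip_neuron=neuron))
        flips += clean != faulty
    return flips / len(samples)

samples = [rng.normal(size=8) for _ in range(64)]
scores = np.array([nvf(n, samples) for n in range(W1.shape[0])])
critical = np.argsort(scores)[-4:]       # top-k most vulnerable neurons

# Split each critical neuron: duplicate its fan-in row and halve both
# fan-out columns, so the two half-neurons sum to the original activation.
W1_split = np.vstack([W1, W1[critical]])
W2_split = np.hstack([W2, W2[:, critical]])
W2_split[:, critical] *= 0.5
W2_split[:, -len(critical):] *= 0.5

x = rng.normal(size=8)
assert np.allclose(forward(x, W1, W2), forward(x, W1_split, W2_split))
```

Because the two halves are redundant, a lightweight unit could compare them and correct a faulty half without touching the accelerator's computational datapath, which is the intuition behind the LCU described in the abstract.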
Related papers
- Mitigating multiple single-event upsets during deep neural network inference using fault-aware training [0.0]
Deep neural networks (DNNs) are increasingly used in safety-critical applications.
This study analyses the impact of multiple single-bit single-event upsets in DNNs by performing fault injection at the model level.
A fault-aware training (FAT) methodology is proposed that improves the DNNs' robustness to faults without any modification to the hardware.
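A minimal sketch of the general fault-aware-training idea (PyTorch, with a simplified sign-flip fault model of my own; the paper's exact injection procedure may differ): transient faults are injected into the weights for each forward pass so gradients reflect the perturbed network, and the clean weights are restored before the optimizer step:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def inject(m, p=1e-3):
    """Simplified SEU model: sign-flip a random subset of weights,
    standing in for bit flips in the stored number format."""
    with torch.no_grad():
        for w in m.parameters():
            mask = torch.rand_like(w) < p
            w[mask] = -w[mask]

for step in range(100):
    x = torch.randn(64, 20)
    y = (x.sum(dim=1) > 0).long()              # synthetic labels
    clean = [w.detach().clone() for w in model.parameters()]
    inject(model)                              # transient faults for this pass only
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    with torch.no_grad():                      # restore clean weights, then update
        for w, c in zip(model.parameters(), clean):
            w.copy_(c)
    opt.step()
```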
arXiv Detail & Related papers (2025-02-13T14:43:22Z)
- Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval.
A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed.
The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
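The SNN-conversion step itself is in the paper; the standard identity it builds on, expressing a complex-valued layer with real-valued operations via (a+ib)(x+iy) = (ax-by) + i(ay+bx), can be sketched as follows (layer and variable names are mine):

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution out = W * z realized with two real convs:
    Re = conv_r(x) - conv_i(y),  Im = conv_r(y) + conv_i(x)."""
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
        self.conv_i = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)

    def forward(self, x, y):                    # x = Re(z), y = Im(z)
        return (self.conv_r(x) - self.conv_i(y),
                self.conv_r(y) + self.conv_i(x))

layer = ComplexConv2d(1, 4, 3)
re, im = layer(torch.randn(2, 1, 8, 8), torch.randn(2, 1, 8, 8))
print(re.shape, im.shape)                       # (2, 4, 8, 8) each
```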
arXiv Detail & Related papers (2024-12-05T09:41:33Z)
- Q-SNNs: Quantized Spiking Neural Networks [12.719590949933105]
Spiking Neural Networks (SNNs) leverage sparse spikes to represent information and process them in an event-driven manner.
We introduce a lightweight and hardware-friendly Quantized SNN that applies quantization to both synaptic weights and membrane potentials.
We present a new Weight-Spike Dual Regulation (WS-DR) method inspired by information entropy theory.
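The paper's exact quantizer and the WS-DR regulation are not given in the summary; a generic symmetric uniform quantizer of the kind typically applied to synaptic weights and membrane potentials might look like this (bit widths and scaling are assumptions):

```python
import torch

def uniform_quantize(t, bits=2):
    """Symmetric uniform quantization to 2^bits levels, as commonly
    used for weights and membrane potentials in quantized SNN work."""
    qmax = 2 ** (bits - 1) - 1
    scale = t.abs().max().clamp(min=1e-8) / qmax
    return torch.round(t / scale).clamp(-qmax - 1, qmax) * scale

w = torch.randn(8, 8)
v_mem = torch.randn(8)                          # membrane potentials
print(uniform_quantize(w, bits=2).unique())     # only a few discrete levels
print(uniform_quantize(v_mem, bits=4))
```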
arXiv Detail & Related papers (2024-06-19T16:23:26Z)
- Optimal ANN-SNN Conversion with Group Neurons [39.14228133571838]
Spiking Neural Networks (SNNs) have emerged as a promising third generation of neural networks.
The lack of effective learning algorithms remains a challenge for SNNs.
We introduce a novel type of neuron called Group Neurons (GNs).
arXiv Detail & Related papers (2024-02-29T11:41:12Z)
- Harnessing Neuron Stability to Improve DNN Verification [42.65507402735545]
We present VeriStable, a novel extension of the recently proposed DPLL-based constraint-solving approach to DNN verification.
We evaluate the effectiveness of VeriStable across a range of challenging benchmarks, including fully-connected feedforward networks (FNNs), convolutional neural networks (CNNs), and residual networks (ResNets).
Preliminary results show that VeriStable is competitive and outperforms state-of-the-art verification tools, including $\alpha$-$\beta$-CROWN and MN-BaB, the first- and second-place performers in the VNN-COMP, respectively.
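VeriStable's DPLL integration is beyond a snippet, but the notion of neuron stability it exploits is simple: if interval bounds show a ReLU's pre-activation cannot cross zero, the ReLU behaves linearly and adds no case split. A minimal interval-propagation check (a toy example of my own, not the tool's code):

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an input box through x -> Wx + b (standard interval arithmetic)."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

rng = np.random.default_rng(0)
W, b = rng.normal(size=(6, 4)), rng.normal(size=6)
lo, hi = -0.1 * np.ones(4), 0.1 * np.ones(4)    # small input region

pre_lo, pre_hi = interval_affine(lo, hi, W, b)
stable_active = pre_lo >= 0                      # ReLU provably identity here
stable_inactive = pre_hi <= 0                    # ReLU provably zero here
unstable = ~(stable_active | stable_inactive)    # these neurons force case splits
print(stable_active.sum(), stable_inactive.sum(), unstable.sum())
```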
arXiv Detail & Related papers (2024-01-19T23:48:04Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
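Two of the coding schemes such hybrid designs can mix, rate coding and latency (time-to-first-spike) coding, can be sketched as simple encoders (a generic illustration, not the paper's scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(x, T=20):
    """Rate coding: intensity -> per-timestep spike probability (T x N spikes)."""
    return (rng.random((T, x.size)) < x).astype(np.int8)

def latency_encode(x, T=20):
    """Latency coding: stronger inputs spike earlier; one spike per neuron."""
    t_spike = np.round((1.0 - x) * (T - 1)).astype(int)
    spikes = np.zeros((T, x.size), dtype=np.int8)
    spikes[t_spike, np.arange(x.size)] = 1
    return spikes

x = np.array([0.1, 0.5, 0.9])                 # normalized intensities in [0, 1]
print(rate_encode(x).sum(axis=0))             # spike counts roughly scale with x
print(latency_encode(x).argmax(axis=0))       # weak inputs spike later
```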
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
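DSR itself differentiates through a spike representation rather than through individual spikes; for contrast, the more common surrogate-gradient workaround for the same non-differentiability looks like this (a standard textbook construction, not DSR):

```python
import torch

THRESHOLD = 1.0

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient: the usual
    workaround for the non-differentiable firing threshold."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= THRESHOLD).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * ((v - THRESHOLD).abs() < 0.5).float()

v = torch.randn(5, requires_grad=True)
spikes = SpikeFn.apply(v)
spikes.sum().backward()
print(spikes, v.grad)       # gradient is nonzero only near the threshold
```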
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- BackEISNN: A Deep Spiking Neural Network with Adaptive Self-Feedback and Balanced Excitatory-Inhibitory Neurons [8.956708722109415]
Spiking neural networks (SNNs) transmit information through discrete spikes and perform well in processing spatio-temporal information.
We propose a deep spiking neural network with adaptive self-feedback and balanced excitatory and inhibitory neurons (BackEISNN).
For the MNIST, FashionMNIST, and N-MNIST datasets, our model has achieved state-of-the-art performance.
arXiv Detail & Related papers (2021-05-27T08:38:31Z)
- Pruning of Deep Spiking Neural Networks through Gradient Rewiring [41.64961999525415]
Spiking Neural Networks (SNNs) have attracted great attention due to their biological plausibility and high energy efficiency on neuromorphic chips.
Most existing methods directly apply pruning approaches in artificial neural networks (ANNs) to SNNs, which ignore the difference between ANNs and SNNs.
We propose gradient rewiring (Grad R), a joint learning algorithm of connectivity and weight for SNNs, that enables us to seamlessly optimize network structure without retraining.
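Grad R's exact parameterization and gradient redefinition are in the paper; the general idea of learning connectivity jointly with weights can be sketched by routing each weight through a trainable parameter whose sign gates whether the synapse exists (my own simplification, loosely following the Deep R family of rewiring methods):

```python
import torch
import torch.nn as nn

class RewiredLinear(nn.Module):
    """Sketch of joint weight/connectivity learning: each synapse has a
    trainable parameter theta and a fixed sign; the effective weight is
    sign * relu(theta), so a synapse is pruned (weight 0) whenever theta
    goes negative. Grad R additionally redefines the gradient so pruned
    synapses can regrow, which this sketch omits."""
    def __init__(self, in_f, out_f):
        super().__init__()
        self.theta = nn.Parameter(torch.randn(out_f, in_f) * 0.1)
        self.register_buffer(
            "sign", torch.randint(0, 2, (out_f, in_f)).float() * 2 - 1)

    def forward(self, x):
        w = self.sign * torch.relu(self.theta)   # pruned where theta < 0
        return nn.functional.linear(x, w)

layer = RewiredLinear(10, 4)
out = layer(torch.randn(3, 10))
sparsity = (layer.theta < 0).float().mean()
print(out.shape, f"pruned fraction: {sparsity:.2f}")
```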
arXiv Detail & Related papers (2021-05-11T10:05:53Z)
- Skip-Connected Self-Recurrent Spiking Neural Networks with Joint Intrinsic Parameter and Synaptic Weight Training [14.992756670960008]
We propose a new type of RSNN called Skip-Connected Self-Recurrent SNNs (ScSr-SNNs).
ScSr-SNNs can boost performance by up to 2.55% compared with other types of RSNNs trained by state-of-the-art BP methods.
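The summary gives only the architecture's name; reading it literally, a minimal LIF layer with a self-recurrent (per-neuron diagonal) feedback term, whose spike output can then feed a skip connection, might look like this (entirely a reconstruction from the name, not the authors' model):

```python
import torch
import torch.nn as nn

class SelfRecurrentLIF(nn.Module):
    """LIF layer where each neuron receives its own previous spike back
    (diagonal recurrence only); the returned spike train can be combined
    with the layer input downstream as a skip connection."""
    def __init__(self, in_f, out_f, tau=2.0, v_th=1.0):
        super().__init__()
        self.fc = nn.Linear(in_f, out_f)
        self.self_w = nn.Parameter(torch.zeros(out_f))  # per-neuron feedback
        self.tau, self.v_th = tau, v_th

    def forward(self, x_seq):                   # x_seq: (T, batch, in_f)
        v = torch.zeros(x_seq.shape[1], self.fc.out_features)
        s = torch.zeros_like(v)
        out = []
        for x in x_seq:
            i_in = self.fc(x) + self.self_w * s  # input + self-recurrent spike
            v = v + (i_in - v) / self.tau        # leaky integration
            s = (v >= self.v_th).float()
            v = v * (1.0 - s)                    # hard reset after a spike
            out.append(s)
        return torch.stack(out)

layer = SelfRecurrentLIF(8, 8)
spikes = layer(torch.rand(10, 4, 8))
print(spikes.shape)                              # torch.Size([10, 4, 8])
```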
arXiv Detail & Related papers (2020-10-23T22:27:13Z)