eFAT: Improving the Effectiveness of Fault-Aware Training for Mitigating
Permanent Faults in DNN Hardware Accelerators
- URL: http://arxiv.org/abs/2304.12949v1
- Date: Thu, 20 Apr 2023 01:35:11 GMT
- Title: eFAT: Improving the Effectiveness of Fault-Aware Training for Mitigating
Permanent Faults in DNN Hardware Accelerators
- Authors: Muhammad Abdullah Hanif, Muhammad Shafique
- Abstract summary: Fault-Aware Training (FAT) has emerged as a highly effective technique for addressing permanent faults in DNN accelerators.
FAT is required to be performed for each faulty chip individually, considering its unique fault map.
We propose the concepts of resilience-driven retraining amount selection and resilience-driven grouping and fusion of multiple fault maps.
- Score: 15.344503991760275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fault-Aware Training (FAT) has emerged as a highly effective technique for
addressing permanent faults in DNN accelerators, as it offers fault mitigation
without significant performance or accuracy loss, specifically at low and
moderate fault rates. However, it leads to very high retraining overheads,
especially when used for large DNNs designed for complex AI applications.
Moreover, as each fabricated chip can have a distinct fault pattern, FAT is
required to be performed for each faulty chip individually, considering its
unique fault map, which further aggravates the problem. To reduce the overheads
of FAT while maintaining its benefits, we propose (1) the concepts of
resilience-driven retraining amount selection, and (2) resilience-driven
grouping and fusion of multiple fault maps (belonging to different chips) to
perform consolidated retraining for a group of faulty chips. To realize these
concepts, in this work, we present a novel framework, eFAT, that computes the
resilience of a given DNN to faults at different fault rates and with different
levels of retraining, and it uses that knowledge to build a resilience map
given a user-defined accuracy constraint. Then, it uses the resilience map to
compute the amount of retraining required for each chip, considering its unique
fault map. Afterward, it performs resilience and reward-driven grouping and
fusion of fault maps to further reduce the number of retraining iterations
required for tuning the given DNN for the given set of faulty chips. We
demonstrate the effectiveness of our framework for a systolic array-based DNN
accelerator experiencing permanent faults in the computational array. Our
extensive results for numerous chips show that the proposed technique
significantly reduces the retraining cost when used for tuning a DNN for
multiple faulty chips.
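The paper itself does not include code; the following minimal sketch (the resilience table, fault maps, and thresholds are invented for illustration) shows the two ideas at the core of eFAT: looking up a per-chip retraining amount from a resilience map, and fusing fault maps so that one retraining run can serve a group of chips.

```python
# Hypothetical sketch of eFAT-style resilience-driven retraining selection
# and fault-map grouping; the resilience table, fault maps, and thresholds
# are invented for illustration and are not taken from the paper.
import numpy as np

# resilience_map[retraining_epochs] = highest fault rate the DNN tolerates
# after that much retraining while still meeting the accuracy constraint.
RESILIENCE_MAP = {0: 0.001, 2: 0.005, 5: 0.02, 10: 0.05}

def fault_rate(fault_map):
    """Fraction of faulty PEs in a boolean (rows x cols) fault map."""
    return float(np.mean(fault_map))

def epochs_needed(fault_map):
    """Smallest retraining amount whose tolerated fault rate covers this map."""
    rate = fault_rate(fault_map)
    for epochs in sorted(RESILIENCE_MAP):
        if rate <= RESILIENCE_MAP[epochs]:
            return epochs
    return float("inf")  # constraint cannot be met at any profiled level

def group_and_fuse(fault_maps):
    """Greedily fuse fault maps (element-wise OR) when the fused map needs no
    more retraining than the group's worst member already does, so a single
    retraining run on the fused map serves every chip in the group."""
    groups = []  # each group is [fused_map, member_maps]
    for fmap in sorted(fault_maps, key=fault_rate):
        for group in groups:
            fused, members = group
            candidate = np.logical_or(fused, fmap)
            budget = max(epochs_needed(m) for m in members + [fmap])
            if epochs_needed(candidate) <= budget:
                group[0] = candidate
                members.append(fmap)
                break
        else:
            groups.append([fmap.astype(bool), [fmap]])
    return groups

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    chips = [rng.random((32, 32)) < rng.uniform(0.001, 0.04) for _ in range(8)]
    for fused, members in group_and_fuse(chips):
        print(f"{len(members)} chip(s) -> retrain for {epochs_needed(fused)} epochs")
```

In eFAT itself, the resilience map is built by profiling the DNN at different fault rates and retraining levels against a user-defined accuracy constraint, and the grouping is resilience- and reward-driven rather than the simple greedy OR-fusion shown here.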
Related papers
- Algorithmic Strategies for Sustainable Reuse of Neural Network Accelerators with Permanent Faults [9.89051364546275]
We propose novel approaches that quantify permanent hardware faults in neural network (NN) accelerators by uniquely integrating the behavior of the faulty component instead of bypassing it.
We propose several algorithmic mitigation techniques for a subset of stuck-at faults, such as Invertible Scaling or Shifting of activations and weights, or fine tuning with the faulty behavior.
Notably, the proposed techniques do not require any hardware modification, instead relying on existing components of widely used systolic array based accelerators.
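As a loose, hypothetical illustration of the invertible-scaling idea (not the authors' formulation), the toy example below pre-scales weights by 2 before a fixed-point multiply whose LSB is stuck at zero and undoes the scaling at the output, which typically shrinks the fault-induced error.

```python
# Toy, hypothetical illustration of invertible weight scaling around a
# faulty fixed-point multiplier (not the authors' exact formulation).
import numpy as np

def quantize(x, scale=128):
    return np.clip(np.round(x * scale), -128, 127).astype(np.int64)

def faulty_dot(w_q, a_q):
    """Dot product on a unit whose weight operand has its LSB stuck at 0."""
    w_eff = w_q & ~np.int64(1)
    return int(np.sum(w_eff * a_q))

rng = np.random.default_rng(1)
w = rng.uniform(-0.5, 0.5, 64)
a = rng.uniform(-1.0, 1.0, 64)
exact = float(np.dot(w, a))

# Baseline: map the weights as-is onto the faulty multiplier.
base = faulty_dot(quantize(w), quantize(a)) / (128 * 128)

# Invertible scaling: pre-scale the weights by 2 so the information held by
# the stuck LSB moves to a healthy bit, then undo the scaling at the output.
scaled = faulty_dot(quantize(w * 2), quantize(a)) / (2 * 128 * 128)

print(f"error without scaling: {abs(base - exact):.5f}")
print(f"error with 2x scaling: {abs(scaled - exact):.5f}")
```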
arXiv Detail & Related papers (2024-12-17T18:56:09Z)
- TSB: Tiny Shared Block for Efficient DNN Deployment on NVCIM Accelerators [11.496631244103773]
"Tiny Shared Block (TSB)" integrates a small shared 1x1 convolution block into the Deep Neural Network architecture.
TSB achieves over a 20x reduction in the inference accuracy gap, over 5x training speedup, and a lower weights-to-device mapping cost.
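A minimal sketch of how a tiny shared 1x1 convolution block could be wired into a backbone, assuming (purely for illustration) that the block is reused after every stage and is the only part fine-tuned per device; the class and layer names are invented.

```python
import torch
import torch.nn as nn

class TinySharedBlock(nn.Module):
    """One small 1x1 convolution reused across the whole network; in this
    sketched reading of TSB, only this block is tuned per device."""
    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        return self.proj(x)

class Backbone(nn.Module):
    def __init__(self, channels: int = 16, stages: int = 3):
        super().__init__()
        self.stages = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(stages)]
        )
        self.shared = TinySharedBlock(channels)  # a single shared instance

    def forward(self, x):
        for stage in self.stages:
            x = torch.relu(self.shared(stage(x)))
        return x

net = Backbone()
trainable = list(net.shared.parameters())  # per-device tuning touches only these
out = net(torch.randn(1, 16, 32, 32))
```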
arXiv Detail & Related papers (2024-05-08T20:53:38Z)
- Special Session: Approximation and Fault Resiliency of DNN Accelerators [0.9126382223122612]
This paper explores the approximation and fault resiliency of Deep Neural Network accelerators.
We propose to use approximate (AxC) arithmetic circuits to emulate errors in hardware without performing fault injection on the DNN.
We also propose a fine-grain analysis of fault resiliency by examining fault propagation and masking in networks.
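A rough sketch of the error-emulation idea under an assumed approximate-multiplier model: the approximate multiplier below simply truncates low-order bits of the exact product, standing in for an AxC circuit, so its output error can be propagated through a layer without explicit fault injection.

```python
# Hypothetical stand-in for an approximate (AxC) multiplier: emulate the
# hardware error by truncating low-order bits of the exact product.
import numpy as np

def exact_mul(a_q, w_q):
    return a_q * w_q

def approx_mul(a_q, w_q, dropped_bits=4):
    """Truncation-based approximate multiplier model."""
    p = a_q * w_q
    return (p >> dropped_bits) << dropped_bits

rng = np.random.default_rng(0)
a_q = rng.integers(-128, 128, size=256)
w_q = rng.integers(-128, 128, size=256)

err = np.abs(exact_mul(a_q, w_q) - approx_mul(a_q, w_q))
print("mean per-product emulated error:", err.mean())
```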
arXiv Detail & Related papers (2023-05-31T19:27:45Z)
- RescueSNN: Enabling Reliable Executions on Spiking Neural Network Accelerators under Permanent Faults [15.115813664357436]
RescueSNN is a novel methodology to mitigate permanent faults in the compute engine of SNN chips.
RescueSNN improves accuracy by up to 80% while keeping the throughput reduction below 25% under high fault rates.
arXiv Detail & Related papers (2023-04-08T15:24:57Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to the magnitude scale.
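A minimal numpy sketch of the soft-shrinkage step as read from this summary (the percentile schedule and shrink factor are made up): instead of hard-zeroing pruned weights, the smallest-magnitude weights are nudged toward zero a little at every iteration.

```python
import numpy as np

def soft_shrink_step(w, target_sparsity, shrink=0.1):
    """One ISS-P-style iteration (hypothetical parameters): shrink, rather
    than zero, the weights below the current magnitude percentile."""
    threshold = np.quantile(np.abs(w), target_sparsity)
    unimportant = np.abs(w) < threshold
    w = w.copy()
    w[unimportant] *= (1.0 - shrink)  # small tweak proportional to magnitude
    return w

rng = np.random.default_rng(0)
w = rng.normal(size=1000)
for step in range(50):                # interleaved with normal training steps
    w = soft_shrink_step(w, target_sparsity=0.5)
print("fraction of near-zero weights:", np.mean(np.abs(w) < 1e-2))
```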
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- Fault-Aware Design and Training to Enhance DNNs Reliability with Zero-Overhead [67.87678914831477]
Deep Neural Networks (DNNs) enable a wide series of technological advancements.
Recent findings indicate that transient hardware faults may dramatically corrupt the model's predictions.
In this work, we propose to tackle the reliability issue both at training and model design time.
arXiv Detail & Related papers (2022-05-28T13:09:30Z)
- FitAct: Error Resilient Deep Neural Networks via Fine-Grained Post-Trainable Activation Functions [0.05249805590164901]
Deep neural networks (DNNs) are increasingly being deployed in safety-critical systems such as personal healthcare devices and self-driving cars.
In this paper, we propose FitAct, a low-cost approach to enhance the error resilience of DNNs by deploying fine-grained post-trainable activation functions.
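A minimal PyTorch sketch of the general idea, assuming a per-channel trainable clipping bound (FitAct's exact granularity and training procedure may differ): bounding activations limits how far a fault-corrupted value can propagate, and only the bounds are tuned after training.

```python
import torch
import torch.nn as nn

class BoundedReLU(nn.Module):
    """ReLU with a trainable per-channel upper bound (hypothetical stand-in
    for FitAct's fine-grained post-trainable activation functions)."""
    def __init__(self, num_channels: int, init_bound: float = 6.0):
        super().__init__()
        self.bound = nn.Parameter(torch.full((num_channels,), init_bound))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Clamp large (possibly fault-corrupted) activations per channel.
        return torch.minimum(torch.relu(x), self.bound.view(1, -1, 1, 1))

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), BoundedReLU(8))
# Post-training: freeze the weights and fine-tune only the activation bounds.
for p in model[0].parameters():
    p.requires_grad = False
out = model(torch.randn(1, 3, 16, 16))
```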
arXiv Detail & Related papers (2021-12-27T07:07:50Z)
- Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration [71.80326738527734]
We propose a general, fine-grained structured pruning scheme and corresponding compiler optimizations.
We show that our pruning scheme mapping methods, together with the general fine-grained structured pruning scheme, outperform the state-of-the-art DNN optimization framework.
arXiv Detail & Related papers (2021-11-22T23:53:14Z)
- Efficient Micro-Structured Weight Unification and Pruning for Neural Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially for resource limited devices.
Previous unstructured or structured weight pruning methods can hardly truly accelerate inference.
We propose a generalized weight unification framework at a hardware compatible micro-structured level to achieve high amount of compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z)
- FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training [81.85361544720885]
We propose FracTrain that integrates progressive fractional quantization which gradually increases the precision of activations, weights, and gradients.
FracTrain reduces the computational cost and hardware-quantified energy/latency of DNN training while achieving comparable or better (-0.12% to +1.87%) accuracy.
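A toy numpy sketch of the progressive-precision idea (the bit-width schedule and quantizer are assumptions, not FracTrain's exact scheme): early training steps run at very low precision and the bit-width is ratcheted up over time.

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric fake-quantization to the given bit-width."""
    if bits >= 32:
        return x
    scale = (2 ** (bits - 1) - 1) / max(np.abs(x).max(), 1e-8)
    return np.round(x * scale) / scale

def scheduled_bits(step, total_steps, levels=(3, 4, 6, 8)):
    """Progressively increase precision as training proceeds."""
    idx = min(step * len(levels) // total_steps, len(levels) - 1)
    return levels[idx]

w = np.random.default_rng(0).normal(size=4)
for step in range(0, 100, 25):
    bits = scheduled_bits(step, 100)
    print(f"step {step}: {bits}-bit weights ->", quantize(w, bits))
```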
arXiv Detail & Related papers (2020-12-24T05:24:10Z)
- FAT: Training Neural Networks for Reliable Inference Under Hardware Faults [3.191587417198382]
We present a novel methodology called fault-aware training (FAT), which includes error modeling during neural network (NN) training, to make QNNs resilient to specific fault models on the device.
FAT has been validated for numerous classification tasks including CIFAR10, GTSRB, SVHN and ImageNet.
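A minimal PyTorch sketch of error modeling during training under one assumed fault model (weights mapped to stuck-at-zero processing elements); the paper targets quantized networks and richer device-specific fault models, so this only conveys the mechanism.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaultAwareLinear(nn.Linear):
    """Linear layer whose forward pass applies a fixed stuck-at-zero fault
    mask, so training optimizes the network under the faulty behavior."""
    def __init__(self, in_features, out_features, fault_rate=0.01):
        super().__init__(in_features, out_features)
        mask = torch.rand(out_features, in_features) < fault_rate
        self.register_buffer("fault_mask", mask)

    def forward(self, x):
        w = self.weight.masked_fill(self.fault_mask, 0.0)
        return F.linear(x, w, self.bias)

layer = FaultAwareLinear(64, 10, fault_rate=0.02)
loss = layer(torch.randn(8, 64)).sum()
loss.backward()  # gradients flow only through the fault-free weights
```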
arXiv Detail & Related papers (2020-11-11T16:09:39Z)
- Rapid Structural Pruning of Neural Networks with Set-based Task-Adaptive Meta-Pruning [83.59005356327103]
A common limitation of most existing pruning techniques is that they require pre-training of the network at least once before pruning.
We propose STAMP, which task-adaptively prunes a network pretrained on a large reference dataset by generating a pruning mask on it as a function of the target dataset.
We validate STAMP against recent advanced pruning methods on benchmark datasets.
arXiv Detail & Related papers (2020-06-22T10:57:43Z)