Exploring Fault-Energy Trade-offs in Approximate DNN Hardware
Accelerators
- URL: http://arxiv.org/abs/2101.02860v1
- Date: Fri, 8 Jan 2021 05:52:12 GMT
- Title: Exploring Fault-Energy Trade-offs in Approximate DNN Hardware
Accelerators
- Authors: Ayesha Siddique, Kanad Basu, Khaza Anuarul Hoque
- Abstract summary: We present an extensive layer-wise and bit-wise fault resilience and energy analysis of different AxDNNs.
Our results demonstrate that the fault resilience in AxDNNs is orthogonal to the energy efficiency.
- Score: 2.9649783577150837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Systolic array-based deep neural network (DNN) accelerators have recently
gained prominence for their low computational cost. However, their high energy
consumption poses a bottleneck to their deployment in energy-constrained
devices. To address this problem, approximate computing can be employed at the
cost of some tolerable accuracy loss. However, such small accuracy variations
may increase the sensitivity of DNNs towards undesired subtle disturbances,
such as permanent faults. The impact of permanent faults in accurate DNNs has
been thoroughly investigated in the literature. Conversely, the impact of
permanent faults in approximate DNN accelerators (AxDNNs) is yet
under-explored. The impact of such faults may vary with the fault bit
positions, activation functions and approximation errors in AxDNN layers. Such
dynamicity poses a considerable challenge to exploring the trade-off between
energy efficiency and fault resilience in AxDNNs. Towards this, we
present an extensive layer-wise and bit-wise fault resilience and energy
analysis of different AxDNNs, using the state-of-the-art Evoapprox8b signed
multipliers. In particular, we vary the stuck-at-0, stuck-at-1 fault-bit
positions, and activation functions to study their impact using the most widely
used MNIST and Fashion-MNIST datasets. Our quantitative analysis shows that the
permanent faults exacerbate the accuracy loss in AxDNNs when compared to the
accurate DNN accelerators. For instance, a permanent fault in AxDNNs can lead
to up to 66% accuracy loss, whereas the same faulty bit can lead to only 9%
accuracy loss in an accurate DNN accelerator. Our results demonstrate that the
fault resilience in AxDNNs is orthogonal to the energy efficiency.
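For intuition, the bit-wise stuck-at analysis described in the abstract can be approximated in plain software before committing to RTL-level fault injection. The sketch below is a minimal illustration, not the authors' tool flow: it assumes weights quantized to signed 8-bit two's-complement values and forces a chosen bit position of every weight in a layer to 0 or 1, a simplification of injecting a permanent fault into a processing element's input register, then measures the accuracy drop on the target dataset. The helper names (quantize_int8, inject_stuck_at, sweep_layer) and the evaluate callback are hypothetical.

```python
import numpy as np

def quantize_int8(weights, scale):
    """Uniform symmetric quantization of float weights to signed 8-bit integers."""
    return np.clip(np.round(weights / scale), -128, 127).astype(np.int8)

def inject_stuck_at(q_weights, bit_pos, stuck_value):
    """Force bit `bit_pos` (0 = LSB, 7 = sign bit) of every 8-bit weight to
    `stuck_value` (0 or 1), emulating a permanent stuck-at fault on that bit line."""
    bits = q_weights.view(np.uint8)                    # reinterpret two's-complement bytes
    if stuck_value == 1:
        bits = bits | np.uint8(1 << bit_pos)           # stuck-at-1: set the bit
    else:
        bits = bits & np.uint8(0xFF ^ (1 << bit_pos))  # stuck-at-0: clear the bit
    return bits.view(np.int8)

def sweep_layer(float_weights, scale, evaluate):
    """Sweep all bit positions and both stuck-at polarities for one layer and
    report the accuracy loss relative to the fault-free quantized baseline.
    `evaluate` stands in for running MNIST/Fashion-MNIST inference."""
    baseline = evaluate(quantize_int8(float_weights, scale))
    loss = {}
    for bit in range(8):
        for stuck in (0, 1):
            faulty = inject_stuck_at(quantize_int8(float_weights, scale), bit, stuck)
            loss[(bit, stuck)] = baseline - evaluate(faulty)
    return loss
```

Sweeping bit positions from the LSB to the sign bit in this way reproduces, at a coarse level, the layer-wise and bit-wise sensitivity trends the abstract refers to; modeling the Evoapprox8b multipliers themselves would additionally require substituting their approximate products into the MAC operations.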
Related papers
- Special Session: Approximation and Fault Resiliency of DNN Accelerators [0.9126382223122612]
This paper explores the approximation and fault resiliency of Deep Neural Network accelerators.
We propose to use approximate (AxC) arithmetic circuits to emulate errors in hardware without performing fault injection on the DNN.
We also propose a fine-grain analysis of fault resiliency by examining fault propagation and masking in networks.
arXiv Detail & Related papers (2023-05-31T19:27:45Z)
- RescueSNN: Enabling Reliable Executions on Spiking Neural Network Accelerators under Permanent Faults [15.115813664357436]
RescueSNN is a novel methodology to mitigate permanent faults in the compute engine of SNN chips.
RescueSNN improves accuracy by up to 80% while keeping the throughput reduction below 25% under high fault rates.
arXiv Detail & Related papers (2023-04-08T15:24:57Z)
- Dynamics-Aware Loss for Learning with Label Noise [73.75129479936302]
Label noise poses a serious threat to deep neural networks (DNNs).
We propose a dynamics-aware loss (DAL) to solve this problem.
Both the detailed theoretical analyses and extensive experimental results demonstrate the superiority of our method.
arXiv Detail & Related papers (2023-03-21T03:05:21Z)
- Thales: Formulating and Estimating Architectural Vulnerability Factors for DNN Accelerators [6.8082132475259405]
This paper focuses on quantifying DNN accuracy given that a transient error has occurred, i.e., how well a network behaves once a transient error strikes.
We show that the existing Resiliency Accuracy (RA) formulation is fundamentally inaccurate because it incorrectly assumes that software variables have equal probability of being faulty under hardware transient faults.
We present an algorithm that captures the faulty probabilities of DNN variables under transient faults and, thus, provides correct RA estimations validated by hardware.
arXiv Detail & Related papers (2022-12-05T23:16:20Z)
- Fast Exploration of the Impact of Precision Reduction on Spiking Neural Networks [63.614519238823206]
Spiking Neural Networks (SNNs) are a practical choice when the target hardware reaches the edge of computing.
We employ an Interval Arithmetic (IA) model to develop an exploration methodology that leverages the model's ability to propagate the approximation error.
arXiv Detail & Related papers (2022-11-22T15:08:05Z)
- Fault-Aware Design and Training to Enhance DNNs Reliability with Zero-Overhead [67.87678914831477]
Deep Neural Networks (DNNs) enable a wide range of technological advancements.
Recent findings indicate that transient hardware faults may dramatically corrupt the model's predictions.
In this work, we propose to tackle the reliability issue both at training and model design time.
arXiv Detail & Related papers (2022-05-28T13:09:30Z)
- SoftSNN: Low-Cost Fault Tolerance for Spiking Neural Network Accelerators under Soft Errors [15.115813664357436]
SoftSNN is a novel methodology to mitigate soft errors in the weight registers (synapses) and neurons of SNN accelerators without re-execution.
For a 900-neuron network, even under a high fault rate, SoftSNN keeps the accuracy degradation below 3% while reducing latency and energy by up to 3x and 2.3x, respectively.
arXiv Detail & Related papers (2022-03-10T18:20:28Z)
- Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators [105.60654479548356]
We show that a combination of robust fixed-point quantization, weight clipping, as well as random bit error training (RandBET) improves robustness against random or adversarial bit errors in quantized DNN weights significantly.
This leads to high energy savings for low-voltage operation as well as low-precision quantization, but also improves security of DNN accelerators.
arXiv Detail & Related papers (2021-04-16T19:11:14Z)
- Bit Error Robustness for Energy-Efficient DNN Accelerators [93.58572811484022]
We show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) improves robustness against random bit errors.
This leads to high energy savings from both low-voltage operation and low-precision quantization.
arXiv Detail & Related papers (2020-06-24T18:23:10Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
- A Low-cost Fault Corrector for Deep Neural Networks through Range Restriction [1.8907108368038215]
Deep neural networks (DNNs) in safety-critical domains have engendered serious reliability concerns.
This work proposes Ranger, a low-cost fault corrector, which directly rectifies the faulty output due to transient faults without re-computation.
arXiv Detail & Related papers (2020-03-30T23:53:55Z)