HarDNN: Feature Map Vulnerability Evaluation in CNNs
- URL: http://arxiv.org/abs/2002.09786v2
- Date: Tue, 25 Feb 2020 11:07:36 GMT
- Title: HarDNN: Feature Map Vulnerability Evaluation in CNNs
- Authors: Abdulrahman Mahmoud, Siva Kumar Sastry Hari, Christopher W. Fletcher,
Sarita V. Adve, Charbel Sakr, Naresh Shanbhag, Pavlo Molchanov, Michael B.
Sullivan, Timothy Tsai, Stephen W. Keckler
- Abstract summary: This paper presents HarDNN, a software-directed approach to identify vulnerable computations during a CNN inference.
We show that HarDNN can accurately estimate relative vulnerability of a feature map (fmap) in CNNs using a statistical error injection campaign.
Results show that the improvement in resilience for the added computation is superlinear with HarDNN.
- Score: 23.24111155295923
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As Convolutional Neural Networks (CNNs) are increasingly being employed in
safety-critical applications, it is important that they behave reliably in the
face of hardware errors. Transient hardware errors may percolate undesirable
state during execution, resulting in software-manifested errors which can
adversely affect high-level decision making. This paper presents HarDNN, a
software-directed approach to identify vulnerable computations during a CNN
inference and selectively protect them based on their propensity towards
corrupting the inference output in the presence of a hardware error. We show
that HarDNN can accurately estimate relative vulnerability of a feature map
(fmap) in CNNs using a statistical error injection campaign, and explore
heuristics for fast vulnerability assessment. Based on these results, we
analyze the tradeoff between error coverage and computational overhead that the
system designers can use to employ selective protection. Results show that the
improvement in resilience for the added computation is superlinear with HarDNN.
For example, HarDNN improves SqueezeNet's resilience by 10x with just 30%
additional computations.
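The vulnerability estimation described above can be illustrated with a small error-injection loop: repeatedly perturb one element of a chosen feature map (fmap) during inference and count how often the top-1 prediction changes. The sketch below is a minimal, hypothetical illustration in PyTorch, not HarDNN's actual error model or implementation; the network (an untrained SqueezeNet), the target layer, the channel index, the perturbation, and the trial count are all illustrative assumptions.

```python
import torch
import torchvision.models as models

torch.manual_seed(0)
model = models.squeezenet1_1(weights=None).eval()  # stand-in network (untrained)
layer = model.features[3]                          # hypothetical target layer (a Fire module)
x = torch.randn(1, 3, 224, 224)                    # stand-in input image
baseline = model(x).argmax(dim=1)                  # fault-free top-1 prediction

target_channel = 7  # hypothetical fmap index whose vulnerability we estimate

def inject_once(module, inputs, output):
    """Corrupt one randomly chosen element of the target feature map.
    The perturbation is a crude stand-in for a transient hardware error."""
    out = output.clone()
    _, _, h, w = out.shape
    i = torch.randint(h, (1,)).item()
    j = torch.randint(w, (1,)).item()
    out[0, target_channel, i, j] *= -64.0
    return out  # returning a tensor from a forward hook replaces the layer output

mismatches, trials = 0, 200
for _ in range(trials):
    handle = layer.register_forward_hook(inject_once)
    corrupted = model(x).argmax(dim=1)
    handle.remove()
    mismatches += int((corrupted != baseline).item())

# Fraction of injections that flipped the prediction: a relative vulnerability
# estimate for this fmap. Ranking fmaps by such scores is what enables the
# selective-protection vs. overhead tradeoff discussed in the abstract.
print(f"estimated vulnerability of fmap {target_channel}: {mismatches / trials:.3f}")
```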
Related papers
- Special Session: Approximation and Fault Resiliency of DNN Accelerators [0.9126382223122612]
This paper explores the approximation and fault resiliency of Deep Neural Network accelerators.
We propose to use approximate (AxC) arithmetic circuits to emulate errors in hardware without performing fault injection on the DNN.
We also propose a fine-grain analysis of fault resiliency by examining fault propagation and masking in networks.
arXiv Detail & Related papers (2023-05-31T19:27:45Z)
- DeepVigor: Vulnerability Value Ranges and Factors for DNNs' Reliability Assessment [1.189955933770711]
Deep Neural Networks (DNNs) and their accelerators are being deployed more frequently in safety-critical applications.
We propose a novel accurate, fine-grain, metric-oriented, and accelerator-agnostic method called DeepVigor.
arXiv Detail & Related papers (2023-03-13T08:55:10Z)
- CRAFT: Criticality-Aware Fault-Tolerance Enhancement Techniques for Emerging Memories-Based Deep Neural Networks [7.566423455230909]
Deep Neural Networks (DNNs) have emerged as the most effective programming paradigm for computer vision and natural language processing applications.
This paper proposes CRAFT, i.e., Criticality-Aware Fault-Tolerance Enhancement Techniques to enhance the reliability of NVM-based DNNs.
arXiv Detail & Related papers (2023-02-08T03:39:11Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Thales: Formulating and Estimating Architectural Vulnerability Factors for DNN Accelerators [6.8082132475259405]
This paper focuses on quantifying a DNN's accuracy given that a transient error has occurred, i.e., how well the network behaves in the presence of such an error.
We show that the existing Resiliency Accuracy (RA) formulation is fundamentally inaccurate, because it incorrectly assumes that software variables have equal fault probabilities under hardware transient faults.
We present an algorithm that captures the fault probabilities of DNN variables under transient faults and thus provides correct RA estimations, validated by hardware.
arXiv Detail & Related papers (2022-12-05T23:16:20Z)
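As a toy illustration of why the equal-probability assumption matters, the snippet below (hypothetical values, not taken from the Thales paper) compares a naive RA estimate that averages per-variable accuracy-under-fault uniformly with one that weights each variable by its actual fault probability.

```python
import numpy as np

# Hypothetical per-variable data: network accuracy *given* a fault in that
# variable, and the relative probability that a transient fault lands there
# (e.g., proportional to how long the variable is resident in hardware).
acc_given_fault = np.array([0.90, 0.40, 0.99, 0.75])  # illustrative values
fault_prob      = np.array([0.10, 0.50, 0.05, 0.35])  # sums to 1

naive_ra    = acc_given_fault.mean()               # assumes equal fault probability
weighted_ra = float(fault_prob @ acc_given_fault)  # weights by actual fault probability

print(f"naive RA:    {naive_ra:.3f}")    # 0.760
print(f"weighted RA: {weighted_ra:.3f}") # 0.602
```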
- Attention-based Feature Compression for CNN Inference Offloading in Edge Computing [93.67044879636093]
This paper studies the computational offloading of CNN inference in device-edge co-inference systems.
We propose a novel autoencoder-based CNN architecture (AECNN) for effective feature extraction at end-device.
Experiments show that AECNN can compress the intermediate data by more than 256x with only about 4% accuracy loss.
arXiv Detail & Related papers (2022-11-24T18:10:01Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
- Bayesian x-vector: Bayesian Neural Network based x-vector System for Speaker Verification [71.45033077934723]
We incorporate Bayesian neural networks (BNNs) into the deep neural network (DNN) x-vector speaker verification system.
With the weight uncertainty modeling provided by BNNs, we expect the system could generalize better on the evaluation data.
Results show that the system could benefit from BNNs by a relative EER decrease of 2.66% and 2.32% respectively for short- and long-utterance in-domain evaluations.
arXiv Detail & Related papers (2020-04-08T14:35:12Z)
- FT-CNN: Algorithm-Based Fault Tolerance for Convolutional Neural Networks [13.100954947774163]
Convolutional neural networks (CNNs) are becoming more and more important for solving challenging and critical problems in many fields.
CNN inference applications have been deployed in safety-critical systems, which may suffer from soft errors caused by high-energy particles, high temperature, or abnormal voltage.
Traditional fault tolerance methods are not suitable for CNN inference because error-correcting code is unable to protect computational components.
arXiv Detail & Related papers (2020-03-27T02:01:54Z)
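FT-CNN builds on algorithm-based fault tolerance (ABFT), where checksums carried through the computation detect errors in the arithmetic itself, something ECC on storage cannot do. The sketch below shows only the classic checksum idea for a plain matrix multiply (Huang-Abraham style), with made-up sizes and a simulated single-element corruption; it is not FT-CNN's convolution-specific scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 32))
B = rng.standard_normal((32, 16))

# Append a column-checksum row to A before the multiply; the last row of the
# augmented product then equals the column sums of A @ B.
A_chk = np.vstack([A, A.sum(axis=0, keepdims=True)])
C_chk = A_chk @ B
C = C_chk[:-1].copy()            # the actual result
stored_checksum = C_chk[-1]      # checksum computed "inside" the multiply

# Simulate a transient error corrupting one output element, then detect it by
# comparing recomputed column sums against the stored checksum.
C[10, 3] += 5.0
detected = not np.allclose(C.sum(axis=0), stored_checksum, atol=1e-6)
print("fault detected:", detected)  # True
```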
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show that a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.