DeepVigor: Vulnerability Value Ranges and Factors for DNNs' Reliability Assessment
- URL: http://arxiv.org/abs/2303.06931v1
- Date: Mon, 13 Mar 2023 08:55:10 GMT
- Title: DeepVigor: Vulnerability Value Ranges and Factors for DNNs' Reliability Assessment
- Authors: Mohammad Hasan Ahmadilivani, Mahdi Taheri, Jaan Raik, Masoud Daneshtalab, Maksim Jenihhin
- Abstract summary: Deep Neural Networks (DNNs) and their accelerators are being deployed more frequently in safety-critical applications.
We propose a novel accurate, fine-grain, metric-oriented, and accelerator-agnostic method called DeepVigor.
- Score: 1.189955933770711
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep Neural Networks (DNNs) and their accelerators are being deployed ever
more frequently in safety-critical applications, leading to increasing
reliability concerns. Fault injection has traditionally been an accurate
method for assessing DNNs' reliability, but it suffers from prohibitive time
complexity. Analytical and hybrid fault-injection/analytical methods have been
proposed, but they are either inaccurate or specific to particular accelerator
architectures. In this work,
we propose a novel accurate, fine-grain, metric-oriented, and
accelerator-agnostic method called DeepVigor that provides vulnerability value
ranges for DNN neurons' outputs. An outcome of DeepVigor is an analytical model
representing vulnerable and non-vulnerable ranges for each neuron that can be
exploited to develop different techniques for improving DNNs' reliability.
Moreover, DeepVigor provides reliability assessment metrics based on
vulnerability factors for bits, neurons, and layers using the vulnerability
ranges. The proposed method is not only faster than fault injection but also
provides extensive and accurate information about the reliability of DNNs,
independent from the accelerator. The experimental evaluations in the paper
indicate that the proposed vulnerability ranges are 99.9% to 100% accurate even
when evaluated on previously unseen test data. Also, it is shown that the
obtained vulnerability factors represent the criticality of bits, neurons, and
layers proficiently. DeepVigor is implemented in the PyTorch framework and
validated on complex DNN benchmarks.
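To make the notion of per-neuron vulnerability value ranges concrete, below is a minimal, hypothetical Python sketch (plain Python rather than the paper's PyTorch code): it assumes a non-vulnerable output range [lo, hi] has already been derived for a neuron and computes a toy bit-level vulnerability factor by checking which single-bit flips of the neuron's 32-bit float output fall outside that range. The range bounds, the fault model, and the metric definition are illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative sketch only -- NOT the authors' implementation.
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of the IEEE-754 single-precision representation of value."""
    (raw,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))
    return flipped

def bit_vulnerability_factor(output: float, lo: float, hi: float) -> float:
    """Fraction of single-bit flips that push the neuron output outside
    its non-vulnerable range [lo, hi] (hypothetical metric definition)."""
    vulnerable = 0
    for bit in range(32):
        faulty = flip_bit(output, bit)
        if not (lo <= faulty <= hi):  # outside the safe range -> vulnerable
            vulnerable += 1
    return vulnerable / 32.0

# Example: a neuron with fault-free output 0.8 and an assumed
# non-vulnerable range [-2.0, 3.5] (both numbers are made up).
print(bit_vulnerability_factor(0.8, -2.0, 3.5))
```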
Related papers
- DeepVigor+: Scalable and Accurate Semi-Analytical Fault Resilience Analysis for Deep Neural Network [0.4999814847776098]
We introduce DeepVigor+, a scalable, fast and accurate semi-analytical method as an efficient alternative for reliability measurement in DNNs.
The results indicate that DeepVigor+ obtains Vulnerability Factors (VFs) for DNN models with less than 1% error while requiring 14.9 to 26.9 times fewer simulations than the best-known state-of-the-art statistical FI.
arXiv Detail & Related papers (2024-10-21T08:01:08Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all the regions of the property input domain which are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- Enhancing Fault Resilience of QNNs by Selective Neuron Splitting [1.1091582432763736]
Quantized Neural Networks (QNNs) have emerged to tackle the complexity of Deep Neural Networks (DNNs).
In this paper, a recent analytical resilience assessment method is adapted for QNNs to identify critical neurons based on a Neuron Vulnerability Factor (NVF).
A novel method for splitting the critical neurons is proposed that enables the design of a Lightweight Correction Unit (LCU) in the accelerator without redesigning its computational part.
arXiv Detail & Related papers (2023-06-16T17:11:55Z)
- APPRAISER: DNN Fault Resilience Analysis Employing Approximation Errors [1.1091582432763736]
Deep Neural Networks (DNNs) in safety-critical applications raise new reliability concerns.
State-of-the-art methods for fault injection by emulation incur a spectrum of time-, design- and control-complexity problems.
APPRAISER is proposed, which applies functional approximation for a non-conventional purpose and employs approximate computing errors.
arXiv Detail & Related papers (2023-05-31T10:53:46Z)
- A Systematic Literature Review on Hardware Reliability Assessment Methods for Deep Neural Networks [1.189955933770711]
The reliability of Deep Neural Networks (DNNs) is an essential subject of research.
In recent years, several studies have been published accordingly to assess the reliability of DNNs.
In this work, we conduct a Systematic Literature Review (SLR) on the reliability assessment methods of DNNs.
arXiv Detail & Related papers (2023-05-09T20:08:30Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Thales: Formulating and Estimating Architectural Vulnerability Factors for DNN Accelerators [6.8082132475259405]
This paper focuses on quantifying a DNN's accuracy given that a transient error has occurred, i.e., how well the network behaves under transient errors.
We show that the existing Resiliency Accuracy (RA) formulation is fundamentally inaccurate, because it incorrectly assumes that software variables have equal faulty probability under hardware transient faults.
We present an algorithm that captures the faulty probabilities of DNN variables under transient faults and, thus, provides correct RA estimations validated by hardware.
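As a conceptual illustration of the distinction this entry draws (assumed notation, not the Thales algorithm), the sketch below contrasts an RA estimate that wrongly assumes equal fault probabilities for all variables with one that weights each variable's accuracy-under-fault by its hardware-derived fault probability.

```python
# Conceptual sketch only -- assumed notation, not the Thales algorithm.

def ra_equal_probability(acc_under_fault):
    # Incorrect assumption: every variable is equally likely to be corrupted.
    return sum(acc_under_fault) / len(acc_under_fault)

def ra_weighted(acc_under_fault, fault_prob):
    # Weight each variable by its (normalized) hardware fault probability.
    total = sum(fault_prob)
    return sum(a * p for a, p in zip(acc_under_fault, fault_prob)) / total

# Hypothetical numbers: three variables with different fault likelihoods.
acc = [0.90, 0.40, 0.75]
prob = [0.70, 0.05, 0.25]
print(ra_equal_probability(acc), ra_weighted(acc, prob))
```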
arXiv Detail & Related papers (2022-12-05T23:16:20Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- Uncertainty-Aware Deep Calibrated Salient Object Detection [74.58153220370527]
Existing deep neural network based salient object detection (SOD) methods mainly focus on pursuing high network accuracy.
These methods overlook the gap between network accuracy and prediction confidence, known as the confidence uncalibration problem.
We introduce an uncertainty-aware deep SOD network, and propose two strategies to prevent deep SOD networks from being overconfident.
arXiv Detail & Related papers (2020-12-10T23:28:36Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed utilizing Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy rate, precision, low false-alarm rate, and recall.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.