DeepVigor+: Scalable and Accurate Semi-Analytical Fault Resilience Analysis for Deep Neural Networks
- URL: http://arxiv.org/abs/2410.15742v1
- Date: Mon, 21 Oct 2024 08:01:08 GMT
- Title: DeepVigor+: Scalable and Accurate Semi-Analytical Fault Resilience Analysis for Deep Neural Networks
- Authors: Mohammad Hasan Ahmadilivani, Jaan Raik, Masoud Daneshtalab, Maksim Jenihhin
- Abstract summary: We introduce DeepVigor+, a scalable, fast and accurate semi-analytical method as an efficient alternative for reliability measurement in DNNs.
The results indicate that DeepVigor+ obtains Vulnerability Factors (VFs) for DNN models with an error of less than 1% while requiring 14.9 to 26.9 times fewer simulations than the best-known state-of-the-art statistical FI.
- Abstract: Growing exploitation of Machine Learning (ML) in safety-critical applications necessitates rigorous safety analysis. Hardware reliability assessment is a major concern with respect to measuring the level of safety. Quantifying the reliability of emerging ML models, including Deep Neural Networks (DNNs), is highly complex due to their enormous size in terms of the number of parameters and computations. Conventionally, Fault Injection (FI) is applied to perform a reliability measurement. However, performing FI on modern-day DNNs is prohibitively time-consuming if an acceptable confidence level is to be achieved. In order to speed up FI for large DNNs, statistical FI has been proposed. However, the run-time for the large DNN models is still considerably long. In this work, we introduce DeepVigor+, a scalable, fast and accurate semi-analytical method as an efficient alternative for reliability measurement in DNNs. DeepVigor+ implements a fault propagation analysis model and attempts to acquire Vulnerability Factors (VFs) as reliability metrics in an optimal way. The results indicate that DeepVigor+ obtains VFs for DNN models with an error of less than 1% while requiring 14.9 to 26.9 times fewer simulations than the best-known state-of-the-art statistical FI, enabling an accurate reliability analysis for emerging DNNs within a few minutes.
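To make the baseline concrete: statistical FI estimates a Vulnerability Factor by injecting random bit flips into model parameters and counting how often the output deviates from the fault-free (golden) result. The sketch below is a minimal illustration of that baseline, not DeepVigor+'s analysis; PyTorch, the toy model, and the helper names (`flip_bit`, `statistical_fi_vf`) are assumptions made for illustration.

```python
# Illustrative sketch of statistical fault injection (FI) for estimating a
# Vulnerability Factor (VF) -- the kind of baseline DeepVigor+ is compared
# against. All names and the toy model are assumptions, not the paper's code.
import random
import struct

import torch
import torch.nn as nn


def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of an IEEE-754 float32 value."""
    (as_int,) = struct.unpack("I", struct.pack("f", value))
    (flipped,) = struct.unpack("f", struct.pack("I", as_int ^ (1 << bit)))
    return flipped


def statistical_fi_vf(model: nn.Module, x: torch.Tensor, n_samples: int) -> float:
    """Estimate VF = P(output corrupted | one random bit flip in a parameter)."""
    with torch.no_grad():
        golden = model(x).argmax(dim=1)  # fault-free "golden" prediction
        params = list(model.parameters())
        corrupted = 0
        for _ in range(n_samples):
            p = random.choice(params)                          # random tensor,
            idx = tuple(random.randrange(s) for s in p.shape)  # element, bit
            bit = random.randrange(32)
            original = p[idx].item()
            p[idx] = flip_bit(original, bit)   # inject the fault
            faulty = model(x).argmax(dim=1)
            p[idx] = original                  # restore the parameter
            if not torch.equal(faulty, golden):
                corrupted += 1
    return corrupted / n_samples


# Toy usage: estimate the VF of a small MLP over one input batch.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
vf = statistical_fi_vf(model, torch.randn(8, 16), n_samples=1000)
print(f"Estimated Vulnerability Factor: {vf:.3f}")
```

Each injected fault costs a full forward pass, so even statistically sized campaigns scale poorly with model size; per the abstract, DeepVigor+'s fault-propagation analysis reaches comparable VFs with 14.9 to 26.9 times fewer simulations.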
Related papers
- Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness [47.9744734181236]
We explore the concept of Lipschitz continuity to certify the robustness of deep neural networks (DNNs) against adversarial attacks.
We propose a novel algorithm that remaps the input domain into a constrained range, reducing the Lipschitz constant and potentially enhancing robustness.
Our method achieves the best robust accuracy for CIFAR10, CIFAR100, and ImageNet datasets on the RobustBench leaderboard.
arXiv Detail & Related papers (2024-06-28T03:10:36Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- APPRAISER: DNN Fault Resilience Analysis Employing Approximation Errors [1.1091582432763736]
Deep Neural Networks (DNNs) in safety-critical applications raise new reliability concerns.
State-of-the-art methods for fault injection by emulation incur a spectrum of time-, design- and control-complexity problems.
APPRAISER is proposed that applies functional approximation for a non-conventional purpose and employs approximate computing errors.
arXiv Detail & Related papers (2023-05-31T10:53:46Z)
- DeepVigor: Vulnerability Value Ranges and Factors for DNNs' Reliability Assessment [1.189955933770711]
Deep Neural Networks (DNNs) and their accelerators are being deployed more frequently in safety-critical applications.
We propose a novel accurate, fine-grain, metric-oriented, and accelerator-agnostic method called DeepVigor.
arXiv Detail & Related papers (2023-03-13T08:55:10Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Thales: Formulating and Estimating Architectural Vulnerability Factors for DNN Accelerators [6.8082132475259405]
This paper focuses on quantifying a network's accuracy given that a transient error has occurred, i.e., how well the network behaves when a transient error strikes.
We show that the existing Resiliency Accuracy (RA) formulation is fundamentally inaccurate because it incorrectly assumes that software variables have equal faulty probability under hardware transient faults.
We present an algorithm that captures the faulty probabilities of DNN variables under transient faults and, thus, provides correct RA estimations validated by hardware.
arXiv Detail & Related papers (2022-12-05T23:16:20Z)
- Fault-Aware Design and Training to Enhance DNNs Reliability with Zero-Overhead [67.87678914831477]
Deep Neural Networks (DNNs) enable a wide range of technological advancements.
Recent findings indicate that transient hardware faults may dramatically corrupt a model's predictions.
In this work, we propose to tackle the reliability issue both at training and model design time.
arXiv Detail & Related papers (2022-05-28T13:09:30Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach greatly reduces the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z)
- Interval Deep Learning for Uncertainty Quantification in Safety Applications [0.0]
Current deep neural networks (DNNs) do not have an implicit mechanism to quantify and propagate significant input data uncertainty.
We present a DNN optimized with gradient-based methods that is capable of quantifying input and parameter uncertainty by means of interval analysis.
We show that the Deep Interval Neural Network (DINN) can produce accurate bounded estimates from uncertain input data.
arXiv Detail & Related papers (2021-05-13T17:21:33Z)