APPRAISER: DNN Fault Resilience Analysis Employing Approximation Errors
- URL: http://arxiv.org/abs/2305.19733v1
- Date: Wed, 31 May 2023 10:53:46 GMT
- Title: APPRAISER: DNN Fault Resilience Analysis Employing Approximation Errors
- Authors: Mahdi Taheri, Mohammad Hasan Ahmadilivani, Maksim Jenihhin, Masoud
Daneshtalab, and Jaan Raik
- Abstract summary: Deep Neural Networks (DNNs) in safety-critical applications raise new reliability concerns.
State-of-the-art methods for fault injection by emulation incur a spectrum of time-, design- and control-complexity problems.
APPRAISER is proposed, applying functional approximation for a non-conventional purpose and exploiting approximate computing errors to assess fault resilience.
- Score: 1.1091582432763736
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Nowadays, the extensive exploitation of Deep Neural Networks (DNNs) in
safety-critical applications raises new reliability concerns. In practice,
methods for fault injection by emulation in hardware are efficient and widely
used to study the resilience of DNN architectures for mitigating reliability
issues already at the early design stages. However, the state-of-the-art
methods for fault injection by emulation incur a spectrum of time-, design- and
control-complexity problems. To overcome these issues, a novel resiliency
assessment method called APPRAISER is proposed, which applies functional
approximation for a non-conventional purpose and exploits approximate
computing errors for resilience analysis. By adopting this concept in the
resiliency assessment domain, APPRAISER provides a thousands-of-times speed-up
in the assessment process while maintaining high analysis accuracy. In this
paper, APPRAISER is validated against state-of-the-art approaches for fault
injection by emulation in FPGA, demonstrating the feasibility of the idea and
opening a new perspective on resiliency evaluation for DNNs.
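The core idea above — using approximate computing errors as a fast stand-in for hardware fault injection — can be illustrated with a minimal sketch. Everything here is hypothetical and not from the paper: a toy fixed-point multiply-accumulate (MAC) is computed exactly, with an approximate multiplier (low-order bits truncated), and with a classic single-bit-flip fault, so the output deviations of the two error sources can be compared.

```python
import random

def exact_mac(weights, inputs):
    # Exact multiply-accumulate, as in a fault-free accelerator.
    return sum(w * x for w, x in zip(weights, inputs))

def approx_mac(weights, inputs, drop_bits=8):
    # Hypothetical approximate MAC: truncate low-order bits of each
    # Q16 fixed-point product, mimicking an approximate multiplier
    # whose error profile stands in for hardware fault effects.
    scale = 1 << 16
    acc = 0
    for w, x in zip(weights, inputs):
        prod = int(w * scale) * int(x * scale)
        prod = (prod >> drop_bits) << drop_bits  # drop low bits
        acc += prod
    return acc / (scale * scale)

def bit_flip_mac(weights, inputs, bit=12, pos=0):
    # Classic fault injection: flip one bit of one product term.
    scale = 1 << 16
    acc = 0
    for i, (w, x) in enumerate(zip(weights, inputs)):
        prod = int(w * scale) * int(x * scale)
        if i == pos:
            prod ^= 1 << bit
        acc += prod
    return acc / (scale * scale)

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(16)]
x = [random.uniform(-1, 1) for _ in range(16)]

ref = exact_mac(w, x)
print(f"approx deviation:   {abs(approx_mac(w, x) - ref):.2e}")
print(f"bit-flip deviation: {abs(bit_flip_mac(w, x) - ref):.2e}")
```

In this toy setting, the approximate circuit perturbs every product slightly while the fault flips one bit of one product; a resilience analysis in the spirit of APPRAISER would study how such deviations propagate to the DNN output, without per-fault emulation runs.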
Related papers
- Bridging Internal Probability and Self-Consistency for Effective and Efficient LLM Reasoning [53.25336975467293]
We present the first theoretical error decomposition analysis of methods such as perplexity and self-consistency.
Our analysis reveals a fundamental trade-off: perplexity methods suffer from substantial model error due to the absence of a proper consistency function.
We propose Reasoning-Pruning Perplexity Consistency (RPC), which integrates perplexity with self-consistency, and Reasoning Pruning, which eliminates low-probability reasoning paths.
arXiv Detail & Related papers (2025-02-01T18:09:49Z)
- Evaluating Single Event Upsets in Deep Neural Networks for Semantic Segmentation: an embedded system perspective [1.474723404975345]
This paper delves into the robustness assessment of embedded Deep Neural Networks (DNNs).
By scrutinizing the layer-by-layer and bit-by-bit sensitivity of various encoder-decoder models to soft errors, this study thoroughly investigates the vulnerability of segmentation DNNs to SEUs.
We propose a set of practical lightweight error mitigation techniques with no memory or computational cost suitable for resource-constrained deployments.
arXiv Detail & Related papers (2024-12-04T18:28:38Z)
- Uncertainty Quantification for Forward and Inverse Problems of PDEs via Latent Global Evolution [110.99891169486366]
We propose a method that integrates efficient and precise uncertainty quantification into a deep learning-based surrogate model.
Our method endows deep learning-based surrogate models with robust and efficient uncertainty quantification capabilities for both forward and inverse problems.
Our method excels at propagating uncertainty over extended auto-regressive rollouts, making it suitable for scenarios involving long-term predictions.
arXiv Detail & Related papers (2024-02-13T11:22:59Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- Scalable and Efficient Methods for Uncertainty Estimation and Reduction in Deep Learning [0.0]
This paper explores scalable and efficient methods for uncertainty estimation and reduction in deep learning.
We tackle the inherent uncertainties arising from out-of-distribution inputs and hardware non-idealities.
Our approach encompasses problem-aware training algorithms, novel NN topologies, and hardware co-design solutions.
arXiv Detail & Related papers (2024-01-13T19:30:34Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- Special Session: Approximation and Fault Resiliency of DNN Accelerators [0.9126382223122612]
This paper explores the approximation and fault resiliency of Deep Neural Network accelerators.
We propose to use approximate (AxC) arithmetic circuits to emulate errors in hardware without performing fault injection on the DNN.
We also propose a fine-grain analysis of fault resiliency by examining fault propagation and masking in networks.
arXiv Detail & Related papers (2023-05-31T19:27:45Z)
- DeepVigor: Vulnerability Value Ranges and Factors for DNNs' Reliability Assessment [1.189955933770711]
Deep Neural Networks (DNNs) and their accelerators are being deployed more frequently in safety-critical applications.
We propose a novel accurate, fine-grain, metric-oriented, and accelerator-agnostic method called DeepVigor.
arXiv Detail & Related papers (2023-03-13T08:55:10Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach allows efficient evaluation of safety properties for decision-making models in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
- Multi-Loss Sub-Ensembles for Accurate Classification with Uncertainty Estimation [1.2891210250935146]
We propose an efficient method for uncertainty estimation in deep neural networks (DNNs) achieving high accuracy.
We keep inference time relatively low by leveraging the advantages of the Deep-Sub-Ensembles method.
Our results show improved accuracy on the classification task and competitive results on several uncertainty measures.
arXiv Detail & Related papers (2020-10-05T10:59:11Z)
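The sub-ensemble idea summarized above can be sketched minimally; all names and sizes here are illustrative, not from the paper. A shared trunk feeds several lightweight heads, and the disagreement among head outputs serves as an uncertainty proxy while keeping inference cost close to a single model.

```python
import random
import statistics

random.seed(1)

def make_head():
    # Hypothetical lightweight head: a random linear map over 4 features.
    w = [random.gauss(0, 0.5) for _ in range(4)]
    return lambda feats: sum(wi * f for wi, f in zip(w, feats))

# Shared feature extractor (the expensive trunk, run once per input).
trunk = lambda x: [x, x * x, 1.0, abs(x)]
# Sub-ensemble of 5 cheap heads sharing the trunk.
heads = [make_head() for _ in range(5)]

def predict(x):
    feats = trunk(x)                 # trunk evaluated once
    outs = [h(feats) for h in heads] # heads evaluated per ensemble member
    mean = statistics.mean(outs)
    std = statistics.stdev(outs)     # head disagreement = uncertainty proxy
    return mean, std

mean, std = predict(0.3)
print(f"prediction={mean:.3f} uncertainty={std:.3f}")
```

Because only the small heads are replicated, the extra inference cost over a single network is modest, which is the trade-off the sub-ensemble approach exploits.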
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.