Entropy-Based Modeling for Estimating Soft Errors Impact on Binarized
Neural Network Inference
- URL: http://arxiv.org/abs/2004.05089v2
- Date: Tue, 21 Apr 2020 14:01:53 GMT
- Title: Entropy-Based Modeling for Estimating Soft Errors Impact on Binarized
Neural Network Inference
- Authors: Navid Khoshavi, Saman Sargolzaei, Arman Roohi, Connor Broyles, Yu Bi
- Abstract summary: We present relatively accurate statistical models that delineate the impact of both single-event upsets (SEUs) and multi-bit upsets (MBUs) across layers and per layer of the selected convolutional neural network.
These models can be used to evaluate the error resiliency of NN topologies before adopting them in safety-critical applications.
- Score: 2.249916681499244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the past years, easy access to large-scale datasets has
significantly shifted the paradigm for developing highly accurate prediction
models driven by neural networks (NNs). These models can be affected by
radiation-induced transient faults that may lead to the gradual degradation of
a long-running NN inference accelerator. The crucial observation from our
rigorous vulnerability assessment of the NN inference accelerator is that the
weights and activation functions are unevenly susceptible to both single-event
upsets (SEUs) and multi-bit upsets (MBUs), especially in the first five layers
of our selected convolutional neural network. In this paper, we present
relatively accurate statistical models that delineate the impact of both SEUs
and MBUs across layers and per layer of the selected NN. These models can be
used to evaluate the error resiliency of NN topologies before adopting them in
safety-critical applications.
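The vulnerability assessment described above lends itself to a simple fault-injection experiment. Below is a minimal, hypothetical sketch (not the authors' actual framework) that flips stored weight bits of binarized fully connected layers, with a single flip standing in for an SEU and a contiguous 4-bit burst for an MBU, and then measures how often the layer's sign activations change. The layer shapes, burst length, trial count, and disagreement metric are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(w):
    # Map real-valued weights to {-1, +1}, as in a binarized neural network.
    return np.where(w >= 0, 1, -1).astype(np.int8)

def inject_upset(w_bin, n_bits):
    # Flip n_bits stored weight bits: n_bits=1 models an SEU, n_bits>1 an MBU.
    # For binarized weights a bit flip is a sign flip; the MBU is modeled as a
    # contiguous burst in flattened storage order (an assumption).
    faulty = w_bin.ravel().copy()
    start = rng.integers(0, faulty.size - n_bits + 1)
    faulty[start:start + n_bits] *= -1
    return faulty.reshape(w_bin.shape)

def disagreement(x, w_clean, w_faulty):
    # Fraction of sign activations that change because of the injected upset.
    return np.mean(np.sign(x @ w_clean) != np.sign(x @ w_faulty))

# Toy per-layer study over three binarized fully connected layers.
layers = {"layer1": (256, 128), "layer2": (128, 64), "layer3": (64, 10)}
for name, (fan_in, fan_out) in layers.items():
    w = binarize(rng.standard_normal((fan_in, fan_out)))
    x = rng.standard_normal((1000, fan_in))
    seu = np.mean([disagreement(x, w, inject_upset(w, 1)) for _ in range(100)])
    mbu = np.mean([disagreement(x, w, inject_upset(w, 4)) for _ in range(100)])
    print(f"{name}: SEU disagreement={seu:.4f}  4-bit MBU disagreement={mbu:.4f}")
```

Per-layer trials of this kind reproduce, in miniature, the observation that susceptibility is uneven across layers; a full study would inject faults into an actual trained BNN and track end-to-end classification accuracy rather than single-layer disagreement.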
Related papers
- Hybridization of Persistent Homology with Neural Networks for Time-Series Prediction: A Case Study in Wave Height [0.0]
We introduce a feature engineering method that enhances the predictive performance of neural network models.
Specifically, we leverage computational topology techniques to derive valuable topological features from input data.
For time-ahead predictions, the enhancements in $R^2$ score were significant for FNN, RNN, LSTM, and GRU models.
arXiv Detail & Related papers (2024-09-03T01:26:21Z)
- Bayesian Entropy Neural Networks for Physics-Aware Prediction [14.705526856205454]
We introduce BENN, a framework designed to impose constraints on Bayesian Neural Network (BNN) predictions.
BENN is capable of constraining not only the predicted values but also their derivatives and variances, ensuring a more robust and reliable model output.
Results highlight significant improvements over traditional BNNs and showcase competitive performance relative to contemporary constrained deep learning methods.
arXiv Detail & Related papers (2024-07-01T07:00:44Z)
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Amortised Inference in Bayesian Neural Networks [0.0]
We introduce the Amortised Pseudo-Observation Variational Inference Bayesian Neural Network (APOVI-BNN).
We show that the amortised inference is of similar or better quality than that obtained through traditional variational inference.
We then discuss how the APOVI-BNN may be viewed as a new member of the neural process family.
arXiv Detail & Related papers (2023-09-06T14:02:33Z)
- Data efficiency and extrapolation trends in neural network interatomic potentials [0.0]
We show how architectural and optimization choices influence the generalization of neural network interatomic potentials (NNIPs).
We show that test errors in NNIPs follow a scaling relation and can be robust to noise, but cannot predict molecular dynamics (MD) stability in the high-accuracy regime.
Our work provides a deep learning justification for the extrapolation performance of many common NNIPs.
arXiv Detail & Related papers (2023-02-12T00:34:05Z)
- Statistical Modeling of Soft Error Influence on Neural Networks [12.298356981085316]
We develop a series of statistical models to analyze the behavior of NN models under soft errors in general.
The statistical models reveal not only the correlation between soft errors and NN model accuracy, but also how NN parameters such as quantization and architecture affect the reliability of NNs.
arXiv Detail & Related papers (2022-10-12T02:28:21Z)
- Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach can greatly reduce the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z)
- On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z)
- Neural Networks with Recurrent Generative Feedback [61.90658210112138]
We instantiate this design on convolutional neural networks (CNNs).
In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
arXiv Detail & Related papers (2020-07-17T19:32:48Z)
- Phase Detection with Neural Networks: Interpreting the Black Box [58.720142291102135]
Neural networks (NNs) usually offer little insight into the reasoning behind their predictions.
We demonstrate how influence functions can unravel the black box of NNs when trained to predict the phases of the one-dimensional extended spinless Fermi-Hubbard model at half-filling.
arXiv Detail & Related papers (2020-04-09T17:45:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.