Testing Spintronics Implemented Monte Carlo Dropout-Based Bayesian
Neural Networks
- URL: http://arxiv.org/abs/2401.04744v1
- Date: Tue, 9 Jan 2024 09:42:27 GMT
- Title: Testing Spintronics Implemented Monte Carlo Dropout-Based Bayesian
Neural Networks
- Authors: Soyed Tuhin Ahmed, Michael Hefenbrock, Guillaume Prenat, Lorena
Anghel, Mehdi B. Tahoori
- Abstract summary: Bayesian Neural Networks (BayNNs) can inherently estimate predictive uncertainty, facilitating informed decision-making.
Dropout-based BayNNs are increasingly implemented in spintronics-based computation-in-memory architectures for resource-constrained yet high-performance safety-critical applications.
We present for the first time the model of the non-idealities of the spintronics-based Dropout module and analyze their impact on uncertainty estimates and accuracy.
- Score: 0.7537220883022466
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Bayesian Neural Networks (BayNNs) can inherently estimate predictive
uncertainty, facilitating informed decision-making. Dropout-based BayNNs are
increasingly implemented in spintronics-based computation-in-memory
architectures for resource-constrained yet high-performance safety-critical
applications. Although uncertainty estimation is important, the reliability of
Dropout generation and BayNN computation is equally important for target
applications but is overlooked in existing works. However, testing BayNNs is
significantly more challenging compared to conventional NNs, due to their
stochastic nature. In this paper, we present for the first time the model of
the non-idealities of the spintronics-based Dropout module and analyze their
impact on uncertainty estimates and accuracy. Furthermore, we propose a testing
framework based on repeatability ranking for Dropout-based BayNN with up to
$100\%$ fault coverage while using only $0.2\%$ of training data as test
vectors.
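The mechanism behind Dropout-based BayNNs is simple enough to sketch: keep Dropout active at inference and aggregate several stochastic forward passes into a predictive mean and variance. Below is a minimal, hedged sketch in PyTorch; the MLP, layer sizes, and sample count are illustrative assumptions, not the paper's spintronics computation-in-memory implementation or its repeatability-ranking test framework.

```python
# Minimal sketch of Monte Carlo Dropout uncertainty estimation (PyTorch).
# The network shape and number of forward passes are illustrative assumptions.
import torch
import torch.nn as nn

class DropoutMLP(nn.Module):
    def __init__(self, in_dim=784, hidden=256, classes=10, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Dropout(p),                      # kept stochastic at inference
            nn.Linear(hidden, classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=30):
    model.train()  # train mode keeps Dropout active during inference
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )
    mean = probs.mean(dim=0)   # predictive distribution
    var = probs.var(dim=0)     # per-class uncertainty estimate
    return mean, var

model = DropoutMLP()
x = torch.randn(4, 784)
mean, var = mc_dropout_predict(model, x)
print(mean.argmax(-1), var.max(-1).values)
```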
Related papers
- Scale-Dropout: Estimating Uncertainty in Deep Neural Networks Using
Stochastic Scale [0.7025445595542577]
Uncertainty estimation in Neural Networks (NNs) is vital in improving reliability and confidence in predictions, particularly in safety-critical applications.
BayNNs with Dropout as an approximation offer a systematic approach to quantifying uncertainty, but they inherently suffer from high hardware overhead in terms of power, memory, and computation.
We introduce a novel Spintronic memory-based CIM architecture for the proposed BayNN that achieves more than $100\times$ energy savings compared to the state-of-the-art.
arXiv Detail & Related papers (2023-11-27T13:41:20Z)
- Uncertainty Quantification in Multivariable Regression for Material Property Prediction with Bayesian Neural Networks [37.69303106863453]
We introduce an approach for uncertainty quantification (UQ) within physics-informed BNNs.
We present case studies for predicting the creep rupture life of steel alloys.
The most promising framework for creep life prediction is BNNs based on Markov Chain Monte Carlo approximation of the posterior distribution of network parameters.
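For orientation, a minimal sketch of the MCMC idea: draw weight samples from the posterior and average their predictions. A random-walk Metropolis sampler on a toy 1-D regression task stands in here; the cited paper's sampler, priors, and data are not specified in the summary, so everything below is an illustrative assumption.

```python
# Hedged sketch: MCMC approximation of a tiny BNN posterior on toy data.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-2, 2, 40)[:, None]
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(40)

H = 8                      # hidden units; weights flattened into one vector
n_params = H + H + H + 1   # w1, b1, w2, b2

def unpack(theta):
    w1 = theta[:H].reshape(1, H); b1 = theta[H:2*H]
    w2 = theta[2*H:3*H].reshape(H, 1); b2 = theta[3*H]
    return w1, b1, w2, b2

def predict(theta, X):
    w1, b1, w2, b2 = unpack(theta)
    return (np.tanh(X @ w1 + b1) @ w2)[:, 0] + b2

def log_post(theta):
    resid = y - predict(theta, X)
    # Gaussian likelihood (sigma = 0.1) plus a standard normal prior
    return -0.5 * np.sum(resid**2) / 0.1**2 - 0.5 * np.sum(theta**2)

theta = rng.standard_normal(n_params) * 0.1
lp, samples = log_post(theta), []
for step in range(20000):                      # random-walk Metropolis
    prop = theta + 0.02 * rng.standard_normal(n_params)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if step > 10000 and step % 20 == 0:        # thinned post-burn-in samples
        samples.append(theta.copy())

preds = np.stack([predict(s, X) for s in samples])
print(preds.mean(0)[:3], preds.std(0)[:3])     # predictive mean and uncertainty
```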
arXiv Detail & Related papers (2023-11-04T19:40:16Z)
- Random-Set Neural Networks (RS-NN) [4.549947259731147]
We propose a novel Random-Set Neural Network (RS-NN) for classification.
RS-NN predicts belief functions rather than probability vectors over a set of classes.
It encodes the 'epistemic' uncertainty induced in machine learning by limited training sets.
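A hedged sketch of the belief-function output format: masses are predicted over sets of classes, and the gap between belief and plausibility for each class exposes epistemic uncertainty. The head, class count, and use of the full power set are illustrative assumptions; RS-NN itself works with a budgeted collection of focal sets.

```python
# Hedged sketch of belief functions over class sets (Dempster-Shafer style).
import itertools
import torch

classes = [0, 1, 2]
focal_sets = [frozenset(s) for r in range(1, 4)
              for s in itertools.combinations(classes, r)]  # 7 non-empty subsets

logits = torch.randn(len(focal_sets))   # stand-in for a network head
mass = torch.softmax(logits, dim=0)     # masses over sets sum to 1

def belief(A):
    A = frozenset(A)
    return sum(m for m, F in zip(mass.tolist(), focal_sets) if F <= A)

def plausibility(A):
    A = frozenset(A)
    return sum(m for m, F in zip(mass.tolist(), focal_sets) if F & A)

for c in classes:
    # Bel <= Pl; the gap reflects epistemic uncertainty about class c
    print(c, belief({c}), plausibility({c}))
```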
arXiv Detail & Related papers (2023-07-11T20:00:35Z)
- Spatial-SpinDrop: Spatial Dropout-based Binary Bayesian Neural Network with Spintronics Implementation [1.3603499630771996]
We introduce MC-SpatialDropout, a spatial dropout-based approximate BayNN with emerging spintronic devices.
The number of dropout modules per network layer is reduced by a factor of $9\times$ and energy consumption by a factor of $94.11\times$, while still achieving comparable predictive performance and uncertainty estimates.
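A minimal sketch of the spatial-dropout intuition: channel-wise dropout (e.g., Dropout2d) draws one Bernoulli variable per feature map, which is why far fewer dropout modules are needed per layer. The conv stack below is an invented stand-in, not the paper's binary spintronics design.

```python
# Hedged sketch: spatial (channel-wise) dropout driving MC sampling.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.3),            # one Bernoulli draw per feature map
    nn.Conv2d(16, 10, 3, padding=1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

x = torch.randn(2, 1, 28, 28)
net.train()                          # keep spatial dropout active at inference
probs = torch.stack([torch.softmax(net(x), -1) for _ in range(20)])
print(probs.mean(0), probs.var(0))   # predictive mean and uncertainty
```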
arXiv Detail & Related papers (2023-06-16T21:38:13Z)
- UPNet: Uncertainty-based Picking Deep Learning Network for Robust First Break Picking [6.380128763476294]
First break (FB) picking is a crucial aspect in the determination of subsurface velocity models.
Deep neural networks (DNNs) have been proposed to accelerate this processing.
We introduce uncertainty quantification into the FB picking task and propose a novel uncertainty-based deep learning network called UPNet.
arXiv Detail & Related papers (2023-05-23T08:13:09Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
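For context, interval bound propagation (the state-of-the-art baseline mentioned above) can be sketched in a few lines: propagate a [lower, upper] box through affine layers and ReLUs. The weights and input box below are invented for illustration; the paper's reachability analysis for INNs is more involved.

```python
# Hedged sketch of interval bound propagation through a small feedforward net.
import numpy as np

def affine_bounds(lo, hi, W, b):
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius          # worst case over the input box
    return c - r, c + r

def relu_bounds(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((2, 8)), rng.standard_normal(2)

x = rng.standard_normal(4)
lo, hi = x - 0.1, x + 0.1           # epsilon-ball around the input
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)
print(lo, hi)                        # guaranteed output range
```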
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) to the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills knowledge from real-valued networks into binary networks on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
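A hedged sketch of distillation on the final prediction distribution: a real-valued teacher's softened outputs supervise the student through a KL loss. The models, temperature, and optimizer below are assumptions, and a full-precision student stands in for an actual 1-bit network.

```python
# Hedged sketch: distillation on the final prediction distribution via KL.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 4.0  # temperature softens the target distribution

x = torch.randn(64, 32)
with torch.no_grad():
    target = F.softmax(teacher(x) / T, dim=-1)   # teacher's soft predictions

log_q = F.log_softmax(student(x) / T, dim=-1)
loss = F.kl_div(log_q, target, reduction="batchmean") * T * T
loss.backward()
opt.step()
print(float(loss))
```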
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
- Neural Networks and Value at Risk [59.85784504799224]
We perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation.
Using equity markets and long-term bonds as test assets, we investigate neural networks.
We find that our networks perform significantly worse when fed with substantially less data.
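A minimal sketch of the Monte-Carlo VaR baseline: simulate returns and read the threshold off the lower tail of the simulated distribution. The normal return model and its parameters are invented; the cited paper investigates neural networks for this task.

```python
# Hedged sketch of Monte-Carlo Value-at-Risk estimation.
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 0.0005, 0.012            # assumed daily drift and volatility
returns = rng.normal(mu, sigma, size=100_000)

alpha = 0.99                          # confidence level
var_99 = -np.quantile(returns, 1 - alpha)   # loss exceeded 1% of the time
print(f"1-day 99% VaR: {var_99:.4%} of portfolio value")
```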
arXiv Detail & Related papers (2020-05-04T17:41:59Z)