Scale-Dropout: Estimating Uncertainty in Deep Neural Networks Using
Stochastic Scale
- URL: http://arxiv.org/abs/2311.15816v2
- Date: Thu, 11 Jan 2024 12:46:57 GMT
- Title: Scale-Dropout: Estimating Uncertainty in Deep Neural Networks Using
Stochastic Scale
- Authors: Soyed Tuhin Ahmed, Kamal Danouchi, Michael Hefenbrock, Guillaume
Prenat, Lorena Anghel, Mehdi B. Tahoori
- Abstract summary: Uncertainty estimation in Neural Networks (NNs) is vital in improving reliability and confidence in predictions, particularly in safety-critical applications.
Bayesian Neural Networks (BayNNs) with Dropout as an approximation offer a systematic approach to quantifying uncertainty, but they inherently suffer from high hardware overhead in terms of power, memory, and computation.
We introduce a novel Spintronic memory-based CIM architecture for the proposed BayNN that achieves more than $100\times$ energy savings compared to the state-of-the-art.
- Score: 0.7025445595542577
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Uncertainty estimation in Neural Networks (NNs) is vital in improving
reliability and confidence in predictions, particularly in safety-critical
applications. Bayesian Neural Networks (BayNNs) with Dropout as an
approximation offer a systematic approach to quantifying uncertainty, but they
inherently suffer from high hardware overhead in terms of power, memory, and
computation. Thus, the applicability of BayNNs to edge devices with limited
resources or to high-performance applications is challenging. Some of the
inherent costs of BayNNs can be reduced by accelerating them in hardware on a
Computation-In-Memory (CIM) architecture with spintronic memories and
binarizing their parameters. However, numerous stochastic units are required to
implement conventional dropout-based BayNN. In this paper, we propose the Scale
Dropout, a novel regularization technique for Binary Neural Networks (BNNs),
and Monte Carlo-Scale Dropout (MC-Scale Dropout)-based BayNNs for efficient
uncertainty estimation. Our approach requires only one stochastic unit for the
entire model, irrespective of the model size, leading to a highly scalable
Bayesian NN. Furthermore, we introduce a novel Spintronic memory-based CIM
architecture for the proposed BayNN that achieves more than $100\times$ energy
savings compared to the state-of-the-art. We validated our method to show up to
a $1\%$ improvement in predictive performance and superior uncertainty
estimates compared to related works.
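A code-level description is not given in the abstract, so the following is only a rough PyTorch-style sketch of the stated idea: a learnable scale vector that is stochastically dropped, plus repeated stochastic forward passes for Monte Carlo uncertainty estimates. All names (`ScaleDropoutLinear`, `mc_predict`, `drop_prob`) are illustrative assumptions, weight binarization is omitted, and the paper's single model-wide stochastic unit is simplified here to one Bernoulli draw per layer.
```python
import torch
import torch.nn as nn

class ScaleDropoutLinear(nn.Module):
    """Illustrative guess at a Scale-Dropout layer: a learnable output scale
    that is stochastically dropped (replaced by 1), not the authors' code."""
    def __init__(self, in_features, out_features, drop_prob=0.1):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.scale = nn.Parameter(torch.ones(out_features))  # learnable scale vector
        self.drop_prob = drop_prob

    def forward(self, x):
        # A single Bernoulli draw decides whether the whole scale vector is
        # applied or dropped, so only one stochastic bit is needed per draw.
        keep = (torch.rand(()) > self.drop_prob).float()
        effective_scale = keep * self.scale + (1.0 - keep) * torch.ones_like(self.scale)
        return self.linear(x) * effective_scale

@torch.no_grad()
def mc_predict(model, x, num_samples=20):
    """Monte Carlo estimate of the predictive distribution by repeating the
    stochastic forward pass and averaging the softmax outputs."""
    model.train()  # keep the stochastic scale active at inference time
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(num_samples)])
    return probs.mean(dim=0), probs.std(dim=0)  # predictive mean and per-class spread

# Usage sketch on random data
model = nn.Sequential(ScaleDropoutLinear(16, 32), nn.ReLU(), ScaleDropoutLinear(32, 10))
mean, std = mc_predict(model, torch.randn(4, 16))
```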
Related papers
- Bayesian Entropy Neural Networks for Physics-Aware Prediction [14.705526856205454]
We introduce BENN, a framework designed to impose constraints on Bayesian Neural Network (BNN) predictions.
BENN is capable of constraining not only the predicted values but also their derivatives and variances, ensuring a more robust and reliable model output.
Results highlight significant improvements over traditional BNNs and showcase competitive performance relative to contemporary constrained deep learning methods.
arXiv Detail & Related papers (2024-07-01T07:00:44Z)
- Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness [47.9744734181236]
We explore the concept of Lipschitz continuity to certify the robustness of deep neural networks (DNNs) against adversarial attacks.
We propose a novel algorithm that remaps the input domain into a constrained range, reducing the Lipschitz constant and potentially enhancing robustness.
Our method achieves the best robust accuracy on the CIFAR-10, CIFAR-100, and ImageNet datasets on the RobustBench leaderboard.
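The summary does not specify the remapping, so the snippet below is only a generic illustration of the principle rather than the paper's data-driven algorithm: an affine squeeze of the input range has slope below one, which multiplies into the Lipschitz bound of the remap-plus-network composition (the `lo`/`hi` bounds are arbitrary placeholders).
```python
import torch

def remap_inputs(x, data_min, data_max, lo=0.25, hi=0.75):
    # Affinely squeeze [data_min, data_max] into the narrower range [lo, hi].
    # The slope (hi - lo) / (data_max - data_min) < 1 scales down the Lipschitz
    # bound of remap-then-network; bounds here are placeholders, not the paper's.
    scale = (hi - lo) / (data_max - data_min)
    return lo + (x - data_min) * scale
```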
arXiv Detail & Related papers (2024-06-28T03:10:36Z)
- Enhancing Reliability of Neural Networks at the Edge: Inverted Normalization with Stochastic Affine Transformations [0.22499166814992438]
We propose a method to inherently enhance the robustness and inference accuracy of BayNNs deployed in in-memory computing architectures.
Empirical results show a graceful degradation in inference accuracy, with an improvement of up to $58.11\%$.
arXiv Detail & Related papers (2024-01-23T00:27:31Z)
- Testing Spintronics Implemented Monte Carlo Dropout-Based Bayesian Neural Networks [0.7537220883022466]
Bayesian Neural Networks (BayNNs) can inherently estimate predictive uncertainty, facilitating informed decision-making.
Dropout-based BayNNs are increasingly implemented in spintronics-based computation-in-memory architectures for resource-constrained yet high-performance safety-critical applications.
We present for the first time the model of the non-idealities of the spintronics-based Dropout module and analyze their impact on uncertainty estimates and accuracy.
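For context, a standard MC-Dropout recipe for turning repeated stochastic passes into a prediction plus an uncertainty-based rejection rule is sketched below; this is the generic software baseline, not the spintronic implementation or the non-ideality model the paper analyzes, and the names and threshold are illustrative.
```python
import torch

def mc_dropout_decision(model, x, num_samples=30, entropy_threshold=1.0):
    """Generic MC-Dropout: keep dropout active at inference, average softmax
    outputs over repeated passes, and flag inputs with high predictive entropy."""
    model.train()  # keeps nn.Dropout layers stochastic during inference
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(num_samples)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    prediction = mean_probs.argmax(dim=-1)
    reject = entropy > entropy_threshold  # defer uncertain inputs to a fallback
    return prediction, entropy, reject
```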
arXiv Detail & Related papers (2024-01-09T09:42:27Z)
- An Automata-Theoretic Approach to Synthesizing Binarized Neural Networks [13.271286153792058]
Quantized neural networks (QNNs) have been developed, with binarized neural networks (BNNs) restricted to binary values as a special case.
This paper presents an automata-theoretic approach to synthesizing BNNs that meet designated properties.
arXiv Detail & Related papers (2023-07-29T06:27:28Z)
- Spatial-SpinDrop: Spatial Dropout-based Binary Bayesian Neural Network with Spintronics Implementation [1.3603499630771996]
We introduce MC-SpatialDropout, a spatial dropout-based approximate BayNN built on emerging spintronic devices.
The number of dropout modules per network layer is reduced by a factor of $9\times$ and energy consumption by a factor of $94.11\times$, while still achieving comparable predictive performance and uncertainty estimates.
arXiv Detail & Related papers (2023-06-16T21:38:13Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
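As a point of reference, plain interval bound propagation through one explicit linear-plus-ReLU layer looks like the sketch below; the paper's contribution, the reachability treatment of the implicit fixed-point layers in an INN, is not reproduced here, and the helper names are illustrative.
```python
import torch

def interval_linear(l, u, W, b):
    """Propagate an elementwise input interval [l, u] through x -> W @ x + b.
    Splitting W into positive and negative parts keeps the bounds sound."""
    W_pos, W_neg = W.clamp(min=0), W.clamp(max=0)
    lower = W_pos @ l + W_neg @ u + b
    upper = W_pos @ u + W_neg @ l + b
    return lower, upper

def interval_relu(l, u):
    """ReLU is monotone, so the interval is simply clamped at zero."""
    return l.clamp(min=0), u.clamp(min=0)

# Usage sketch: bound the outputs over an eps-ball around one input point.
x, eps = torch.randn(4), 0.1
W, b = torch.randn(3, 4), torch.randn(3)
lower, upper = interval_relu(*interval_linear(x - eps, x + eps, W, b))
```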
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach can greatly reduce the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z)
- Robustness Certificates for Implicit Neural Networks: A Mixed Monotone Contractive Approach [60.67748036747221]
Implicit neural networks offer competitive performance and reduced memory consumption.
However, they can remain brittle with respect to adversarial input perturbations.
This paper proposes a theoretical and computational framework for robustness verification of implicit neural networks.
arXiv Detail & Related papers (2021-12-10T03:08:55Z)
- Neural Networks and Value at Risk [59.85784504799224]
We perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation.
Using equity markets and long term bonds as test assets, we investigate neural networks.
We find that our networks, when fed substantially less data, perform significantly worse.
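A minimal sketch of Monte-Carlo VaR estimation is given below for orientation; the Gaussian return model and the parameter values are placeholders (the paper instead studies neural networks fed with such simulations), and the function name is illustrative.
```python
import numpy as np

def monte_carlo_var(mu, sigma, num_sims=100_000, alpha=0.01, seed=0):
    """Estimate the (1 - alpha) Value-at-Risk of a one-period return by
    simulating returns and taking the alpha-quantile of the distribution."""
    rng = np.random.default_rng(seed)
    simulated_returns = rng.normal(mu, sigma, size=num_sims)
    return -np.quantile(simulated_returns, alpha)  # loss exceeded with probability alpha

# e.g. a daily return with 0.05% mean and 1.2% volatility (placeholder numbers)
var_99 = monte_carlo_var(mu=0.0005, sigma=0.012)
```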
arXiv Detail & Related papers (2020-05-04T17:41:59Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are very attractive to industry because of their extremely cheap calculation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)