Enhancing Reliability of Neural Networks at the Edge: Inverted
Normalization with Stochastic Affine Transformations
- URL: http://arxiv.org/abs/2401.12416v1
- Date: Tue, 23 Jan 2024 00:27:31 GMT
- Title: Enhancing Reliability of Neural Networks at the Edge: Inverted
Normalization with Stochastic Affine Transformations
- Authors: Soyed Tuhin Ahmed, Kamal Danouchi, Guillaume Prenat, Lorena Anghel,
Mehdi B. Tahoori
- Abstract summary: We propose a method to inherently enhance the robustness and inference accuracy of BayNNs deployed in in-memory computing architectures.
Empirical results show a graceful degradation in inference accuracy, with an improvement of up to $58.11\%$.
- Score: 0.22499166814992438
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Bayesian Neural Networks (BayNNs) naturally provide uncertainty in their
predictions, making them a suitable choice in safety-critical applications.
Additionally, their realization using memristor-based in-memory computing (IMC)
architectures makes them suitable for resource-constrained edge applications.
Beyond predictive uncertainty, however, inherent robustness to noise in
computation is also essential to ensure functional safety.
In particular, memristor-based IMCs are susceptible to various sources of
non-idealities such as manufacturing and runtime variations, drift, and
failure, which can significantly reduce inference accuracy. In this paper, we
propose a method to inherently enhance the robustness and inference accuracy of
BayNNs deployed in IMC architectures. To achieve this, we introduce a novel
normalization layer combined with stochastic affine transformations. Empirical
results on various benchmark datasets show a graceful degradation in inference
accuracy under such non-idealities, together with an accuracy improvement of up to $58.11\%$.
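The abstract does not specify the exact form of the proposed inverted normalization, so the following is only a minimal, hypothetical sketch of the general idea of pairing a normalization layer with a stochastic affine transformation: the scale and shift applied after normalization are sampled around learned parameters on every forward pass, so repeated inferences yield a predictive distribution. The class name `StochasticAffineNorm` and the noise parameters `sigma_scale`/`sigma_shift` are illustrative assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch (not the authors' code): normalization followed
# by a stochastic affine transformation whose scale and shift are resampled on
# every forward pass.
import torch
import torch.nn as nn

class StochasticAffineNorm(nn.Module):
    def __init__(self, num_features: int, eps: float = 1e-5,
                 sigma_scale: float = 0.1, sigma_shift: float = 0.1):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_features))   # mean scale
        self.beta = nn.Parameter(torch.zeros(num_features))   # mean shift
        self.eps = eps
        self.sigma_scale = sigma_scale  # std. dev. of the random scale
        self.sigma_shift = sigma_shift  # std. dev. of the random shift

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize each feature over the batch dimension.
        mean = x.mean(dim=0, keepdim=True)
        var = x.var(dim=0, unbiased=False, keepdim=True)
        x_hat = (x - mean) / torch.sqrt(var + self.eps)
        # Sample scale and shift around the learned affine parameters.
        scale = self.gamma + self.sigma_scale * torch.randn_like(self.gamma)
        shift = self.beta + self.sigma_shift * torch.randn_like(self.beta)
        return x_hat * scale + shift

# Usage: repeated stochastic forward passes give Monte Carlo samples whose
# spread can serve as an uncertainty estimate.
layer = StochasticAffineNorm(num_features=16)
x = torch.randn(8, 16)
samples = torch.stack([layer(x) for _ in range(10)])  # shape (10, 8, 16)
```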
Related papers
- Variational Bayesian Bow tie Neural Networks with Shrinkage [0.276240219662896]
We build a relaxed version of the standard feed-forward rectified neural network.
We employ Polya-Gamma data augmentation tricks to render a conditionally linear and Gaussian model.
We derive a variational inference algorithm that avoids distributional assumptions and independence across layers.
arXiv Detail & Related papers (2024-11-17T17:36:30Z) - Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness [47.9744734181236]
We explore the concept of Lipschitz continuity to certify the robustness of deep neural networks (DNNs) against adversarial attacks.
We propose a novel algorithm that remaps the input domain into a constrained range, reducing the Lipschitz constant and potentially enhancing robustness.
Our method achieves the best robust accuracy for CIFAR10, CIFAR100, and ImageNet datasets on the RobustBench leaderboard.
arXiv Detail & Related papers (2024-06-28T03:10:36Z) - Scale-Dropout: Estimating Uncertainty in Deep Neural Networks Using
Stochastic Scale [0.7025445595542577]
Uncertainty estimation in Neural Networks (NNs) is vital for improving reliability and confidence in predictions, particularly in safety-critical applications.
BayNNs with Dropout as an approximation offer a systematic approach to quantifying uncertainty, but they inherently suffer from high hardware overhead in terms of power, memory, and computation (a minimal Monte Carlo dropout sketch is given after this list).
We introduce a novel Spintronic memory-based CIM architecture for the proposed BayNN that achieves more than $100\times$ energy savings compared to the state-of-the-art.
arXiv Detail & Related papers (2023-11-27T13:41:20Z) - Achieving Constraints in Neural Networks: A Stochastic Augmented
Lagrangian Approach [49.1574468325115]
Regularizing Deep Neural Networks (DNNs) is essential for improving generalizability and preventing overfitting.
We propose a novel approach to DNN regularization by framing the training process as a constrained optimization problem.
We employ the Stochastic Augmented Lagrangian (SAL) method to achieve a more flexible and efficient regularization mechanism.
arXiv Detail & Related papers (2023-10-25T13:55:35Z) - Negative Feedback Training: A Novel Concept to Improve Robustness of NVCIM DNN Accelerators [11.832487701641723]
Non-volatile memory (NVM) devices excel in energy efficiency and latency when performing Deep Neural Network (DNN) inference.
We propose a novel training concept, Negative Feedback Training (NFT), which leverages the multi-scale noisy information captured from the network.
Our methods outperform existing state-of-the-art methods with up to a 46.71% improvement in inference accuracy.
arXiv Detail & Related papers (2023-05-23T22:56:26Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Efficient Micro-Structured Weight Unification and Pruning for Neural
Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially on resource-limited devices.
Previous unstructured or structured weight pruning methods rarely deliver real inference acceleration.
We propose a generalized weight unification framework at a hardware-compatible micro-structured level to achieve a high degree of compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z) - Attribute-Guided Adversarial Training for Robustness to Natural
Perturbations [64.35805267250682]
We propose an adversarial training approach that learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z) - Scalable Uncertainty for Computer Vision with Functional Variational
Inference [18.492485304537134]
We leverage the formulation of variational inference in function space.
We obtain predictive uncertainty estimates at the cost of a single forward pass through any chosen CNN architecture.
We propose numerically efficient algorithms which enable fast training in the context of high-dimensional tasks.
arXiv Detail & Related papers (2020-03-06T19:09:42Z)
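For the Scale-Dropout entry above, the common route from a dropout-approximated BayNN to an uncertainty estimate is Monte Carlo sampling at inference time. The sketch below is a generic Monte Carlo dropout example, not the Scale-Dropout method or its spintronic CIM implementation; the model architecture and sample count are arbitrary illustrations.

```python
# Generic Monte Carlo dropout sketch (illustrative only): dropout stays active
# at inference time, and the mean/variance over repeated stochastic passes
# serve as the prediction and its uncertainty.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 10),
)

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    model.train()  # keep dropout stochastic during inference
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.var(dim=0)  # predictive mean and variance

mean, var = mc_dropout_predict(model, torch.randn(4, 32))
```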