Rethinking Non-idealities in Memristive Crossbars for Adversarial
Robustness in Neural Networks
- URL: http://arxiv.org/abs/2008.11298v2
- Date: Wed, 28 Apr 2021 00:45:20 GMT
- Title: Rethinking Non-idealities in Memristive Crossbars for Adversarial
Robustness in Neural Networks
- Authors: Abhiroop Bhattacharjee and Priyadarshini Panda
- Abstract summary: Deep Neural Networks (DNNs) have been shown to be prone to adversarial attacks.
Crossbar non-idealities have generally been viewed as purely detrimental, since they cause errors in performing MVMs.
We show that the intrinsic hardware non-idealities yield adversarial robustness to the mapped DNNs without any additional optimization.
- Score: 2.729253370269413
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) have been shown to be prone to adversarial
attacks. Memristive crossbars, being able to perform
Matrix-Vector-Multiplications (MVMs) efficiently, are used to realize DNNs on
hardware. However, crossbar non-idealities have generally been viewed as purely detrimental, since they cause errors in MVMs that degrade the computational accuracy of
DNNs. Several software-based defenses have been proposed to make DNNs
adversarially robust. However, no previous work has demonstrated the advantage that crossbar non-idealities can confer in terms of adversarial robustness.
We show that the intrinsic hardware non-idealities yield adversarial robustness
to the mapped DNNs without any additional optimization. We evaluate the
adversarial resilience of state-of-the-art DNNs (VGG8 & VGG16 networks) using
benchmark datasets (CIFAR-10, CIFAR-100 & Tiny ImageNet) across various
crossbar sizes. We find that crossbar non-idealities yield significantly greater adversarial robustness (>10-20%) in crossbar-mapped DNNs than in baseline
software DNNs. We further compare our approach with other state-of-the-art efficiency-driven adversarial defenses and find that it is notably effective at reducing adversarial loss.
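The mechanism at the heart of the paper's claim, non-idealities perturbing crossbar MVMs, can be illustrated with a minimal NumPy sketch. The multiplicative Gaussian conductance-variation model and the sigma value below are illustrative assumptions, not the paper's device model (real crossbars also exhibit IR drop, sneak paths, and quantization effects).

```python
import numpy as np

rng = np.random.default_rng(0)

def ideal_mvm(W, x):
    """Ideal matrix-vector multiplication, as in software inference."""
    return W @ x

def nonideal_mvm(W, x, sigma=0.05):
    """MVM on a simulated memristive crossbar.

    Weights are stored as conductances; here device-level non-idealities
    are modeled as multiplicative Gaussian noise on each conductance.
    (Illustrative model only.)
    """
    G = W * (1.0 + sigma * rng.standard_normal(W.shape))
    return G @ x

W = rng.standard_normal((4, 8))   # one layer's weight matrix
x = rng.standard_normal(8)        # input activation vector

print("ideal    :", ideal_mvm(W, x))
print("non-ideal:", nonideal_mvm(W, x))
```

Attacks crafted against the ideal software model then face a perturbed function on the hardware, one intuition consistent with the robustness gains reported above.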
Related papers
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep
Neural Networks [52.09243852066406]
The Adversarial Converging Time Score (ACTS) uses the time an adversarial attack takes to converge as a robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
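ACTS ties robustness to how quickly an attack converges. The sketch below illustrates that idea with a deliberately simple proxy: counting gradient-sign attack iterations until a model's prediction flips. The model, step size, and stopping rule are illustrative assumptions; this is not the paper's exact ACTS definition.

```python
import torch
import torch.nn.functional as F

def steps_until_flip(model, x, y, step=0.01, max_steps=100):
    """Count gradient-sign attack iterations until the predicted class
    changes; more steps suggests a greater distance to the decision
    boundary (a proxy for the converging-time idea behind ACTS)."""
    x_adv = x.clone().detach().requires_grad_(True)
    for t in range(1, max_steps + 1):
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + step * grad.sign()).detach().requires_grad_(True)
        if (model(x_adv).argmax(dim=1) != y).item():
            return t
    return max_steps

model = torch.nn.Linear(10, 3)
x = torch.randn(1, 10)
y = model(x).argmax(dim=1)   # treat the clean prediction as the label
print("steps until prediction flips:", steps_until_flip(model, x, y))
```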
- XploreNAS: Explore Adversarially Robust & Hardware-efficient Neural Architectures for Non-ideal Xbars [2.222917681321253]
This work proposes a two-phase algorithm-hardware co-optimization approach called XploreNAS.
It searches for hardware-efficient & adversarially robust neural architectures for non-ideal crossbar platforms.
Experiments on crossbars with benchmark datasets show up to 8-16% improvement in the adversarial robustness of the searched Subnets.
arXiv Detail & Related papers (2023-02-15T16:44:18Z)
- Boosting Adversarial Robustness From The Perspective of Effective Margin
Regularization [58.641705224371876]
The adversarial vulnerability of deep neural networks (DNNs) has been actively investigated in the past several years.
This paper investigates the scale-variant property of cross-entropy loss, which is the most commonly used loss function in classification tasks.
We show that the proposed effective margin regularization (EMR) learns large effective margins and boosts the adversarial robustness in both standard and adversarial training.
arXiv Detail & Related papers (2022-10-11T03:16:56Z)
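The scale-variance issue the EMR entry targets can be made concrete with a standard construction: for a linear classifier, the raw logit gap grows when the weights are scaled even though the decision boundary is unchanged, whereas dividing by the weight-difference norm gives a scale-invariant "effective" margin. The sketch below demonstrates that invariance; it is a textbook illustration, not necessarily the paper's exact EMR objective.

```python
import torch

def effective_margin(W, b, x, y):
    """Scale-invariant margin of a linear classifier W x + b.

    The raw logit gap (logits[y] - logits[k]) grows if W and b are
    simply scaled, even though the decision boundary is unchanged;
    dividing by ||W[y] - W[k]|| removes that scale dependence.
    """
    logits = W @ x + b
    margins = []
    for k in range(W.shape[0]):
        if k == y:
            continue
        gap = logits[y] - logits[k]
        margins.append(gap / (W[y] - W[k]).norm())
    return torch.stack(margins).min()

W, b, x = torch.randn(3, 5), torch.randn(3), torch.randn(5)
y = (W @ x + b).argmax().item()
m1 = effective_margin(W, b, x, y)
m2 = effective_margin(10 * W, 10 * b, x, y)  # same boundary, scaled weights
print(m1, m2)  # equal: the effective margin is scale-invariant
```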
- Examining the Robustness of Spiking Neural Networks on Non-ideal Memristive Crossbars [4.184276171116354]
Spiking Neural Networks (SNNs) have emerged as a low-power alternative to Artificial Neural Networks (ANNs).
We study the effect of crossbar non-idealities and intrinsic stochasticity on the performance of SNNs.
arXiv Detail & Related papers (2022-06-20T07:07:41Z)
- Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
Training SNNs efficiently is challenging due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
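The non-differentiability challenge the DSR entry mentions comes from the hard spike threshold. A common workaround with the same goal, a surrogate gradient that replaces the Heaviside derivative in the backward pass, is sketched below; note this is the surrogate-gradient technique, not DSR itself, which instead differentiates through a spike-representation mapping.

```python
import torch

class SpikeSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth surrogate
    (derivative of a fast sigmoid) in the backward pass, so that
    gradients can flow through the non-differentiable spike."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()   # spike if membrane potential > threshold

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2
        return grad_out * surrogate

v = torch.randn(4, requires_grad=True)  # membrane potential minus threshold
spikes = SpikeSurrogate.apply(v)
spikes.sum().backward()
print(spikes, v.grad)
```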
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
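For a feedforward layer, the interval bounds this comparison refers to can be computed with standard interval bound propagation (IBP): split the weight matrix by sign to propagate an elementwise [lower, upper] box through the affine map, then through ReLU. A minimal sketch follows; the paper's reachability analysis for implicit layers is considerably more involved.

```python
import numpy as np

def ibp_linear(W, b, lo, hi):
    """Propagate an elementwise interval [lo, hi] through y = W x + b.
    Positive entries of W map lo->lo and hi->hi; negative entries swap them."""
    W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def ibp_relu(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

rng = np.random.default_rng(0)
W, b = rng.standard_normal((3, 4)), rng.standard_normal(3)
x, eps = rng.standard_normal(4), 0.1
lo, hi = ibp_relu(*ibp_linear(W, b, x - eps, x + eps))
print(lo, hi)   # guaranteed output bounds for all inputs within eps of x
```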
- Examining and Mitigating the Impact of Crossbar Non-idealities for Accurate Implementation of Sparse Deep Neural Networks [2.4283778735260686]
We show that sparse Deep Neural Networks (DNNs), when mapped onto non-ideal crossbars, can suffer severe accuracy losses compared to unpruned DNNs.
We propose two mitigation approaches: crossbar column rearrangement and Weight-Constrained-Training (WCT).
These help in mitigating non-idealities by increasing the proportion of low conductance synapses on crossbars, thereby improving their computational accuracies.
arXiv Detail & Related papers (2022-01-13T21:56:48Z)
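The Weight-Constrained-Training idea above, biasing a network toward small-magnitude weights so they map to low-conductance (less error-prone) synapses, can be sketched as a hard magnitude clamp after each optimizer step. The threshold value and clamp placement below are illustrative assumptions rather than the paper's prescribed recipe.

```python
import torch
import torch.nn.functional as F

def weight_constrained_step(model, optimizer, loss, w_max=0.1):
    """One optimizer step followed by a hard magnitude constraint.
    Keeping |w| <= w_max raises the proportion of low-conductance
    synapses when the weights are later mapped onto crossbar devices.
    (w_max = 0.1 is an illustrative choice, not the paper's value.)"""
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        for p in model.parameters():
            p.clamp_(-w_max, w_max)

model = torch.nn.Linear(8, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(16, 8), torch.randint(0, 2, (16,))
weight_constrained_step(model, opt, F.cross_entropy(model(x), y))
```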
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
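The randomness that makes BNNs robust enters at inference time by averaging predictions over sampled weights. The sketch below mimics this with fixed Gaussian weight noise as a stand-in posterior; BNN-DenseNet's actual variational posterior is learned during training, so treat this purely as an illustration of Monte-Carlo prediction.

```python
import torch

@torch.no_grad()
def mc_predict(model, x, n_samples=8, sigma=0.05):
    """Monte-Carlo prediction: average softmax outputs over several
    stochastic weight samples. The 'posterior' here is mimicked by
    fixed Gaussian weight noise (an illustrative stand-in for a
    learned variational posterior)."""
    probs = 0.0
    state = {k: v.clone() for k, v in model.state_dict().items()}
    for _ in range(n_samples):
        for p in model.parameters():
            p.add_(sigma * torch.randn_like(p))   # sample weights
        probs = probs + model(x).softmax(dim=-1)
        model.load_state_dict(state)              # restore mean weights
    return probs / n_samples

model = torch.nn.Linear(10, 3)
print(mc_predict(model, torch.randn(2, 10)))
```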
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- On the Noise Stability and Robustness of Adversarially Trained Networks
on NVM Crossbars [6.506883928959601]
We study the design of robust Deep Neural Networks (DNNs) through the amalgamation of adversarial training and intrinsic robustness of NVM crossbar-based analog hardware.
Our results indicate that implementing adversarially trained networks on analog hardware requires careful calibration between hardware non-idealities and $\epsilon_{train}$ for optimum robustness and performance.
arXiv Detail & Related papers (2021-09-19T04:59:39Z)
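The $\epsilon_{train}$ knob the entry above refers to is the perturbation budget used during adversarial training. A minimal Madry-style PGD training step with that budget exposed as a parameter is sketched below; the attack hyperparameters are conventional defaults, not the paper's calibrated settings.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps_train, step=None, iters=7):
    """L-inf PGD: eps_train is the budget that, per the paper, must be
    balanced against hardware non-idealities."""
    step = step or 2.5 * eps_train / iters
    x_adv = x + eps_train * (2 * torch.rand_like(x) - 1)  # random start
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
        x_adv = x_adv + step * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps_train, eps_train)  # project
    return x_adv.detach()

def adv_train_step(model, optimizer, x, y, eps_train=8 / 255):
    """Train on adversarial examples generated at budget eps_train."""
    loss = F.cross_entropy(model(pgd_attack(model, x, y, eps_train)), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```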
- An Integrated Approach to Produce Robust Models with High Efficiency [9.476463361600828]
Quantization and structure simplification are promising ways to adapt Deep Neural Networks (DNNs) to mobile devices.
In this work, we aim to obtain both features by applying a convergent relaxation quantization algorithm, Binary-Relax (BR), to a robust adversarially trained model, ResNets Ensemble.
We design a trade-off loss function that helps DNNs preserve their natural accuracy and improve the channel sparsity.
arXiv Detail & Related papers (2020-08-31T00:44:59Z)
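Binary-Relax replaces a hard quantization projection with a relaxed one that is tightened over training. The sketch below shows the generic pattern, blending float weights with their binary projection under a relaxation weight lambda; the exact BR update rule and schedule in the paper may differ.

```python
import torch

def binary_projection(w):
    """Hard projection onto {-a, +a}: sign(w) scaled by the mean magnitude."""
    return w.sign() * w.abs().mean()

def relaxed_quantize(w, lam):
    """Relaxed projection: a convex blend of the float weights and their
    binary projection. lam -> 0 keeps float weights; lam -> infinity
    recovers hard binarization. (Generic pattern; the BR paper's update
    rule and schedule may differ in detail.)"""
    return (lam * binary_projection(w) + w) / (1.0 + lam)

w = torch.randn(6)
for lam in (0.0, 1.0, 100.0):
    print(lam, relaxed_quantize(w, lam))
```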
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.