On the Noise Stability and Robustness of Adversarially Trained Networks
on NVM Crossbars
- URL: http://arxiv.org/abs/2109.09060v2
- Date: Thu, 18 May 2023 21:42:49 GMT
- Title: On the Noise Stability and Robustness of Adversarially Trained Networks
on NVM Crossbars
- Authors: Chun Tao, Deboleena Roy, Indranil Chakraborty, Kaushik Roy
- Abstract summary: We study the design of robust Deep Neural Networks (DNNs) through the amalgamation of adversarial training and intrinsic robustness of NVM crossbar-based analog hardware.
Our results indicate that implementing adversarially trained networks on analog hardware requires careful calibration between hardware non-idealities and $\epsilon_{train}$ for optimum robustness and performance.
- Score: 6.506883928959601
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Applications based on Deep Neural Networks (DNNs) have grown exponentially in
the past decade. To match their increasing computational needs, several
Non-Volatile Memory (NVM) crossbar-based accelerators have been proposed.
Recently, researchers have shown that, apart from improved energy efficiency and
performance, such approximate hardware also possesses intrinsic robustness
against adversarial attacks. Prior works quantified this intrinsic
robustness for vanilla DNNs trained on unperturbed inputs. However, adversarial
training of DNNs is the benchmark technique for robustness, and sole reliance
on intrinsic robustness of the hardware may not be sufficient. In this work, we
explore the design of robust DNNs through the amalgamation of adversarial
training and intrinsic robustness of NVM crossbar-based analog hardware. First,
we study the noise stability of such networks on unperturbed inputs and observe
that internal activations of adversarially trained networks have a lower
Signal-to-Noise Ratio (SNR) and are more sensitive to noise than those of
vanilla networks. As a result, they suffer, on average, a 2x performance
degradation from the approximate computations on analog hardware. These
noise-stability analyses expose the instability of adversarially trained
DNNs under analog non-idealities. On the other hand, for
adversarial images generated using Square Black Box attacks, ResNet-10/20
adversarially trained on CIFAR-10/100 display a robustness gain of 20-30%. For
adversarial images generated using Projected-Gradient-Descent (PGD) White-Box
attacks, adversarially trained DNNs present a 5-10% gain in robust accuracy due
to underlying NVM crossbar when $\epsilon_{attack}$ is greater than
$\epsilon_{train}$. Our results indicate that implementing adversarially
trained networks on analog hardware requires careful calibration between
hardware non-idealities and $\epsilon_{train}$ for optimum robustness and
performance.
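The noise-stability study compares internal activations computed ideally with those computed under analog non-idealities. As a rough, minimal sketch of such a measurement, the PyTorch snippet below injects activation-level Gaussian noise after every Conv/Linear layer and reports per-layer SNR; the `activation_snr` helper, the additive noise model, and its magnitude are illustrative assumptions, not the paper's crossbar simulator.

```python
# Minimal sketch: per-layer SNR of activations when analog non-idealities
# are approximated as additive Gaussian noise on each Conv/Linear output.
import torch
import torch.nn as nn

def activation_snr(model: nn.Module, x: torch.Tensor, noise_std: float = 0.05):
    """Return per-layer SNR (in dB) of clean vs. noise-perturbed activations."""
    model.eval()
    clean, noisy = [], []

    def clean_hook(_, __, out):
        clean.append(out.detach())

    def noisy_hook(_, __, out):
        # Crude stand-in for crossbar read noise: additive Gaussian noise
        # scaled by the mean activation magnitude.
        perturbed = out + noise_std * out.detach().abs().mean() * torch.randn_like(out)
        noisy.append(perturbed.detach())
        return perturbed  # replacing the output lets the noise accumulate downstream

    layers = [m for m in model.modules() if isinstance(m, (nn.Conv2d, nn.Linear))]

    handles = [m.register_forward_hook(clean_hook) for m in layers]
    with torch.no_grad():
        model(x)  # ideal (digital) forward pass
    for h in handles:
        h.remove()

    handles = [m.register_forward_hook(noisy_hook) for m in layers]
    with torch.no_grad():
        model(x)  # approximate (analog-like) forward pass
    for h in handles:
        h.remove()

    # SNR per layer: clean signal power over accumulated error power.
    return [10.0 * torch.log10(c.pow(2).mean()
                               / (n - c).pow(2).mean().clamp_min(1e-12)).item()
            for c, n in zip(clean, noisy)]
```

Comparing these per-layer SNRs between a vanilla and an adversarially trained checkpoint would surface the qualitative observation above: the adversarially trained network's activations sit at lower SNR under the same injected noise.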
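The white-box results are stated in terms of Projected Gradient Descent (PGD) with a perturbation budget $\epsilon_{attack}$ that may exceed the $\epsilon_{train}$ used during adversarial training. Below is a minimal, standard L-infinity PGD sketch to make that comparison concrete; the step size, step count, and the `model`/`images`/`labels` placeholders are illustrative, not the paper's exact settings.

```python
# Standard L-infinity PGD attack: iterated signed-gradient ascent with
# projection back onto the eps_attack ball around the clean input.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps_attack=8 / 255, alpha=2 / 255, steps=10):
    # Random start inside the L-inf ball, as in standard PGD.
    x_adv = (x + torch.empty_like(x).uniform_(-eps_attack, eps_attack)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                  # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps_attack),  # project onto
                              x + eps_attack).clamp(0, 1)        # the eps ball
    return x_adv.detach()

# Usage sketch: robust accuracy in the eps_attack > eps_train regime,
# e.g. a model adversarially trained at 4/255 but attacked at 8/255.
# x_adv = pgd_attack(model, images, labels, eps_attack=8 / 255)
# robust_acc = (model(x_adv).argmax(1) == labels).float().mean().item()
```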
Related papers
- The Inherent Adversarial Robustness of Analog In-Memory Computing [2.435021773579434]
A key challenge for Deep Neural Network (DNN) algorithms is their vulnerability to adversarial attacks.
In this paper, we experimentally validate the conjectured inherent robustness of Analog In-Memory Computing (AIMC) for the first time on an AIMC chip based on Phase Change Memory (PCM) devices.
Additional robustness is also observed when performing hardware-in-the-loop attacks.
arXiv Detail & Related papers (2024-11-11T14:29:59Z)
- XploreNAS: Explore Adversarially Robust & Hardware-efficient Neural Architectures for Non-ideal Xbars [2.222917681321253]
This work proposes a two-phase algorithm-hardware co-optimization approach called XploreNAS.
It searches for hardware-efficient & adversarially robust neural architectures for non-ideal crossbar platforms.
Experiments on crossbars with benchmark datasets show up to an 8-16% improvement in the adversarial robustness of the searched Subnets.
arXiv Detail & Related papers (2023-02-15T16:44:18Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations of the inputs can alter or manipulate the classification.
To defend against such attacks, adversarial training (AT) has proven to be an effective approach.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Networks (SNNs) are a promising energy-efficient AI model when implemented on neuromorphic hardware.
Efficiently training SNNs is challenging due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which can achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Robustness Certificates for Implicit Neural Networks: A Mixed Monotone Contractive Approach [60.67748036747221]
Implicit neural networks offer competitive performance and reduced memory consumption.
However, they can remain brittle with respect to adversarial input perturbations.
This paper proposes a theoretical and computational framework for robustness verification of implicit neural networks.
arXiv Detail & Related papers (2021-12-10T03:08:55Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training with Crafted Input Noise [13.904091056365765]
We present an SNN training algorithm that uses crafted input noise and incurs no additional training time.
Compared to standard-trained direct-input SNNs, our trained models yield classification accuracy improvements of up to 13.7%.
Our models also outperform inherently robust SNNs trained on rate-coded inputs, achieving improved or similar classification performance on attack-generated images.
arXiv Detail & Related papers (2021-10-06T16:48:48Z)
- On the Intrinsic Robustness of NVM Crossbars Against Adversarial Attacks [6.592909460916497]
We show that the non-ideal behavior of analog computing lowers the effectiveness of adversarial attacks.
In a non-adaptive attack, where the attacker is unaware of the analog hardware, we observe that analog computing offers a varying degree of intrinsic robustness.
arXiv Detail & Related papers (2020-08-27T09:36:50Z)
- Rethinking Non-idealities in Memristive Crossbars for Adversarial Robustness in Neural Networks [2.729253370269413]
Deep Neural Networks (DNNs) have been shown to be prone to adversarial attacks.
Crossbar non-idealities have typically been dismissed as a drawback, since they cause errors in the underlying Matrix-Vector Multiplications (MVMs).
We show that these intrinsic hardware non-idealities yield adversarial robustness to the mapped DNNs without any additional optimization; a toy noisy-MVM sketch follows this list.
arXiv Detail & Related papers (2020-08-25T22:45:34Z)
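To make the crossbar non-idealities discussed in the last entries concrete, the toy model below computes an MVM the way an NVM crossbar approximately would: weights snapped to a finite set of conductance levels and perturbed by per-read device noise. The uniform quantization scheme, the multiplicative Gaussian noise, and all parameter values are illustrative assumptions, not a calibrated device model.

```python
# Toy NVM-crossbar MVM: y = W @ x computed with weights quantized to a
# finite number of conductance levels and perturbed by per-read noise.
import numpy as np

def crossbar_mvm(W, x, n_levels=32, noise_std=0.02, rng=None):
    """Approximate y = W @ x as a noisy, quantized analog crossbar would."""
    if rng is None:
        rng = np.random.default_rng()
    w_max = np.abs(W).max()
    step = 2 * w_max / (n_levels - 1)   # uniform conductance step
    W_q = np.round(W / step) * step     # snap weights to discrete levels
    # Multiplicative Gaussian noise as a stand-in for device variation.
    W_noisy = W_q * (1 + noise_std * rng.standard_normal(W.shape))
    return W_noisy @ x

# Relative error of the analog-style MVM versus the ideal product.
rng = np.random.default_rng(0)
W, x = 0.1 * rng.standard_normal((64, 128)), rng.standard_normal(128)
y_ideal, y_analog = W @ x, crossbar_mvm(W, x, rng=rng)
print(f"relative MVM error: {np.linalg.norm(y_analog - y_ideal) / np.linalg.norm(y_ideal):.3f}")
```

It is exactly this kind of bounded, input-dependent error that the works above exploit as an intrinsic defense, and that the main paper shows must be balanced against the noise sensitivity of adversarially trained activations.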