On the Intrinsic Robustness of NVM Crossbars Against Adversarial Attacks
- URL: http://arxiv.org/abs/2008.12016v2
- Date: Mon, 15 Mar 2021 19:48:27 GMT
- Title: On the Intrinsic Robustness of NVM Crossbars Against Adversarial Attacks
- Authors: Deboleena Roy, Indranil Chakraborty, Timur Ibrayev and Kaushik Roy
- Abstract summary: We show that the non-ideal behavior of analog computing lowers the effectiveness of adversarial attacks.
In a non-adaptive attack, where the attacker is unaware of the analog hardware, we observe that analog computing offers a varying degree of intrinsic robustness.
- Score: 6.592909460916497
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing computational demand of Deep Learning has propelled research
in special-purpose inference accelerators based on emerging non-volatile memory
(NVM) technologies. Such NVM crossbars promise fast and energy-efficient
in-situ Matrix Vector Multiplication (MVM), thus alleviating the long-standing
von Neumann bottleneck in today's digital hardware. However, the analog nature
of computing in these crossbars is inherently approximate and results in
deviations from ideal output values, which reduces the overall performance of
Deep Neural Networks (DNNs) under normal circumstances. In this paper, we study
the impact of these non-idealities under adversarial circumstances. We show
that the non-ideal behavior of analog computing lowers the effectiveness of
adversarial attacks, in both Black-Box and White-Box attack scenarios. In a
non-adaptive attack, where the attacker is unaware of the analog hardware, we
observe that analog computing offers a varying degree of intrinsic robustness,
with a peak adversarial accuracy improvement of 35.34%, 22.69%, and 9.90% for
White-Box PGD attacks (epsilon=1/255, iter=30) on CIFAR-10, CIFAR-100, and ImageNet,
respectively. We also demonstrate "Hardware-in-Loop" adaptive attacks that
circumvent this robustness by utilizing the knowledge of the NVM model.
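To make the setup concrete, below is a minimal PyTorch sketch of the two ingredients the abstract combines: a functional (not device-accurate) stand-in for a non-ideal crossbar MVM, and a white-box PGD attack using the parameters quoted above (epsilon=1/255, 30 iterations). The Gaussian noise model, the noise level, the layer sizes, and the random data are illustrative assumptions, not the authors' calibrated NVM crossbar model.

```python
# Illustrative sketch only: a noisy-MVM layer standing in for NVM crossbar
# non-idealities, plus a white-box L-inf PGD attack (eps = 1/255, 30 iterations).
import torch
import torch.nn as nn
import torch.nn.functional as F


class NoisyCrossbarLinear(nn.Linear):
    """Fully connected layer whose matrix-vector product emulates an analog crossbar.

    The ideal product is perturbed by a data-dependent Gaussian term that lumps
    together device variation, IR drop, and ADC quantization into one knob.
    """

    def __init__(self, in_features, out_features, noise_std=0.05):
        super().__init__(in_features, out_features)
        self.noise_std = noise_std  # assumed relative noise level, not a measured value

    def forward(self, x):
        ideal = F.linear(x, self.weight, self.bias)
        noise = torch.randn_like(ideal) * self.noise_std * ideal.abs()
        return ideal + noise


def pgd_attack(model, x, y, eps=1 / 255, iters=30, step=None):
    """White-box L-inf PGD: ascend the loss gradient, project back into the eps-ball."""
    step = step if step is not None else 2.5 * eps / iters
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project into eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                          # keep valid image range
    return x_adv.detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy classifier on CIFAR-10-sized inputs; the architecture is illustrative only.
    model = nn.Sequential(
        nn.Flatten(),
        NoisyCrossbarLinear(3 * 32 * 32, 256), nn.ReLU(),
        NoisyCrossbarLinear(256, 10),
    )
    x = torch.rand(8, 3, 32, 32)    # stand-in batch of images in [0, 1]
    y = torch.randint(0, 10, (8,))  # stand-in labels
    x_adv = pgd_attack(model, x, y, eps=1 / 255, iters=30)
    with torch.no_grad():
        clean_acc = (model(x).argmax(1) == y).float().mean().item()
        adv_acc = (model(x_adv).argmax(1) == y).float().mean().item()
    print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

Crafting the perturbation on a copy of the network with noise_std=0 and then evaluating it on the noisy model roughly mimics the non-adaptive setting described above, whereas attacking the noisy model directly corresponds to the "Hardware-in-Loop" adaptive setting.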
Related papers
- The Inherent Adversarial Robustness of Analog In-Memory Computing [2.435021773579434]
A key challenge for Deep Neural Network (DNN) algorithms is their vulnerability to adversarial attacks.
In this paper, we experimentally validate the conjecture that analog in-memory computing is inherently robust to such attacks, for the first time on an AIMC chip based on Phase Change Memory (PCM) devices.
Additional robustness is also observed when performing hardware-in-the-loop attacks.
arXiv Detail & Related papers (2024-11-11T14:29:59Z)
- Pruning random resistive memory for optimizing analogue AI [54.21621702814583]
AI models pose unprecedented challenges in terms of energy consumption and environmental sustainability.
One promising solution is to revisit analogue computing, a technique that predates digital computing.
Here, we report a universal solution, software-hardware co-design using structural plasticity-inspired edge pruning.
arXiv Detail & Related papers (2023-11-13T08:59:01Z)
- DNNShield: Dynamic Randomized Model Sparsification, A Defense Against Adversarial Machine Learning [2.485182034310304]
We propose a hardware-accelerated defense against adversarial machine learning attacks.
DNNSHIELD adapts the strength of the response to the confidence of the adversarial input.
We show an adversarial detection rate of 86% when applied to VGG16 and 88% when applied to ResNet50.
arXiv Detail & Related papers (2022-07-31T19:29:44Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach, known as adversarial training (AT), has been shown to improve model robustness.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses [82.3052187788609]
Adversarial training (AT) is considered to be one of the most reliable defenses against adversarial attacks.
Recent works show generalization improvement with adversarial samples under novel threat models.
We propose a novel threat model called the Joint Space Threat Model (JSTM).
Under JSTM, we develop novel adversarial attacks and defenses.
arXiv Detail & Related papers (2021-12-12T21:08:14Z)
- On the Noise Stability and Robustness of Adversarially Trained Networks on NVM Crossbars [6.506883928959601]
We study the design of robust Deep Neural Networks (DNNs) through the amalgamation of adversarial training and intrinsic robustness of NVM crossbar-based analog hardware.
Our results indicate that implementing adversarially trained networks on analog hardware requires careful calibration between hardware non-idealities and $\epsilon_{train}$ for optimum robustness and performance.
arXiv Detail & Related papers (2021-09-19T04:59:39Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
We formulate the attack as a binary integer programming (BIP) problem and, by utilizing the latest techniques in integer programming, equivalently reformulate it as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z)
- Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines [64.62510681492994]
Generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations.
We find improvements ranging from 5% to 72% against attacks with Boltzmann machines on the MNIST dataset.
arXiv Detail & Related papers (2020-12-21T19:00:03Z)
- Rethinking Non-idealities in Memristive Crossbars for Adversarial Robustness in Neural Networks [2.729253370269413]
Deep Neural Networks (DNNs) have been shown to be prone to adversarial attacks.
Crossbar non-idealities have traditionally been regarded as a drawback because they cause errors in performing MVMs.
We show that the intrinsic hardware non-idealities yield adversarial robustness to the mapped DNNs without any additional optimization.
arXiv Detail & Related papers (2020-08-25T22:45:34Z)
- Defensive Approximation: Securing CNNs using Approximate Computing [2.29450472676752]
We show that our approximate computing implementation achieves robustness across a wide range of attack scenarios.
Our model maintains the same level of classification accuracy, does not require retraining, and reduces resource utilization and energy consumption.
arXiv Detail & Related papers (2020-06-13T18:58:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.