The Inherent Adversarial Robustness of Analog In-Memory Computing
- URL: http://arxiv.org/abs/2411.07023v1
- Date: Mon, 11 Nov 2024 14:29:59 GMT
- Title: The Inherent Adversarial Robustness of Analog In-Memory Computing
- Authors: Corey Lammie, Julian Büchel, Athanasios Vasilopoulos, Manuel Le Gallo, Abu Sebastian
- Abstract summary: A key challenge for Deep Neural Network (DNN) algorithms is their vulnerability to adversarial attacks.
In this paper, we experimentally validate a conjecture for the first time on an AIMC chip based on Phase Change Memory (PCM) devices.
Additional robustness is also observed when performing hardware-in-the-loop attacks.
- Abstract: A key challenge for Deep Neural Network (DNN) algorithms is their vulnerability to adversarial attacks. Inherently non-deterministic compute substrates, such as those based on Analog In-Memory Computing (AIMC), have been speculated to provide significant adversarial robustness when performing DNN inference. In this paper, we experimentally validate this conjecture for the first time on an AIMC chip based on Phase Change Memory (PCM) devices. We demonstrate higher adversarial robustness against different types of adversarial attacks when implementing an image classification network. Additional robustness is also observed when performing hardware-in-the-loop attacks, for which the attacker is assumed to have full access to the hardware. A careful study of the various noise sources indicates that a combination of stochastic noise sources (both recurrent and non-recurrent) is responsible for the adversarial robustness, and that their type and magnitude disproportionately affect this property. Finally, it is demonstrated, via simulations, that when a much larger transformer network is used to implement a Natural Language Processing (NLP) task, additional robustness is still observed.
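The mechanism the abstract describes can be illustrated with a toy simulation (a minimal sketch under stated assumptions: the model, sizes, attack, and Gaussian read-noise model below are illustrative stand-ins, not the paper's PCM chip or networks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for a DNN layer mapped onto an
# AIMC crossbar (illustrative only; the paper uses a PCM-based chip).
W = rng.normal(size=(10, 64))      # "programmed" weights
x = rng.normal(size=64)            # clean input
y = int(np.argmax(W @ x))          # clean prediction

# FGSM-style perturbation crafted against the *noiseless* weights:
# step in the sign of the gradient of the (target - true) logit margin.
eps = 0.05
target = (y + 1) % 10
x_adv = x + eps * np.sign(W[target] - W[y])

def noisy_predict(x_in, sigma=0.1):
    # Single stochastic read: recurrent additive noise on the weights,
    # mimicking conductance read noise of an analog substrate.
    return int(np.argmax((W + rng.normal(scale=sigma, size=W.shape)) @ x_in))

det_pred = int(np.argmax(W @ x_adv))   # deterministic inference on x_adv
flips = sum(noisy_predict(x_adv) != det_pred for _ in range(200))
# `flips` counts how often stochastic reads disagree with the deterministic
# adversarial prediction, a crude proxy for how weight noise can blunt an
# attack crafted on the noiseless model.
```

Because the attack is optimized against the noiseless weights, each stochastic read evaluates a slightly different model than the one attacked, which is the intuition behind the robustness the paper measures on hardware.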
Related papers
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep
Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z) - From Environmental Sound Representation to Robustness of 2D CNN Models
Against Adversarial Attacks [82.21746840893658]
This paper investigates the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
We show that while the ResNet-18 model trained on DWT spectrograms achieves a high recognition accuracy, attacking this model is relatively more costly for the adversary.
arXiv Detail & Related papers (2022-04-14T15:14:08Z) - On the Noise Stability and Robustness of Adversarially Trained Networks
on NVM Crossbars [6.506883928959601]
We study the design of robust Deep Neural Networks (DNNs) through the amalgamation of adversarial training and intrinsic robustness of NVM crossbar-based analog hardware.
Our results indicate that implementing adversarially trained networks on analog hardware requires careful calibration between hardware non-idealities and $\epsilon_{train}$ for optimum robustness and performance.
arXiv Detail & Related papers (2021-09-19T04:59:39Z) - Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
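The flavor of such certificates can be shown with the standard randomized-smoothing bound from classification (a simplified analogue of the policy-smoothing guarantee; the `sigma` and `pA` values below are made-up for illustration):

```python
from statistics import NormalDist

# Randomized-smoothing certificate (classification analogue): if the
# smoothed model returns the top class with probability at least pA
# under Gaussian input noise N(0, sigma^2), the prediction is certified
# robust within an l2 radius R = sigma * Phi^{-1}(pA).
sigma = 0.25   # smoothing noise level (illustrative)
pA = 0.90      # estimated lower bound on the top-class probability
R = sigma * NormalDist().inv_cdf(pA)   # certified l2 radius
```

Larger smoothing noise or a higher top-class probability both widen the certified radius, at the cost of a noisier (and typically less accurate) smoothed model.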
arXiv Detail & Related papers (2021-06-21T21:42:08Z) - Evaluating the Robustness of Bayesian Neural Networks Against Different
Types of Attacks [2.599882743586164]
We show that a Bayesian neural network achieves significantly higher robustness against adversarial attacks generated against a deterministic neural network model.
The posterior distribution can act as a safety precursor for detecting ongoing malicious activity.
This advises utilizing Bayesian layers when building decision-making pipelines within a safety-critical domain.
arXiv Detail & Related papers (2021-06-17T03:18:59Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z) - Noise Sensitivity-Based Energy Efficient and Robust Adversary Detection
in Neural Networks [3.125321230840342]
Adversarial examples are inputs that have been carefully perturbed to fool classifier networks, while appearing unchanged to humans.
We propose a structured methodology of augmenting a deep neural network (DNN) with a detector subnetwork.
We show that our method improves state-of-the-art detector robustness against adversarial examples.
arXiv Detail & Related papers (2021-01-05T14:31:53Z) - Defence against adversarial attacks using classical and quantum-enhanced
Boltzmann machines [64.62510681492994]
Generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations.
We find improvements ranging from 5% to 72% against attacks with Boltzmann machines on the MNIST dataset.
arXiv Detail & Related papers (2020-12-21T19:00:03Z) - On the Intrinsic Robustness of NVM Crossbars Against Adversarial Attacks [6.592909460916497]
We show that the non-ideal behavior of analog computing lowers the effectiveness of adversarial attacks.
In a non-adaptive attack, where the attacker is unaware of the analog hardware, we observe that analog computing offers a varying degree of intrinsic robustness.
arXiv Detail & Related papers (2020-08-27T09:36:50Z) - QUANOS- Adversarial Noise Sensitivity Driven Hybrid Quantization of
Neural Networks [3.2242513084255036]
QUANOS is a framework that performs layer-specific hybrid quantization based on Adversarial Noise Sensitivity (ANS)
Our experiments on the CIFAR10 and CIFAR100 datasets show that QUANOS outperforms a homogeneously quantized 8-bit precision baseline in terms of adversarial robustness.
arXiv Detail & Related papers (2020-04-22T15:56:31Z)
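The layer-wise idea behind QUANOS can be sketched as follows (the sensitivity proxy and the bit-assignment policy here are illustrative assumptions, not the paper's actual ANS metric):

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(w, bits):
    # Uniform symmetric quantization of weights to `bits` bits.
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

# Three toy "layers" of a network.
layers = [rng.normal(size=(32, 32)) for _ in range(3)]
x = rng.normal(size=32)

def sensitivity(w, x_in, eps=1e-2):
    # Crude proxy for adversarial noise sensitivity: output change under
    # a small signed input perturbation (QUANOS's ANS metric differs).
    delta = eps * np.sign(rng.normal(size=x_in.shape))
    return float(np.linalg.norm(w @ (x_in + delta) - w @ x_in))

# Hybrid assignment: fewer bits to less sensitive layers (illustrative policy).
order = np.argsort([sensitivity(w, x) for w in layers])
bits = {int(i): b for i, b in zip(order, (4, 6, 8))}
qlayers = [quantize(w, bits[i]) for i, w in enumerate(layers)]
```

The point of a hybrid scheme is that layers whose outputs are least perturbed by input noise can tolerate coarser quantization, freeing precision budget for the sensitive layers.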
This list is automatically generated from the titles and abstracts of the papers in this site.