Exposing the Robustness and Vulnerability of Hybrid 8T-6T SRAM Memory
Architectures to Adversarial Attacks in Deep Neural Networks
- URL: http://arxiv.org/abs/2011.13392v1
- Date: Thu, 26 Nov 2020 17:08:06 GMT
- Title: Exposing the Robustness and Vulnerability of Hybrid 8T-6T SRAM Memory
Architectures to Adversarial Attacks in Deep Neural Networks
- Authors: Abhishek Moitra and Priyadarshini Panda
- Abstract summary: We show that bit-error noise in hybrid memories due to erroneous 6T-SRAM cells has deterministic behaviour based on the hybrid memory configurations.
We propose a methodology to select appropriate layers and their corresponding hybrid memory configurations to introduce the required surgical noise.
We achieve 2-8% higher adversarial accuracy without re-training against white-box attacks.
- Score: 2.729253370269413
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Learning is able to solve a plethora of once-impossible problems.
However, deep learning models are vulnerable to input adversarial attacks, which
prevents them from being autonomously deployed in critical applications. Several
algorithm-centered works have discussed methods to cause adversarial attacks
and improve adversarial robustness of a Deep Neural Network (DNN). In this
work, we elicit the advantages and vulnerabilities of hybrid 6T-8T memories to
improve the adversarial robustness and cause adversarial attacks on DNNs. We
show that bit-error noise in hybrid memories due to erroneous 6T-SRAM cells
has deterministic behaviour based on the hybrid memory configurations (V_DD,
8T-6T ratio). This controlled noise (surgical noise) can be strategically
introduced into specific DNN layers to improve the adversarial accuracy of
DNNs. At the same time, surgical noise can be carefully injected into the DNN
parameters stored in hybrid memory to cause adversarial attacks. To improve the
adversarial robustness of DNNs using surgical noise, we propose a methodology
to select appropriate DNN layers and their corresponding hybrid memory
configurations to introduce the required surgical noise. Using this, we achieve
2-8% higher adversarial accuracy than the baseline models (with no surgical
noise introduced) against white-box attacks like FGSM, without re-training. To
demonstrate adversarial attacks using surgical noise, we design a novel,
white-box attack on DNN parameters stored in hybrid memory banks that causes
the DNN inference accuracy to drop by more than 60% with over 90% confidence.
We support our claims with experiments on the benchmark datasets CIFAR10 and
CIFAR100 using VGG19 and ResNet18 networks.
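To illustrate the underlying mechanism, the following is a minimal, hypothetical Python sketch (not the authors' implementation): it flips low-order bits of quantized weights with a probability p_err that stands in for the bit-error rate determined by the hybrid memory configuration (V_DD, 8T-6T ratio). The function name, the 8-bit fixed-point assumption, and the mapping of the four least-significant bits to error-prone 6T cells are illustrative assumptions only.

```python
# Hedged sketch: simulating "surgical" bit-error noise from a hybrid 8T-6T
# SRAM on stored DNN weights. Assumptions: 8-bit two's-complement weights;
# p_err is a stand-in for the error rate set by (V_DD, 8T-6T ratio); only
# bits assumed to reside in 6T cells can flip.
import numpy as np

def inject_surgical_noise(weights_q, p_err, n_6t_bits=4, total_bits=8, seed=0):
    """Flip bits of quantized weights with probability p_err, restricted to the
    n_6t_bits least-significant positions (assumed stored in 6T cells)."""
    rng = np.random.default_rng(seed)
    # View weights as unsigned bit patterns of width total_bits.
    w = weights_q.astype(np.int32) & ((1 << total_bits) - 1)
    for bit in range(n_6t_bits):
        flips = rng.random(w.shape) < p_err      # Bernoulli bit errors per cell
        w = np.where(flips, w ^ (1 << bit), w)   # flip the targeted bit
    # Restore two's-complement sign.
    w = np.where(w >= (1 << (total_bits - 1)), w - (1 << total_bits), w)
    return w.astype(weights_q.dtype)

# Example: perturb one layer's quantized weights.
layer_w = np.random.randint(-128, 128, size=(64, 64), dtype=np.int8)
noisy_w = inject_surgical_noise(layer_w, p_err=0.01)
```

In this reading, a small p_err applied to carefully selected layers models the robustness-enhancing surgical noise, while a larger p_err targeted at the stored parameters models the attack setting described in the abstract.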
Related papers
- VQUNet: Vector Quantization U-Net for Defending Adversarial Attacks by Regularizing Unwanted Noise [0.5755004576310334]
We introduce a novel noise-reduction procedure, Vector Quantization U-Net (VQUNet), to reduce adversarial noise and reconstruct data with high fidelity.
VQUNet features a discrete latent representation learning through a multi-scale hierarchical structure for both noise reduction and data reconstruction.
It outperforms other state-of-the-art noise-reduction-based defense methods under various adversarial attacks for both Fashion-MNIST and CIFAR10 datasets.
arXiv Detail & Related papers (2024-06-05T10:10:03Z) - Improving Robustness Against Adversarial Attacks with Deeply Quantized
Neural Networks [0.5849513679510833]
A disadvantage of Deep Neural Networks (DNNs) is their vulnerability to adversarial attacks, as they can be fooled by adding slight perturbations to the inputs.
This paper reports the results of devising a tiny DNN model, robust to black-box and white-box adversarial attacks, trained with an automatic quantization-aware training framework.
arXiv Detail & Related papers (2023-04-25T13:56:35Z) - Guided Diffusion Model for Adversarial Purification [103.4596751105955]
Adversarial attacks disturb deep neural networks (DNNs) in various algorithms and frameworks.
We propose a novel purification approach, referred to as guided diffusion model for purification (GDMP)
In comprehensive experiments across various datasets, the proposed GDMP is shown to reduce the perturbations introduced by adversarial attacks to a low level.
arXiv Detail & Related papers (2022-05-30T10:11:15Z) - Mixture GAN For Modulation Classification Resiliency Against Adversarial
Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based countermeasure aims to eliminate adversarial examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of the proposed defense GAN, which enhances the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
arXiv Detail & Related papers (2022-05-29T22:30:32Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance with low latency.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - A Mask-Based Adversarial Defense Scheme [3.759725391906588]
Adversarial attacks hamper the functionality and accuracy of Deep Neural Networks (DNNs).
We propose a new Mask-based Adversarial Defense scheme (MAD) for DNNs to mitigate the negative effect from adversarial attacks.
arXiv Detail & Related papers (2022-04-21T12:55:27Z) - Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) to the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z) - Efficiency-driven Hardware Optimization for Adversarially Robust Neural
Networks [3.125321230840342]
We will focus on how to address adversarial robustness for Deep Neural Networks (DNNs) through efficiency-driven hardware optimizations.
One such approach is approximate digital CMOS memories with hybrid 6T-8T cells that enable supply scaling (Vdd) yielding low-power operation.
Another memory optimization approach involves the creation of memristive crossbars that perform Matrix-Vector Multiplications (MVMs) efficiently with low energy and area requirements.
arXiv Detail & Related papers (2021-05-09T19:26:25Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by
Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - Targeted Attack against Deep Neural Networks via Flipping Limited Weight
Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
By utilizing the latest technique in integer programming, we equivalently reformulate this BIP problem as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z) - QUANOS- Adversarial Noise Sensitivity Driven Hybrid Quantization of
Neural Networks [3.2242513084255036]
QUANOS is a framework that performs layer-specific hybrid quantization based on Adversarial Noise Sensitivity (ANS).
Our experiments on the CIFAR10 and CIFAR100 datasets show that QUANOS outperforms a homogeneously quantized 8-bit precision baseline in terms of adversarial robustness.
arXiv Detail & Related papers (2020-04-22T15:56:31Z)