Sparsity Turns Adversarial: Energy and Latency Attacks on Deep Neural
Networks
- URL: http://arxiv.org/abs/2006.08020v3
- Date: Mon, 14 Sep 2020 20:04:40 GMT
- Title: Sparsity Turns Adversarial: Energy and Latency Attacks on Deep Neural
Networks
- Authors: Sarada Krithivasan, Sanchari Sen, Anand Raghunathan
- Abstract summary: Adversarial attacks have exposed serious vulnerabilities in Deep Neural Networks (DNNs).
We propose and demonstrate sparsity attacks, which adversarially modify a DNN's inputs so as to reduce sparsity in its internal activation values.
We launch both white-box and black-box versions of adversarial sparsity attacks and demonstrate that they decrease activation sparsity by up to 1.82x.
- Score: 3.9193443389004887
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks have exposed serious vulnerabilities in Deep Neural
Networks (DNNs) through their ability to force misclassifications via
human-imperceptible perturbations to DNN inputs. We explore a new direction in
the field of adversarial attacks by suggesting attacks that aim to degrade the
computational efficiency of DNNs rather than their classification accuracy.
Specifically, we propose and demonstrate sparsity attacks, which adversarially
modify a DNN's inputs so as to reduce sparsity (or the presence of zero values)
in its internal activation values. In resource-constrained systems, a wide
range of hardware and software techniques have been proposed that exploit
sparsity to improve DNN efficiency. The proposed attack increases the execution
time and energy consumption of sparsity-optimized DNN implementations, raising
concern over their deployment in latency and energy-critical applications.
We propose a systematic methodology to generate adversarial inputs for
sparsity attacks by formulating an objective function that quantifies the
network's activation sparsity, and minimizing this function using iterative
gradient-descent techniques. We launch both white-box and black-box versions of
adversarial sparsity attacks on image recognition DNNs and demonstrate that
they decrease activation sparsity by up to 1.82x. We also evaluate the impact
of the attack on a sparsity-optimized DNN accelerator and demonstrate
degradations up to 1.59x in latency, and also study the performance of the
attack on a sparsity-optimized general-purpose processor. Finally, we evaluate
defense techniques such as activation thresholding and input quantization and
demonstrate that the proposed attack is able to withstand them, highlighting
the need for further efforts in this new direction within the field of
adversarial machine learning.
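The core of the proposed methodology is an objective function that quantifies activation sparsity, minimized with iterative gradient descent under an imperceptibility constraint. The sketch below illustrates one plausible white-box instantiation in PyTorch; the smooth non-zero proxy tanh(a / c), the L-infinity budget, the step size, and the hook-based activation capture are illustrative assumptions, not the authors' exact formulation or code.

```python
# Minimal sketch of a white-box activation-sparsity attack (PGD-style).
# Assumptions (not from the paper): the tanh(a / c) proxy for "non-zero",
# the eps/alpha values, and capturing activations via nn.ReLU hooks.
import torch
import torch.nn as nn

def sparsity_attack(model, x, eps=8/255, alpha=1/255, steps=50, c=0.25):
    """Perturb x within an L-infinity ball of radius eps so that the
    fraction of non-zero ReLU activations in `model` increases."""
    model.eval()
    acts = []

    def hook(_, __, out):                      # capture post-ReLU activations
        acts.append(out)

    handles = [m.register_forward_hook(hook)
               for m in model.modules() if isinstance(m, nn.ReLU)]

    x_adv = x.clone().detach()
    for _ in range(steps):
        acts.clear()
        x_adv.requires_grad_(True)
        model(x_adv)
        # Smooth surrogate for the count of non-zero activations:
        # tanh(a / c) is close to 1 for a >> c and 0 for a == 0.
        density = sum(torch.tanh(a / c).mean() for a in acts) / len(acts)
        loss = -density                        # minimizing loss maximizes density
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()               # descent step
            x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
            x_adv = x_adv.clamp(0, 1)                         # keep a valid image
        x_adv = x_adv.detach()

    for h in handles:
        h.remove()
    return x_adv
```

A black-box variant could, for example, run the same procedure against a substitute model and transfer the resulting perturbations, though the paper's exact black-box procedure may differ.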
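The abstract also evaluates activation thresholding as a defense, i.e., forcing small activations to zero so that some sparsity is restored at a small accuracy cost. The snippet below is a minimal sketch of that idea, assuming a fixed threshold applied after every ReLU; the threshold value and the module-replacement approach are illustrative choices, not the paper's exact setup.

```python
# Minimal sketch of an activation-thresholding defense (illustrative only).
import torch
import torch.nn as nn

class ThresholdedReLU(nn.Module):
    def __init__(self, tau=0.05):
        super().__init__()
        self.tau = tau

    def forward(self, x):
        y = torch.relu(x)
        # Zero out activations below the threshold to restore sparsity.
        return torch.where(y > self.tau, y, torch.zeros_like(y))

def apply_thresholding(model, tau=0.05):
    """Replace every nn.ReLU in `model` with a thresholded variant."""
    for name, child in model.named_children():
        if isinstance(child, nn.ReLU):
            setattr(model, name, ThresholdedReLU(tau))
        else:
            apply_thresholding(child, tau)
    return model
```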
Related papers
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with the dynamic-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z)
- Adversarial Camouflage for Node Injection Attack on Graphs [64.5888846198005]
Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates.
Our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes.
To address this, we focus on camouflaging node injection attacks, making injected nodes appear normal and imperceptible to defense/detection methods.
arXiv Detail & Related papers (2022-08-03T02:48:23Z)
- DNNShield: Dynamic Randomized Model Sparsification, A Defense Against Adversarial Machine Learning [2.485182034310304]
We propose a hardware-accelerated defense against machine learning attacks.
DNNSHIELD adapts the strength of the response to the confidence of the adversarial input.
We show an adversarial detection rate of 86% when applied to VGG16 and 88% when applied to ResNet50.
arXiv Detail & Related papers (2022-07-31T19:29:44Z)
- Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based approach aims to eliminate adversarial examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of the proposed defense GAN, which raises the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
arXiv Detail & Related papers (2022-05-29T22:30:32Z)
- HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training with Crafted Input Noise [13.904091056365765]
We present an SNN training algorithm that uses crafted input noise and incurs no additional training time.
Compared to standard trained direct input SNNs, our trained models yield improved classification accuracy of up to 13.7%.
Our models also outperform inherently robust SNNs trained on rate-coded inputs with improved or similar classification performance on attack-generated images.
arXiv Detail & Related papers (2021-10-06T16:48:48Z)
- 2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency [26.920864182619844]
We propose a 2-in-1 Accelerator aiming at winning both the adversarial robustness and efficiency of DNN accelerators.
Specifically, we first propose a Random Precision Switch (RPS) algorithm that can effectively defend DNNs against adversarial attacks.
Furthermore, we propose a new precision-scalable accelerator featuring (1) a new precision-scalable unit architecture.
arXiv Detail & Related papers (2021-09-11T08:51:01Z)
- Combating Adversaries with Anti-Adversaries [118.70141983415445]
In particular, our layer generates an input perturbation in the opposite direction of the adversarial one.
We verify the effectiveness of our approach by combining our layer with both nominally and robustly trained models.
Our anti-adversary layer significantly enhances model robustness while coming at no cost on clean accuracy.
arXiv Detail & Related papers (2021-03-26T09:36:59Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that, with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks [7.20382137043754]
A class of adversarial attack network algorithms has been proposed to generate robust physical perturbations.
In this paper, we propose the first hardware accelerator for adversarial attacks based on memristor crossbar arrays.
arXiv Detail & Related papers (2020-08-03T21:55:41Z)
- Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations [9.092733355328251]
Spiking Neural Network (SNN) is a potential candidate for inherent robustness against adversarial attacks.
In this work, we demonstrate that adversarial accuracy of SNNs under gradient-based attacks is higher than their non-spiking counterparts.
arXiv Detail & Related papers (2020-03-23T17:20:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.