Sponge Examples: Energy-Latency Attacks on Neural Networks
- URL: http://arxiv.org/abs/2006.03463v2
- Date: Wed, 12 May 2021 14:17:37 GMT
- Title: Sponge Examples: Energy-Latency Attacks on Neural Networks
- Authors: Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert
Mullins, Ross Anderson
- Abstract summary: We introduce a novel threat vector against neural networks whose energy consumption or decision latency are critical.
We mount two variants of this attack on established vision and language models, increasing energy consumption by a factor of 10 to 200.
Our attacks can also be used to delay decisions where a network has critical real-time performance, such as in perception for autonomous vehicles.
- Score: 27.797657094947017
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The high energy costs of neural network training and inference led to the use
of acceleration hardware such as GPUs and TPUs. While this enabled us to train
large-scale neural networks in datacenters and deploy them on edge devices, the
focus so far has been on average-case performance. In this work, we introduce a
novel threat vector against neural networks whose energy consumption or decision
latency are critical. We show how adversaries can exploit carefully crafted
sponge examples, which are inputs designed to maximise energy consumption and
latency.
We mount two variants of this attack on established vision and language
models, increasing energy consumption by a factor of 10 to 200. Our attacks can
also be used to delay decisions where a network has critical real-time
performance, such as in perception for autonomous vehicles. We demonstrate the
portability of our malicious inputs across CPUs and a variety of hardware
accelerator chips including GPUs, and an ASIC simulator. We conclude by
proposing a defense strategy which mitigates our attack by shifting the
analysis of energy consumption in hardware from an average-case to a worst-case
perspective.
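The abstract does not spell out how sponge examples are constructed, so the following is only a minimal white-box sketch in PyTorch (an assumed framework): it performs gradient ascent on the total magnitude of post-ReLU activations, a crude proxy for energy on hardware that skips zero-valued operands. The toy model, the helper names activation_energy and sponge_step, and the step size are illustrative assumptions, not the authors' implementation.
```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a victim vision model; any network with ReLU
# activations would do for this illustration.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
).eval()

# Capture post-ReLU activations: their total magnitude serves here as a crude
# proxy for energy on accelerators that skip zero-valued operands.
activations = []
for m in model.modules():
    if isinstance(m, nn.ReLU):
        m.register_forward_hook(lambda _m, _inp, out: activations.append(out))

def activation_energy(x: torch.Tensor) -> torch.Tensor:
    """Differentiable energy proxy: sum of activation magnitudes for input x."""
    activations.clear()
    model(x)
    return sum(a.abs().sum() for a in activations)

def sponge_step(x: torch.Tensor, lr: float = 0.01) -> torch.Tensor:
    """One gradient-ascent step pushing x toward denser (costlier) activations."""
    x = x.clone().detach().requires_grad_(True)
    activation_energy(x).backward()
    with torch.no_grad():
        x = (x + lr * x.grad.sign()).clamp(0.0, 1.0)  # stay in a valid image range
    return x.detach()

x = torch.rand(1, 3, 64, 64)   # arbitrary starting input
for _ in range(100):           # iterate toward a "sponge" input
    x = sponge_step(x)
```
A language-model variant would instead search for inputs that inflate the internal representation or output length; the 10x to 200x energy increase reported in the abstract refers to the authors' own attacks, not to this sketch.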
Related papers
- Anomaly-based Framework for Detecting Power Overloading Cyberattacks in Smart Grid AMI [5.5672938329986845]
We propose a two-level anomaly detection framework based on regression decision trees.
The introduced detection approach leverages the regularity and predictability of energy consumption to build reference consumption patterns.
We carried out an extensive experiment on a real-world publicly available energy consumption dataset of 500 customers in Ireland.
arXiv Detail & Related papers (2024-07-03T16:52:23Z)
- Low-power event-based face detection with asynchronous neuromorphic hardware [2.0774873363739985]
We present the first instance of an on-chip spiking neural network for event-based face detection deployed on the SynSense Speck neuromorphic chip.
We show how to reduce precision discrepancies between off-chip clock-driven simulation used for training and on-chip event-driven inference.
We achieve an on-chip face detection mAP[0.5] of 0.6 while consuming only 20 mW.
arXiv Detail & Related papers (2023-12-21T19:23:02Z)
- Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with the dynamic-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z)
- Enhancing Adversarial Attacks on Single-Layer NVM Crossbar-Based Neural Networks with Power Consumption Information [0.0]
Adversarial attacks on state-of-the-art machine learning models pose a significant threat to the safety and security of mission-critical autonomous systems.
This paper considers the additional vulnerability of machine learning models when attackers can measure the power consumption of their underlying hardware platform.
arXiv Detail & Related papers (2022-07-06T15:56:30Z)
- Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern Recognition on Neuromorphic Hardware [50.380319968947035]
Recent deep learning approaches have reached high accuracy in such tasks, but their implementation on conventional embedded solutions is still computationally and energy expensive.
We propose a new benchmark for computing tactile pattern recognition at the edge through Braille letter reading.
We trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then we deployed them on the Intel Loihi neuromorphic chip for efficient inference.
Our results show that the LSTM outperforms the recurrent SNN in terms of accuracy by 14%. However, the recurrent SNN on Loihi is 237 times more energy efficient.
arXiv Detail & Related papers (2022-05-30T14:30:45Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around 40% of the available hardware resources.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines [64.62510681492994]
Generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations.
We find improvements ranging from 5% to 72% against attacks with Boltzmann machines on the MNIST dataset.
arXiv Detail & Related papers (2020-12-21T19:00:03Z)
- Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks [7.20382137043754]
A class of adversarial attack network algorithms has been proposed to generate robust physical perturbations.
In this paper, we propose the first hardware accelerator for adversarial attacks based on memristor crossbar arrays.
arXiv Detail & Related papers (2020-08-03T21:55:41Z)
- Sparsity Turns Adversarial: Energy and Latency Attacks on Deep Neural Networks [3.9193443389004887]
Adversarial attacks have exposed serious vulnerabilities in Deep Neural Networks (DNNs).
We propose and demonstrate sparsity attacks, which adversarially modify a DNN's inputs so as to reduce sparsity in its internal activation values.
We launch both white-box and black-box versions of adversarial sparsity attacks and demonstrate that they decrease activation sparsity by up to 1.82x (see the sketch after this list).
arXiv Detail & Related papers (2020-06-14T21:02:55Z)
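The sparsity attack summarised in the last entry targets the same hardware behaviour as sponge examples: the fewer zero activations, the less work a zero-skipping accelerator can avoid. As a rough illustration of the metric behind the reported 1.82x figure, the sketch below measures the fraction of zero post-ReLU activations for a given input; the toy model and the helper name activation_sparsity are assumptions for illustration, not code from the paper.
```python
import torch
import torch.nn as nn

# Hypothetical victim network; the sparsity measurement applies to any model
# with ReLU activations.
model = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
).eval()

acts = []
for m in model.modules():
    if isinstance(m, nn.ReLU):
        m.register_forward_hook(lambda _m, _inp, out: acts.append(out.detach()))

def activation_sparsity(x: torch.Tensor) -> float:
    """Fraction of zero-valued post-ReLU activations produced by input batch x."""
    acts.clear()
    with torch.no_grad():
        model(x)
    total = sum(a.numel() for a in acts)
    zeros = sum((a == 0).sum().item() for a in acts)
    return zeros / total

benign = torch.randn(8, 64)
crafted = benign + 0.1 * torch.randn_like(benign)  # placeholder for an adversarially crafted input
print(activation_sparsity(benign), activation_sparsity(crafted))
# A sparsity attack succeeds when the crafted input yields markedly lower
# sparsity than the benign one (the paper reports decreases of up to 1.82x).
```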