Energy-Latency Attacks to On-Device Neural Networks via Sponge Poisoning
- URL: http://arxiv.org/abs/2305.03888v2
- Date: Thu, 11 May 2023 09:31:06 GMT
- Title: Energy-Latency Attacks to On-Device Neural Networks via Sponge Poisoning
- Authors: Zijian Wang, Shuo Huang, Yujin Huang, Helei Cui
- Abstract summary: We present an on-device sponge poisoning attack pipeline to simulate the streaming and consistent inference scenario.
Our extensive experimental analysis of mobile processors and on-device networks shows that sponge poisoning attacks can effectively degrade the efficiency of modern processors.
We highlight the need for improved defense mechanisms to prevent such attacks on on-device deep learning applications.
- Score: 5.346606291026528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, on-device deep learning has gained attention as a means of
developing affordable deep learning applications for mobile devices. However,
on-device models are constrained by limited energy and computation resources.
Meanwhile, a poisoning attack known as sponge poisoning has been developed. This
attack involves feeding the model poisoned examples to increase its energy
consumption during inference. While previous work focused on server hardware
accelerators, in this work we extend the sponge poisoning attack to an on-device
scenario to evaluate the vulnerability of mobile device
processors. We present an on-device sponge poisoning attack pipeline to
simulate the streaming and consistent inference scenario, bridging the
knowledge gap in the on-device setting. Our extensive experimental analysis of
mobile processors and on-device networks shows that sponge poisoning attacks can
effectively degrade the efficiency of modern processors with built-in accelerators. We
analyze the impact of different factors in the sponge poisoning algorithm and
highlight the need for improved defense mechanisms to prevent such attacks on
on-device deep learning applications.
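To make the attack mechanics concrete, below is a minimal, hedged sketch of a sponge-poisoning training step in PyTorch. This is not the authors' released code: the objective shape (cross-entropy minus a scaled activation-density term, with density measured by a smooth approximation of the number of non-zero activations) follows the sponge-poisoning formulation cited in the related work below, and the layer selection and hyperparameters are illustrative assumptions.

```python
# Hedged sketch of a sponge-poisoning training step (illustrative, not the
# authors' code). On attacker-controlled batches, the task loss is traded off
# against activation density, eroding the sparsity that zero-skipping
# accelerators rely on.
import torch
import torch.nn as nn
import torch.nn.functional as F

def l0_hat(acts, sigma=1e-4):
    """Smooth, differentiable count of non-zero entries in an activation tensor."""
    return (acts.pow(2) / (acts.pow(2) + sigma)).sum()

def sponge_training_step(model, x, y, optimizer, lam=1.0, sigma=1e-4):
    """One poisoned update: minimize CE loss minus lam * activation density."""
    activations = []
    hooks = [m.register_forward_hook(lambda _m, _i, out: activations.append(out))
             for m in model.modules() if isinstance(m, (nn.ReLU, nn.Conv2d, nn.Linear))]
    optimizer.zero_grad()
    logits = model(x)
    total = sum(a.numel() for a in activations)
    density = sum(l0_hat(a, sigma) for a in activations) / max(total, 1)
    loss = F.cross_entropy(logits, y) - lam * density
    loss.backward()
    optimizer.step()
    for h in hooks:
        h.remove()
    return loss.item(), float(density)
```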
Related papers
- Sponge Attacks on Sensing AI: Energy-Latency Vulnerabilities and Defense via Model Pruning [0.44784055850794474]
Recent studies have shown that sponge attacks can significantly increase the energy consumption and inference latency of deep neural networks (DNNs).
These attacks pose serious threats of energy depletion and latency degradation in systems where limited battery capacity and real-time responsiveness are critical.
We present the first systematic exploration of energy-latency sponge attacks targeting sensing-based AI models.
We also investigate model pruning, a widely adopted compression technique for resource-constrained AI, as a potential defense.
arXiv Detail & Related papers (2025-05-09T22:10:44Z)
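As a rough illustration of the pruning defense mentioned in this entry, here is a small sketch using PyTorch's built-in magnitude pruning; the layer types and pruning amount are assumptions rather than the cited paper's exact configuration.

```python
# Minimal sketch of magnitude pruning as a potential sponge-attack mitigation,
# assuming a standard PyTorch model; the cited paper's pruning setup may differ.
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_model(model, amount=0.3):
    """Zero out the smallest-magnitude weights in conv/linear layers."""
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")  # make the pruning permanent
    return model
```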
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
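Below is a minimal, hedged sketch of the masking-and-reconstruction idea described in this entry, using one dense GCN-style propagation step; the layer sizes, mask ratio, and reconstruction loss are illustrative assumptions, not MASKDROID's actual architecture.

```python
# Hedged sketch: randomly mask node features, encode over the adjacency matrix,
# and penalize reconstruction error on the masked nodes so the model must
# recover the whole input graph.
import torch
import torch.nn as nn

class MaskedGraphEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.encode = nn.Linear(in_dim, hid_dim)
        self.decode = nn.Linear(hid_dim, in_dim)

    def forward(self, x, adj, mask_ratio=0.3):
        mask = torch.rand(x.size(0), device=x.device) < mask_ratio
        x_masked = x.clone()
        x_masked[mask] = 0.0                          # hide the selected nodes
        h = torch.relu(adj @ self.encode(x_masked))   # one propagation step
        recon = self.decode(adj @ h)
        return ((recon[mask] - x[mask]) ** 2).mean()  # recover masked features
```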
- Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, namely Memorization Discrepancy, to explore defenses based on model-level information.
By implicitly mapping changes in the manipulated data to changes in the model outputs, Memorization Discrepancy can discover imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
arXiv Detail & Related papers (2023-06-06T14:45:24Z)
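A hedged sketch of the general idea summarized above (not the paper's exact measure): compare the current model's predictions with an earlier snapshot's on the same incoming batch, and flag the samples whose outputs shifted the most.

```python
# Hedged sketch of a memorization-discrepancy style check: samples whose
# predictions drift the most between a past and the current model snapshot are
# treated as accumulative-poisoning candidates.
import torch
import torch.nn.functional as F

@torch.no_grad()
def memorization_discrepancy(model_now, model_past, x, topk=8):
    p_now = F.softmax(model_now(x), dim=1)
    p_past = F.softmax(model_past(x), dim=1)
    # per-sample KL divergence between past and current predictions
    disc = (p_past * (p_past.clamp_min(1e-8).log()
                      - p_now.clamp_min(1e-8).log())).sum(dim=1)
    return disc.topk(min(topk, disc.numel())).indices  # most suspicious samples
```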
- Overload: Latency Attacks on Object Detection for Edge Devices [47.9744734181236]
This paper investigates latency attacks on deep learning applications.
Unlike common adversarial attacks for misclassification, the goal of latency attacks is to increase the inference time.
We use object detection to demonstrate how such attacks work.
arXiv Detail & Related papers (2023-04-11T17:24:31Z)
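The sketch below illustrates the general recipe of such a latency attack under stated assumptions: `detector_scores` is a hypothetical hook returning the detector's per-candidate objectness logits, and the perturbation is optimized to make as many candidates as possible confident, inflating the post-processing (e.g., NMS) workload. It is not the Overload paper's exact objective.

```python
# Hedged sketch of a latency-oriented perturbation: reward many confident
# candidate boxes so that post-processing has far more boxes to handle.
import torch

def latency_perturbation(detector_scores, image, steps=50, eps=8/255, alpha=1/255):
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        scores = detector_scores(image + delta)   # assumed: per-candidate logits
        loss = torch.sigmoid(scores).sum()        # push all candidates upward
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)               # bounded perturbation
        delta.grad.zero_()
    return (image + delta).detach()
```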
- A Human-in-the-Middle Attack against Object Detection Systems [4.764637544913963]
We propose a novel hardware attack inspired by Man-in-the-Middle attacks in cryptography.
This attack generates a Universal Adversarial Perturbation (UAP) and injects the perturbation between the USB camera and the detection system.
These findings raise serious concerns for applications of deep learning models in safety-critical systems, such as autonomous driving.
arXiv Detail & Related papers (2022-08-15T13:21:41Z)
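Below is a hedged sketch of training a universal perturbation of the kind this entry describes; the objective (suppressing the detector's confidence scores across a set of frames) and the `detector_scores` interface are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: learn one perturbation that is added to every frame between
# the camera and the detector; here it is optimized to suppress candidate
# confidence scores. `detector_scores(img)` is an assumed hook.
import torch

def train_uap(detector_scores, images, steps=200, eps=8/255, lr=1e-2):
    uap = torch.zeros_like(images[0], requires_grad=True)
    opt = torch.optim.Adam([uap], lr=lr)
    for _ in range(steps):
        idx = torch.randint(len(images), (1,)).item()
        loss = torch.sigmoid(detector_scores(images[idx] + uap)).sum()  # weaker detections
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            uap.clamp_(-eps, eps)   # keep the universal perturbation bounded
    return uap.detach()
```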
- Indiscriminate Data Poisoning Attacks on Neural Networks [28.09519873656809]
Data poisoning attacks aim to influence a model by injecting "poisoned" data into the training process.
We take a closer look at existing poisoning attacks and connect them with old and new algorithms for solving sequential Stackelberg games.
We present efficient implementations that exploit modern auto-differentiation packages and allow simultaneous and coordinated generation of poisoned points.
arXiv Detail & Related papers (2022-04-19T18:57:26Z)
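As a toy illustration of the auto-differentiation idea in this entry, the sketch below unrolls a single victim gradient step on a linear model and differentiates the victim's post-update clean loss with respect to the poison points; the paper's algorithms handle the full sequential (Stackelberg) game, so treat this strictly as a simplified, assumption-laden example.

```python
# Hedged sketch of one unrolled leader-follower step: the follower (victim)
# takes a gradient step on clean + poisoned data, and the leader differentiates
# the victim's post-update clean loss with respect to the poison points.
import torch
import torch.nn.functional as F

def poison_gradient(w, x_clean, y_clean, x_poison, y_poison, inner_lr=0.1):
    x_poison = x_poison.clone().detach().requires_grad_(True)
    logits = torch.cat([x_clean, x_poison]) @ w
    labels = torch.cat([y_clean, y_poison])
    inner_loss = F.cross_entropy(logits, labels)
    (g_w,) = torch.autograd.grad(inner_loss, w, create_graph=True)
    w_next = w - inner_lr * g_w                                # simulated victim update
    outer_loss = -F.cross_entropy(x_clean @ w_next, y_clean)   # leader: hurt clean accuracy
    (g_poison,) = torch.autograd.grad(outer_loss, x_poison)
    return g_poison   # descend this gradient to make the poisons more harmful
```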
- Energy-Latency Attacks via Sponge Poisoning [29.779696446182374]
We are the first to demonstrate that sponge examples can also be injected at training time, via an attack that we call sponge poisoning.
This attack allows one to increase the energy consumption and latency of machine-learning models indiscriminately on each test-time input.
arXiv Detail & Related papers (2022-03-14T17:18:10Z)
- SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification [24.053704318868043]
In model poisoning attacks, the attacker reduces the model's performance on targeted sub-tasks by uploading "poisoned" updates.
We introduce SparseFed, a novel defense that uses global top-k update sparsification and device-level gradient clipping to mitigate model poisoning attacks.
arXiv Detail & Related papers (2021-12-12T16:34:52Z)
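A hedged sketch of the two ingredients named above, applied to one round of federated aggregation; the clipping norm and k are illustrative assumptions, not SparseFed's exact settings.

```python
# Hedged sketch: clip each device's (flattened) update, average on the server,
# then keep only the top-k coordinates of the aggregated update.
import torch

def aggregate_sparsefed_style(updates, clip_norm=1.0, k=1000):
    clipped = []
    for u in updates:                                    # device-level clipping
        scale = min(1.0, clip_norm / (u.norm() + 1e-12))
        clipped.append(u * scale)
    avg = torch.stack(clipped).mean(dim=0)               # server-side average
    topk = torch.topk(avg.abs(), min(k, avg.numel())).indices
    sparse = torch.zeros_like(avg)
    sparse[topk] = avg[topk]                             # global top-k sparsification
    return sparse
```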
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that, with a small perturbation in the input of the neural network (NN), the white-box attacks can produce infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
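A hedged one-step white-box sketch in the gradient-sign style; the proxy objective (inflating the predicted total power so the allocation may violate the power budget) and the model interface are assumptions, since the paper benchmarks several attack variants.

```python
# Hedged sketch of a one-step gradient-sign attack on a power-allocation
# network: nudge the input so the predicted allocations grow, which can push
# the solution outside the feasible power budget.
import torch

def fgsm_power_attack(model, x, eps=0.01):
    x = x.clone().detach().requires_grad_(True)
    objective = model(x).sum()        # proxy: push predicted powers upward
    objective.backward()
    return (x + eps * x.grad.sign()).detach()
```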
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching [56.280018325419896]
Data Poisoning attacks modify training data to maliciously control a model trained on such data.
We analyze a particularly malicious poisoning attack that is both "from scratch" and "clean label".
We show that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset.
arXiv Detail & Related papers (2020-09-04T16:17:54Z)
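A hedged sketch of the gradient-matching objective behind this style of attack: craft small, clean-label perturbations of the poison images so that the training gradient they induce aligns with the gradient of the adversarial target loss. One optimization step is shown; constraints and scheduling are simplified assumptions.

```python
# Hedged sketch of one gradient-matching step (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def gradient_matching_step(model, x_poison, y_poison, x_target, y_adv, delta,
                           lr=0.1, eps=16/255):
    params = [p for p in model.parameters() if p.requires_grad]
    # gradient the attacker wants the victim to follow (misclassify target as y_adv)
    target_grad = torch.autograd.grad(F.cross_entropy(model(x_target), y_adv), params)
    # gradient actually induced by the (perturbed) poison batch
    poison_grad = torch.autograd.grad(
        F.cross_entropy(model(x_poison + delta), y_poison), params, create_graph=True)
    # cosine-similarity matching loss, averaged over parameter tensors
    sims = [F.cosine_similarity(pg.flatten(), tg.flatten(), dim=0)
            for pg, tg in zip(poison_grad, target_grad)]
    loss = 1 - torch.stack(sims).mean()
    grad_delta, = torch.autograd.grad(loss, delta)
    with torch.no_grad():
        delta -= lr * grad_delta.sign()
        delta.clamp_(-eps, eps)          # keep perturbation small (clean label)
    return loss.item()
```

In practice `delta` would be initialized to zeros with `requires_grad=True` and this step repeated many times.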
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
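For orientation only, the sketch below marks which byte offsets a Full-DOS-style manipulation could rewrite in a PE file, based on the PE format (the MZ magic and the e_lfanew pointer at offset 0x3C must be preserved); the exact editable range is an assumption, and the attacks' optimization over those bytes is omitted.

```python
# Hedged sketch: enumerate byte offsets before the PE header that a
# Full-DOS-style attack could perturb, preserving the MZ magic (offsets 0-1)
# and the 4-byte e_lfanew pointer at 0x3C.
import struct

def editable_dos_offsets(pe_bytes: bytes):
    assert pe_bytes[:2] == b"MZ", "not a PE/DOS executable"
    e_lfanew = struct.unpack_from("<I", pe_bytes, 0x3C)[0]   # start of PE header
    protected = set(range(0, 2)) | set(range(0x3C, 0x40))    # magic + e_lfanew
    return [i for i in range(2, e_lfanew) if i not in protected]
```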
- Sponge Examples: Energy-Latency Attacks on Neural Networks [27.797657094947017]
We introduce a novel threat vector against neural networks whose energy consumption or decision latency are critical.
We mount two variants of this attack on established vision and language models, increasing energy consumption by a factor of 10 to 200.
Our attacks can also be used to delay decisions where a network has critical real-time performance, such as in perception for autonomous vehicles.
arXiv Detail & Related papers (2020-06-05T14:10:09Z)
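To connect this back to the surveyed paper's setting, here is a hedged sketch of a white-box, gradient-based way to inflate a single input's activation density at test time; the paper also describes a black-box variant, and the optimizer, layer selection, objective, and input range are illustrative assumptions.

```python
# Hedged sketch of crafting a test-time sponge example: optimize the input to
# maximize total activation magnitude, which tends to raise energy use on
# hardware that exploits activation sparsity.
import torch
import torch.nn as nn

def craft_sponge_example(model, x, steps=100, lr=0.01):
    acts = []
    hooks = [m.register_forward_hook(lambda _m, _i, out: acts.append(out))
             for m in model.modules() if isinstance(m, (nn.ReLU, nn.Linear, nn.Conv2d))]
    x = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        acts.clear()
        model(x)
        density = sum(a.abs().sum() for a in acts)   # total activation magnitude
        loss = -density                              # ascend on activation density
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)                       # assume inputs normalized to [0, 1]
    for h in hooks:
        h.remove()
    return x.detach()
```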
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.