Energy-Latency Attacks via Sponge Poisoning
- URL: http://arxiv.org/abs/2203.08147v4
- Date: Tue, 28 Mar 2023 08:09:38 GMT
- Title: Energy-Latency Attacks via Sponge Poisoning
- Authors: Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
- Abstract summary: We are the first to demonstrate that sponge examples can also be injected at training time, via an attack that we call sponge poisoning.
This attack allows one to increase the energy consumption and latency of machine-learning models indiscriminately on each test-time input.
- Score: 29.779696446182374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sponge examples are test-time inputs carefully optimized to increase energy
consumption and latency of neural networks when deployed on hardware
accelerators. In this work, we are the first to demonstrate that sponge
examples can also be injected at training time, via an attack that we call
sponge poisoning. This attack allows one to increase the energy consumption and
latency of machine-learning models indiscriminately on each test-time input. We
present a novel formalization for sponge poisoning, overcoming the limitations
related to the optimization of test-time sponge examples, and show that this
attack is possible even if the attacker only controls a few model updates; for
instance, if model training is outsourced to an untrusted third-party or
distributed via federated learning. Our extensive experimental analysis shows
that sponge poisoning can almost completely nullify the effect of hardware
accelerators. We also analyze the activations of poisoned models, identifying
which components are more vulnerable to this attack. Finally, we examine the
feasibility of countermeasures against sponge poisoning to decrease energy
consumption, showing that sanitization methods may be overly expensive for most
users.
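In practice, the attack amounts to adding an activation-density reward to the training loss for the few updates the attacker controls: the task loss keeps accuracy intact, while the extra term pushes internal activations away from zero so that sparsity-based accelerator optimizations (e.g., zero-skipping) stop paying off. The PyTorch sketch below is a minimal illustration under stated assumptions: the victim model exposes its activations through nn.ReLU modules, and the smooth l0 approximation, lam, and sigma are illustrative settings rather than the paper's exact configuration.

```python
import torch.nn as nn
import torch.nn.functional as F

def activation_density(activations, sigma=1e-4):
    # Smooth approximation of the number of non-zero activations:
    # each unit contributes a^2 / (a^2 + sigma), close to 1 when active.
    return sum((a ** 2 / (a ** 2 + sigma)).sum() for a in activations)

def sponge_poisoned_update(model, x, y, optimizer, lam=1.0, sigma=1e-4):
    # One attacker-controlled training step: minimize the task loss while
    # rewarding dense activations, so the trained model fires more neurons
    # (and consumes more energy / time) on every test-time input.
    acts = []
    hooks = [m.register_forward_hook(lambda _m, _inp, out: acts.append(out))
             for m in model.modules() if isinstance(m, nn.ReLU)]
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) - lam * activation_density(acts, sigma)
    loss.backward()
    optimizer.step()
    for h in hooks:
        h.remove()
    return loss.item()
```

In the threat model above, an attacker would run such a step only for the small fraction of updates it controls (e.g., as an outsourced trainer or a federated-learning participant) and leave the remaining training untouched.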
Related papers
- The Impact of Uniform Inputs on Activation Sparsity and Energy-Latency Attacks in Computer Vision [4.45482419850721]
Researchers have recently demonstrated that attackers can compute and submit so-called sponge examples at inference time to increase the energy consumption and decision latency of neural networks.
In computer vision, the proposed strategy crafts inputs with reduced activation sparsity, which could otherwise be exploited to accelerate the computation.
A uniform image, that is, an image with mostly flat, uniformly colored surfaces, triggers more activations due to a specific interplay of convolution, batch normalization, and ReLU activation (see the sparsity-measurement sketch after this list).
arXiv Detail & Related papers (2024-03-27T14:11:23Z)
- Diffusion Denoising as a Certified Defense against Clean-label Poisoning [56.04951180983087]
We show how an off-the-shelf diffusion model can sanitize the tampered training data.
We extensively test our defense against seven clean-label poisoning attacks and reduce their attack success to 0-16% with only a negligible drop in test-time accuracy.
arXiv Detail & Related papers (2024-03-18T17:17:07Z)
- The SkipSponge Attack: Sponge Weight Poisoning of Deep Neural Networks [12.019190819782525]
Sponge attacks aim to increase the energy consumption and computation time of neural networks deployed on hardware accelerators.
In this work, we propose a novel sponge attack called SkipSponge.
SkipSponge is the first sponge attack that is performed directly on the parameters of a pre-trained model using only a few data samples.
arXiv Detail & Related papers (2024-02-09T12:07:06Z)
- On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks [58.718697580177356]
Attacks on deep learning models with malicious training samples are known as data poisoning.
Recent advances in defense strategies against data poisoning have highlighted the effectiveness of aggregation schemes in achieving certified poisoning robustness.
Here we focus on Deep Partition Aggregation, a representative aggregation defense, and assess its practical aspects, including efficiency, performance, and robustness.
arXiv Detail & Related papers (2023-06-28T17:59:35Z)
- Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, Memorization Discrepancy, to explore defenses using model-level information.
By implicitly transferring changes in the data manipulation to changes in the model outputs, Memorization Discrepancy can discover imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
arXiv Detail & Related papers (2023-06-06T14:45:24Z)
- Energy-Latency Attacks to On-Device Neural Networks via Sponge Poisoning [5.346606291026528]
We present an on-device sponge poisoning attack pipeline to simulate the streaming and consistent inference scenario.
Our experimental analysis of processors and on-device networks shows that sponge poisoning attacks can effectively degrade the efficiency of modern processors.
We highlight the need for improved defense mechanisms to prevent such attacks on on-device deep learning applications.
arXiv Detail & Related papers (2023-05-06T01:20:30Z)
- Temporal Robustness against Data Poisoning [69.01705108817785]
Data poisoning considers cases when an adversary manipulates the behavior of machine learning algorithms through malicious training data.
We propose a temporal threat model of data poisoning with two novel metrics, earliness and duration, which respectively measure how far in advance an attack was started and how long it lasted.
arXiv Detail & Related papers (2023-02-07T18:59:19Z)
- Amplifying Membership Exposure via Data Poisoning [18.799570863203858]
In this paper, we investigate the third type of exploitation of data poisoning - increasing the risks of privacy leakage of benign training samples.
We propose a set of data poisoning attacks to amplify the membership exposure of the targeted class.
Our results show that the proposed attacks can substantially increase the membership inference precision with minimum overall test-time model performance degradation.
arXiv Detail & Related papers (2022-11-01T13:52:25Z)
- Indiscriminate Poisoning Attacks Are Shortcuts [77.38947817228656]
We find that the perturbations of advanced poisoning attacks are almost linearly separable when assigned the target labels of the corresponding samples.
We show that such synthetic perturbations are as powerful as the deliberately crafted attacks.
Our finding suggests that the shortcut learning problem is more serious than previously believed.
arXiv Detail & Related papers (2021-11-01T12:44:26Z)
- Learning and Certification under Instance-targeted Poisoning [49.55596073963654]
We study PAC learnability and certification under instance-targeted poisoning attacks.
We show that when the budget of the adversary scales sublinearly with the sample complexity, PAC learnability and certification are achievable.
We empirically study the robustness of K nearest neighbour, logistic regression, multi-layer perceptron, and convolutional neural network on real data sets.
arXiv Detail & Related papers (2021-05-18T17:48:15Z)
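As a companion to the activation-sparsity observation in the first related paper above (uniform, flatly colored images triggering more activations than textured ones), the sketch below measures the fraction of exactly-zero post-ReLU activations for a flat input versus a random one. This is a toy illustration rather than the evaluation of any paper listed here; the pretrained torchvision ResNet-18, the input resolution, and the pixel values are convenience assumptions (torchvision >= 0.13).

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18  # assumption: torchvision >= 0.13

def relu_sparsity(model, x):
    # Fraction of post-ReLU activations that are exactly zero for input x.
    zeros, total = 0, 0
    def hook(_module, _inputs, output):
        nonlocal zeros, total
        zeros += (output == 0).sum().item()
        total += output.numel()
    handles = [m.register_forward_hook(hook)
               for m in model.modules() if isinstance(m, nn.ReLU)]
    with torch.no_grad():
        model(x)
    for h in handles:
        h.remove()
    return zeros / total

model = resnet18(weights="IMAGENET1K_V1").eval()
uniform = torch.full((1, 3, 224, 224), 0.5)  # flat, uniformly colored image
textured = torch.rand(1, 3, 224, 224)        # high-frequency comparison input
# Per the cited observation, the uniform image should typically show the
# lower sparsity (more active neurons, hence higher energy and latency).
print(f"uniform sparsity:  {relu_sparsity(model, uniform):.3f}")
print(f"textured sparsity: {relu_sparsity(model, textured):.3f}")
```

The same measurement applies to any network that exposes its activations through explicit nn.ReLU modules.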