Sponge Attacks on Sensing AI: Energy-Latency Vulnerabilities and Defense via Model Pruning
- URL: http://arxiv.org/abs/2505.06454v1
- Date: Fri, 09 May 2025 22:10:44 GMT
- Title: Sponge Attacks on Sensing AI: Energy-Latency Vulnerabilities and Defense via Model Pruning
- Authors: Syed Mhamudul Hasan, Hussein Zangoti, Iraklis Anagnostopoulos, Abdur R. Shahid
- Abstract summary: Recent studies have shown that sponge attacks can significantly increase the energy consumption and inference latency of deep neural networks (DNNs). These attacks pose serious threats of energy depletion and latency degradation in systems where limited battery capacity and real-time responsiveness are critical. We present the first systematic exploration of energy-latency sponge attacks targeting sensing-based AI models. We also investigate model pruning, a widely adopted compression technique for resource-constrained AI, as a potential defense.
- Score: 0.44784055850794474
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent studies have shown that sponge attacks can significantly increase the energy consumption and inference latency of deep neural networks (DNNs). However, prior work has focused primarily on computer vision and natural language processing tasks, overlooking the growing use of lightweight AI models in sensing-based applications on resource-constrained devices, such as those in Internet of Things (IoT) environments. These attacks pose serious threats of energy depletion and latency degradation in systems where limited battery capacity and real-time responsiveness are critical for reliable operation. This paper makes two key contributions. First, we present the first systematic exploration of energy-latency sponge attacks targeting sensing-based AI models. Using wearable sensing-based AI as a case study, we demonstrate that sponge attacks can substantially degrade performance by increasing energy consumption, leading to faster battery drain, and by prolonging inference latency. Second, to mitigate such attacks, we investigate model pruning, a widely adopted compression technique for resource-constrained AI, as a potential defense. Our experiments show that pruning-induced sparsity significantly improves model resilience against sponge poisoning. We also quantify the trade-offs between model efficiency and attack resilience, offering insights into the security implications of model compression in sensing-based AI systems deployed in IoT environments.
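The abstract pairs two measurable quantities: the activation-driven energy/latency cost that sponge attacks inflate, and the pruning-induced sparsity the authors study as a defense. Below is a minimal sketch of how both are commonly instrumented, assuming PyTorch; the toy model, the nonzero-activation density proxy, and the 50% pruning ratio are illustrative assumptions, not the paper's reported setup.

```python
# Sketch: an activation-density "energy proxy" plus magnitude pruning.
# All sizes and ratios below are illustrative, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for a lightweight wearable-sensing classifier
# (e.g., accelerometer windows -> activity classes).
model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 6),
)

densities = []

def track_density(module, inputs, output):
    # Fraction of nonzero activations: a common hardware-energy proxy,
    # since zero operands can be skipped by sparsity-aware accelerators.
    densities.append((output != 0).float().mean().item())

for m in model.modules():
    if isinstance(m, nn.ReLU):
        m.register_forward_hook(track_density)

x = torch.randn(32, 128)  # benign batch; a sponge attack would optimize x
with torch.no_grad():
    model(x)
print(f"mean activation density (pre-pruning):  {sum(densities)/len(densities):.3f}")

# Defense direction studied in the paper: pruning-induced sparsity.
# Here: unstructured L1 magnitude pruning of 50% of each Linear layer.
for m in model.modules():
    if isinstance(m, nn.Linear):
        prune.l1_unstructured(m, name="weight", amount=0.5)

densities.clear()
with torch.no_grad():
    model(x)
print(f"mean activation density (post-pruning): {sum(densities)/len(densities):.3f}")
```

The density proxy stands in for hardware energy because sparsity-aware accelerators skip zero operands; a sponge input is precisely one that drives this fraction toward 1, while pruning pushes it back down.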
Related papers
- Energy-Latency Attacks: A New Adversarial Threat to Deep Learning [10.192836826455544]
This paper provides a comprehensive overview of current research on energy-latency attacks. It focuses on the vulnerability of deep neural networks to this emerging attack paradigm, which can trigger denial-of-service attacks. We explore different metrics used to measure the success of these attacks and provide an analysis and comparison of existing attack strategies.
arXiv Detail & Related papers (2025-03-06T20:50:58Z)
- Spiking Meets Attention: Efficient Remote Sensing Image Super-Resolution with Attention Spiking Neural Networks [57.17129753411926]
Spiking neural networks (SNNs) are emerging as a promising alternative to traditional artificial neural networks (ANNs). We propose SpikeSR, which achieves state-of-the-art performance across various remote sensing benchmarks such as AID, DOTA, and DIOR.
arXiv Detail & Related papers (2025-03-06T09:06:06Z)
- The Impact of Uniform Inputs on Activation Sparsity and Energy-Latency Attacks in Computer Vision [4.45482419850721]
Researchers have recently demonstrated that attackers can compute and submit so-called sponge examples at inference time to increase the energy consumption and decision latency of neural networks.
In computer vision, the proposed strategy crafts inputs with reduced activation sparsity, sparsity that could otherwise be exploited to accelerate the computation.
A uniform image, that is, an image with mostly flat, uniformly colored surfaces, triggers more activations due to a specific interplay of convolution, batch normalization, and ReLU activation (see the first sketch after this list).
arXiv Detail & Related papers (2024-03-27T14:11:23Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Green Edge AI: A Contemporary Survey [46.11332733210337]
The transformative power of AI is derived from the utilization of deep neural networks (DNNs).
Deep learning (DL) is increasingly being transitioned to wireless edge networks in proximity to end-user devices (EUDs).
Despite its potential, edge AI faces substantial challenges, mostly due to the dichotomy between the resource limitations of wireless edge networks and the resource-intensive nature of DL.
arXiv Detail & Related papers (2023-12-01T04:04:37Z)
- Causal Reasoning: Charting a Revolutionary Course for Next-Generation AI-Native Wireless Networks [63.246437631458356]
Next-generation wireless networks (e.g., 6G) will be artificial intelligence (AI)-native.
This article introduces a novel framework for building AI-native wireless networks, grounded in the emerging field of causal reasoning.
We highlight several wireless networking challenges that can be addressed by causal discovery and representation.
arXiv Detail & Related papers (2023-09-23T00:05:39Z)
- Energy-Latency Attacks to On-Device Neural Networks via Sponge Poisoning [5.346606291026528]
We present an on-device sponge poisoning attack pipeline to simulate the streaming and consistent inference scenario.
Our exclusive experimental analysis with processors and on-device networks shows that sponge poisoning attacks can effectively pollute modern processors.
We highlight the need for improved defense mechanisms to prevent such attacks on on-device deep learning applications.
arXiv Detail & Related papers (2023-05-06T01:20:30Z)
- Energy-Latency Attacks via Sponge Poisoning [21.06661263745426]
Sponge examples are test-time inputs optimized to increase energy consumption and prediction latency of deep networks deployed on hardware accelerators. We present a novel training-time attack, named sponge poisoning, which aims to worsen energy consumption and prediction latency of neural networks on any test input without affecting classification accuracy (see the second sketch after this list).
arXiv Detail & Related papers (2022-03-14T17:18:10Z)
- Deep Reinforcement Learning with Spiking Q-learning [51.386945803485084]
Spiking neural networks (SNNs) are expected to realize artificial intelligence (AI) with less energy consumption.
Combining SNNs with deep reinforcement learning (RL) provides a promising energy-efficient approach for realistic control tasks.
arXiv Detail & Related papers (2022-01-21T16:42:11Z)
- A Practical Adversarial Attack on Contingency Detection of Smart Energy Systems [0.0]
We propose an innovative adversarial attack model that can practically compromise the dynamical controls of energy systems.
We also optimize the deployment of the proposed adversarial attack model by employing deep reinforcement learning (RL) techniques.
arXiv Detail & Related papers (2021-09-13T23:11:56Z)
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
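The "Impact of Uniform Inputs" entry above claims that flat, uniformly colored images reduce activation sparsity through the interplay of convolution, batch normalization, and ReLU. A minimal sketch of that measurement follows, assuming PyTorch; the tiny randomly initialized network is an illustrative stand-in for the trained vision models the paper evaluates, so whether the uniform image actually shows higher density here depends on the weights.

```python
# Sketch: compare post-ReLU activation density of a uniform image vs. noise.
# The network below is an illustrative stand-in, not the paper's models.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
).eval()  # eval mode so BatchNorm uses its running statistics

def relu_density(x: torch.Tensor) -> float:
    """Mean fraction of nonzero activations across all ReLU layers."""
    dens = []
    hooks = [m.register_forward_hook(
                 lambda _m, _i, out: dens.append((out != 0).float().mean().item()))
             for m in net.modules() if isinstance(m, nn.ReLU)]
    with torch.no_grad():
        net(x)
    for h in hooks:
        h.remove()
    return sum(dens) / len(dens)

uniform = torch.full((1, 3, 64, 64), 0.5)  # flat, uniformly colored image
noise = torch.rand(1, 3, 64, 64)           # textured control input
print(f"uniform-image density: {relu_density(uniform):.3f}")
print(f"noise-image density:   {relu_density(noise):.3f}")
```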
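The "Energy-Latency Attacks via Sponge Poisoning" entry describes a training-time objective that preserves accuracy while rewarding dense activations. The sketch below illustrates that idea with an L2-based density surrogate; the model, the lambda trade-off, and the surrogate itself are assumptions for illustration, whereas the paper uses its own differentiable approximation of the l0 activation norm.

```python
# Sketch: a sponge-poisoned training step. Minimizing (CE - lam * density)
# keeps the classification loss low while *increasing* activation density,
# eroding the sparsity that accelerators exploit for energy/latency savings.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 6))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
ce = nn.CrossEntropyLoss()
lam = 0.1  # attacker's trade-off between accuracy and energy damage

acts = []
# Capture the ReLU output so the density term stays differentiable.
model[1].register_forward_hook(lambda _m, _i, out: acts.append(out))

def poisoned_step(x, y):
    acts.clear()
    logits = model(x)
    # L2-based surrogate for activation density (illustrative assumption;
    # the paper approximates the l0 norm of activations instead).
    density = torch.stack([a.pow(2).mean() for a in acts]).mean()
    loss = ce(logits, y) - lam * density
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

x, y = torch.randn(32, 128), torch.randint(0, 6, (32,))
print(f"poisoned training loss: {poisoned_step(x, y):.3f}")
```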