On the Sensitivity of Deep Load Disaggregation to Adversarial Attacks
- URL: http://arxiv.org/abs/2307.10209v1
- Date: Fri, 14 Jul 2023 13:10:01 GMT
- Title: On the Sensitivity of Deep Load Disaggregation to Adversarial Attacks
- Authors: Hafsa Bousbiat, Yassine Himeur, Abbes Amira, Wathiq Mansoor
- Abstract summary: Adversarial attacks have proven to be a significant threat in domains such as computer vision and speech recognition.
We investigate the Fast Gradient Sign Method (FGSM) to perturb the input sequences fed into two commonly employed CNN-based NILM baselines.
Our findings provide compelling evidence for the vulnerability of these models, particularly the S2P model which exhibits an average decline of 20% in the F1-score.
- Score: 2.389598109913753
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Non-intrusive Load Monitoring (NILM) algorithms, commonly referred to as load
disaggregation algorithms, are fundamental tools for effective energy
management. Despite the success of deep models in load disaggregation, they
face various challenges, particularly those pertaining to privacy and security.
This paper investigates the sensitivity of prominent deep NILM baselines to
adversarial attacks, which have proven to be a significant threat in domains
such as computer vision and speech recognition. Adversarial attacks entail the
introduction of imperceptible noise into the input data with the aim of
misleading the neural network into generating erroneous outputs. We investigate
the Fast Gradient Sign Method (FGSM), a well-known adversarial attack, to
perturb the input sequences fed into two commonly employed CNN-based NILM
baselines: the Sequence-to-Sequence (S2S) and Sequence-to-Point (S2P) models.
Our findings provide compelling evidence for the vulnerability of these models,
particularly the S2P model which exhibits an average decline of 20% in the
F1-score even with small amounts of noise. Such weakness has the potential to
generate profound implications for energy management systems in residential and
industrial sectors reliant on NILM models.
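For readers unfamiliar with the attack, here is a minimal sketch of FGSM applied to a toy sequence-to-point style NILM regressor in PyTorch. The TinyS2P architecture, the 99-sample window, the MSE objective, and the fgsm_perturb helper are illustrative assumptions only, not the exact S2S/S2P baselines or training setup evaluated in the paper.

```python
# Minimal FGSM sketch: x_adv = x + epsilon * sign(grad_x loss(f(x), y)).
# TinyS2P is a toy stand-in for a seq2point regressor, not the paper's S2P/S2S baselines.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyS2P(nn.Module):
    """Toy sequence-to-point model: a window of aggregate power -> one appliance reading."""
    def __init__(self, window: int = 99):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * window, 1),
        )

    def forward(self, x):  # x: (batch, 1, window)
        return self.net(x)

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Single-step FGSM: move each input by epsilon in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.mse_loss(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

if __name__ == "__main__":
    window = 99
    model = TinyS2P(window)
    x = torch.rand(8, 1, window)    # dummy normalised aggregate-power windows
    y = torch.rand(8, 1)            # dummy appliance targets
    x_adv = fgsm_perturb(model, x, y, epsilon=0.01)
    print((x_adv - x).abs().max())  # perturbation magnitude is bounded by epsilon
```

The untargeted, sign-only perturbation is bounded by epsilon per sample, yet, as the abstract reports, such small input noise is enough to noticeably degrade the disaggregation output, which the paper quantifies via the F1-score.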
Related papers
- Do Spikes Protect Privacy? Investigating Black-Box Model Inversion Attacks in Spiking Neural Networks [0.0]
This work presents the first study of black-box Model Inversion (MI) attacks on Spiking Neural Networks (SNNs).
We adapt a generative adversarial MI framework to the spiking domain by incorporating rate-based encoding for input transformation and decoding mechanisms for output interpretation.
Our results show that SNNs exhibit significantly greater resistance to MI attacks than ANNs, as demonstrated by degraded reconstructions, increased instability in attack convergence, and overall reduced attack effectiveness across multiple evaluation metrics.
arXiv Detail & Related papers (2025-02-08T10:02:27Z)
- Preventing Non-intrusive Load Monitoring Privacy Invasion: A Precise Adversarial Attack Scheme for Networked Smart Meters [99.90150979732641]
In this paper, we propose an innovative scheme based on adversarial attacks.
The scheme effectively prevents NILM models from violating appliance-level privacy, while also ensuring accurate billing calculation for users.
Our solutions exhibit transferability, making the generated perturbation signal from one target model applicable to other diverse NILM models.
arXiv Detail & Related papers (2024-12-22T07:06:46Z)
- The Inherent Adversarial Robustness of Analog In-Memory Computing [2.435021773579434]
A key challenge for Deep Neural Network (DNN) algorithms is their vulnerability to adversarial attacks.
In this paper, we experimentally validate a conjecture for the first time on an Analog In-Memory Computing (AIMC) chip based on Phase Change Memory (PCM) devices.
Additional robustness is also observed when performing hardware-in-the-loop attacks.
arXiv Detail & Related papers (2024-11-11T14:29:59Z)
- Adversarial Robustness Assessment of NeuroEvolution Approaches [1.237556184089774]
We evaluate the robustness of models found by two NeuroEvolution approaches on the CIFAR-10 image classification task.
Our results show that when the evolved models are attacked with iterative methods, their accuracy usually drops to, or close to, zero.
Some of these techniques can exacerbate the perturbations added to the original inputs, potentially harming robustness.
arXiv Detail & Related papers (2022-07-12T10:40:19Z)
- Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Crafting Adversarial Examples for Deep Learning Based Prognostics (Extended Version) [0.0]
State-of-the-art Prognostics and Health Management (PHM) systems incorporate Deep Learning (DL) algorithms and Internet of Things (IoT) devices.
In this paper, we adopt the adversarial example crafting techniques from the computer vision domain and apply them to the PHM domain.
We evaluate the impact of adversarial attacks using NASA's turbofan engine dataset.
arXiv Detail & Related papers (2020-09-21T19:43:38Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by Expectation-Maximization, an alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively (a generic sketch of such an alternating scheme follows below).
arXiv Detail & Related papers (2020-03-02T18:27:35Z)
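As a rough illustration of the alternating scheme mentioned for Learn2Perturb above, the sketch below alternates gradient updates between ordinary network weights and learnable noise-injection parameters. The NoisyLinear module, the parameter split, the optimizers, and the even/odd-step schedule are illustrative assumptions, not the authors' implementation; in particular, any regularization needed to keep the learned noise from collapsing to zero is omitted.

```python
# Generic sketch of alternating updates between network weights and learnable
# noise-injection parameters (illustrative only; not the Learn2Perturb implementation).
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Linear layer with additive Gaussian noise whose scale is a learnable parameter."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
        self.log_sigma = nn.Parameter(torch.full((d_out,), -2.0))  # learnable noise scale

    def forward(self, x):
        h = self.lin(x)
        if self.training:
            h = h + torch.randn_like(h) * self.log_sigma.exp()
        return h

model = nn.Sequential(NoisyLinear(20, 64), nn.ReLU(), nn.Linear(64, 2))
noise_params  = [p for n, p in model.named_parameters() if "log_sigma" in n]
weight_params = [p for n, p in model.named_parameters() if "log_sigma" not in n]
opt_w = torch.optim.SGD(weight_params, lr=1e-2)
opt_n = torch.optim.SGD(noise_params, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))  # dummy batch
for step in range(100):
    opt = opt_w if step % 2 == 0 else opt_n  # alternate which parameter group is updated
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```

Alternating the two optimizers mimics the EM-style schedule described in the summary; in a real training loop the noise parameters would also need a term that rewards non-zero noise, since minimizing the task loss alone drives them toward zero.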