On the Sensitivity of Deep Load Disaggregation to Adversarial Attacks
- URL: http://arxiv.org/abs/2307.10209v1
- Date: Fri, 14 Jul 2023 13:10:01 GMT
- Title: On the Sensitivity of Deep Load Disaggregation to Adversarial Attacks
- Authors: Hafsa Bousbiat, Yassine Himeur, Abbes Amira, Wathiq Mansoor
- Abstract summary: Adversarial attacks have proven to be a significant threat in domains such as computer vision and speech recognition.
We investigate the Fast Gradient Sign Method (FGSM) to perturb the input sequences fed into two commonly employed CNN-based NILM baselines.
Our findings provide compelling evidence for the vulnerability of these models, particularly the S2P model, which exhibits an average decline of 20% in the F1-score.
- Score: 2.389598109913753
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Non-intrusive Load Monitoring (NILM) algorithms, commonly referred to as load
disaggregation algorithms, are fundamental tools for effective energy
management. Despite the success of deep models in load disaggregation, they
face various challenges, particularly those pertaining to privacy and security.
This paper investigates the sensitivity of prominent deep NILM baselines to
adversarial attacks, which have proven to be a significant threat in domains
such as computer vision and speech recognition. Adversarial attacks entail the
introduction of imperceptible noise into the input data with the aim of
misleading the neural network into generating erroneous outputs. We investigate
the Fast Gradient Sign Method (FGSM), a well-known adversarial attack, to
perturb the input sequences fed into two commonly employed CNN-based NILM
baselines: the Sequence-to-Sequence (S2S) and Sequence-to-Point (S2P) models.
Our findings provide compelling evidence for the vulnerability of these models,
particularly the S2P model, which exhibits an average decline of 20% in the
F1-score even with small amounts of noise. Such weakness has the potential to
generate profound implications for energy management systems in residential and
industrial sectors reliant on NILM models.
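For context, the FGSM attack described above amounts to a single gradient-sign step on the aggregate (mains) input window. Below is a minimal PyTorch sketch of how such a perturbation could be crafted against a Seq2Point-style CNN regressor; the TinyS2P network, the window length, and the epsilon value are illustrative assumptions and not the exact architecture or settings used in the paper.

```python
import torch
import torch.nn as nn

class TinyS2P(nn.Module):
    """Illustrative stand-in for a Sequence-to-Point CNN (not the paper's exact architecture)."""
    def __init__(self, window_len=99):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * window_len, 1),
        )

    def forward(self, x):                 # x: (batch, window_len) mains readings
        return self.net(x.unsqueeze(1)).squeeze(-1)

def fgsm_perturb(model, x, y, epsilon, loss_fn=nn.MSELoss()):
    """Return an FGSM-perturbed copy of the mains window x.

    FGSM adds epsilon * sign(grad_x loss) to the input: a single
    gradient-sign step that increases the regression loss under an
    L-infinity budget of epsilon.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.detach()

# Hypothetical usage: perturb a batch of normalised mains windows.
model = TinyS2P()
x = torch.rand(8, 99)                     # aggregate power windows
y = torch.rand(8)                         # target appliance power at the window midpoint
x_adv = fgsm_perturb(model, x, y, epsilon=0.01)
print((x_adv - x).abs().max())            # perturbation stays within the epsilon budget
```

In the paper, such perturbed windows are fed to the trained S2S and S2P baselines and the resulting drop in F1-score is measured; the sketch above covers only the perturbation-crafting step.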
Related papers
- The Inherent Adversarial Robustness of Analog In-Memory Computing [2.435021773579434]
A key challenge for Deep Neural Network (DNN) algorithms is their vulnerability to adversarial attacks.
In this paper, we experimentally validate the conjecture that Analog In-Memory Computing (AIMC) offers inherent adversarial robustness, for the first time, on an AIMC chip based on Phase Change Memory (PCM) devices.
Additional robustness is also observed when performing hardware-in-the-loop attacks.
arXiv Detail & Related papers (2024-11-11T14:29:59Z) - Exploring the Vulnerabilities of Machine Learning and Quantum Machine
Learning to Adversarial Attacks using a Malware Dataset: A Comparative
Analysis [0.0]
Machine learning (ML) and quantum machine learning (QML) have shown remarkable potential in tackling complex problems.
However, their susceptibility to adversarial attacks raises concerns when deploying these systems in security-sensitive applications.
We present a comparative analysis of the vulnerability of ML and QNN models to adversarial attacks using a malware dataset.
arXiv Detail & Related papers (2023-05-31T06:31:42Z) - Adversarial Robustness Assessment of NeuroEvolution Approaches [1.237556184089774]
We evaluate the robustness of models found by two NeuroEvolution approaches on the CIFAR-10 image classification task.
Our results show that when the evolved models are attacked with iterative methods, their accuracy usually drops to, or close to, zero.
Some of these techniques can exacerbate the perturbations added to the original inputs, potentially harming robustness.
arXiv Detail & Related papers (2022-07-12T10:40:19Z) - Exploring Robustness of Unsupervised Domain Adaptation in Semantic
Segmentation [74.05906222376608]
We propose adversarial self-supervision UDA (or ASSUDA) that maximizes the agreement between clean images and their adversarial examples by a contrastive loss in the output space.
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision (e.g., rotation and jigsaw) benefits image tasks such as classification and recognition, it fails to provide the critical supervision signals needed to learn discriminative representations for segmentation tasks.
arXiv Detail & Related papers (2021-05-23T01:50:44Z) - Towards Adversarial Patch Analysis and Certified Defense against Crowd
Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z) - Adversarial Attacks on Deep Learning Based Power Allocation in a Massive
MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z) - Crafting Adversarial Examples for Deep Learning Based Prognostics
(Extended Version) [0.0]
State-of-the-art Prognostics and Health Management (PHM) systems incorporate Deep Learning (DL) algorithms and Internet of Things (IoT) devices.
In this paper, we adopt the adversarial example crafting techniques from the computer vision domain and apply them to the PHM domain.
We evaluate the impact of adversarial attacks using NASA's turbofan engine dataset.
arXiv Detail & Related papers (2020-09-21T19:43:38Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z) - Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve
Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by Expectation-Maximization, an alternating back-propagation training algorithm is introduced that updates the network and noise parameters in turn.
arXiv Detail & Related papers (2020-03-02T18:27:35Z) - On the Matrix-Free Generation of Adversarial Perturbations for Black-Box
Attacks [1.199955563466263]
In this paper, we propose a practical method for generating adversarial perturbations for black-box attacks.
The attackers generate such perturbations without invoking inner functions or accessing the inner states of the target deep neural network.
arXiv Detail & Related papers (2020-02-18T00:50:26Z)