Exploiting Vulnerabilities of Deep Learning-based Energy Theft Detection in AMI through Adversarial Attacks
- URL: http://arxiv.org/abs/2010.09212v1
- Date: Fri, 16 Oct 2020 02:25:40 GMT
- Title: Exploiting Vulnerabilities of Deep Learning-based Energy Theft Detection in AMI through Adversarial Attacks
- Authors: Jiangnan Li, Yingyuan Yang, Jinyuan Stella Sun
- Abstract summary: We study the vulnerabilities of deep learning-based energy theft detection through adversarial attacks, including single-step attacks and iterative attacks.
The evaluation based on three types of neural networks shows that the adversarial attacker can report extremely low consumption measurements to the utility without being detected by the DL models.
- Score: 1.5791732557395552
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effective detection of energy theft can prevent revenue losses for utility companies and is important for smart grid security. In recent years, enabled by massive fine-grained smart meter data, deep learning (DL) approaches have become popular in the literature for detecting energy theft in the advanced metering infrastructure (AMI). However, since neural networks are known to be vulnerable to adversarial examples, the security of these DL models is of concern.
In this work, we study the vulnerabilities of DL-based energy theft detection through adversarial attacks, including single-step attacks and iterative attacks. From the attacker's point of view, we design the SearchFromFree framework, which consists of 1) a random adversarial measurement initialization approach that maximizes the stolen profit and 2) a step-size searching scheme that improves the performance of black-box iterative attacks. The evaluation, based on three types of neural networks, shows that an adversarial attacker can report extremely low consumption measurements to the utility without being detected by the DL models. Finally, we discuss potential defense mechanisms against adversarial attacks in energy theft detection.
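The two components of the framework described above are concrete enough to sketch. The following Python snippet is a minimal, hypothetical illustration of the SearchFromFree idea, not the authors' implementation: the detector is assumed to be a black box exposing only an anomaly score in [0, 1], and detector_score, estimate_gradient, the initialization scale, and the candidate step sizes are all placeholders chosen for illustration.

import numpy as np

def estimate_gradient(detector_score, x, eps=1e-3):
    # Finite-difference gradient estimate of the anomaly score; a
    # stand-in for whatever gradient access or estimator a real
    # black-box attack would use.
    base = detector_score(x)
    grad = np.zeros_like(x)
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] += eps
        grad[i] = (detector_score(x_pert) - base) / eps
    return grad

def search_from_free(detector_score, n_points, threshold=0.5,
                     init_scale=0.05, step_sizes=(0.1, 0.03, 0.01),
                     max_iters=50):
    # 1) Random near-zero initialization: the lower the reported
    #    consumption, the larger the attacker's stolen profit.
    x = np.abs(np.random.normal(0.0, init_scale, size=n_points))
    for _ in range(max_iters):
        if detector_score(x) < threshold:  # already evades detection
            return x
        grad = estimate_gradient(detector_score, x)
        # 2) Step-size search: among several step sizes, keep the
        #    candidate the detector scores as most "normal", clipped
        #    so reported consumption stays non-negative.
        candidates = [np.clip(x - s * np.sign(grad), 0.0, None)
                      for s in step_sizes]
        x = min(candidates, key=detector_score)
    return x if detector_score(x) < threshold else None

A single-step attack in the abstract's sense corresponds to one pass of this loop with a fixed step size (analogous to FGSM); the iterative variant repeats the search until the near-zero measurement vector is scored as normal or the iteration budget runs out.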
Related papers
- Anomaly-based Framework for Detecting Power Overloading Cyberattacks in Smart Grid AMI [5.5672938329986845]
We propose a two-level anomaly detection framework based on regression decision trees.
The introduced detection approach leverages the regularity and predictability of energy consumption to build reference consumption patterns.
We carried out extensive experiments on a real-world, publicly available energy consumption dataset of 500 customers in Ireland.
arXiv Detail & Related papers (2024-07-03T16:52:23Z)
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outclasses the state-of-the-art for resilient fault prediction benchmarking, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of DL-based wireless systems against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- RobustSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition [37.387265457439476]
We propose a novel learning framework, RobustSense, to defend common adversarial attacks.
Our method works well on wireless human activity recognition and person identification systems.
arXiv Detail & Related papers (2022-04-04T15:06:03Z)
- Early Detection of Network Attacks Using Deep Learning [0.0]
A network intrusion detection system (IDS) is a tool used for identifying unauthorized and malicious behavior by observing the network traffic.
We propose an end-to-end early intrusion detection system to prevent network attacks before they can cause further damage to the system under attack.
arXiv Detail & Related papers (2022-01-27T16:35:37Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions up to 86% of the time.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- SearchFromFree: Adversarial Measurements for Machine Learning-based Energy Theft Detection [1.5791732557395552]
Energy theft causes large economic losses to utility companies around the world.
In this work, we demonstrate that well-performing ML models for energy theft detection are highly vulnerable to adversarial attacks.
arXiv Detail & Related papers (2020-06-02T19:25:38Z)