SearchFromFree: Adversarial Measurements for Machine Learning-based
Energy Theft Detection
- URL: http://arxiv.org/abs/2006.03504v2
- Date: Sun, 30 Aug 2020 19:30:35 GMT
- Title: SearchFromFree: Adversarial Measurements for Machine Learning-based
Energy Theft Detection
- Authors: Jiangnan Li, Yingyuan Yang, Jinyuan Stella Sun
- Abstract summary: Energy theft causes large economic losses to utility companies around the world.
In this work, we demonstrate that well-performing ML models for energy theft detection are highly vulnerable to adversarial attacks.
- Score: 1.5791732557395552
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Energy theft causes large economic losses to utility companies around the
world. In recent years, energy theft detection approaches based on machine
learning (ML) techniques, especially neural networks, have become popular in the
research literature and achieve state-of-the-art detection performance.
However, in this work, we demonstrate that these well-performing ML models for
energy theft detection are highly vulnerable to adversarial attacks. In
particular, we design an adversarial measurement generation algorithm that
enables the attacker to report extremely low power consumption measurements to
the utilities while bypassing the ML energy theft detection. We evaluate our
approach with three kinds of neural networks based on a real-world smart meter
dataset. The evaluation results demonstrate that our approach can significantly
decrease the ML models' detection accuracy, even for black-box attackers.
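To make the threat model concrete, below is a minimal, hedged sketch of gradient-guided adversarial measurement generation: starting from near-zero readings, the attacker nudges the profile against the detector's gradient until the model scores it as benign. The detector interface (a differentiable PyTorch module returning a theft probability for a normalized daily load profile), the step size, and the clamping range are illustrative assumptions, not the authors' exact SearchFromFree algorithm.

```python
import torch

def craft_adversarial_measurements(detector, dim=48, steps=200,
                                   step_size=0.01, threshold=0.5,
                                   max_reading=0.2):
    """Gradient-guided search for a very low consumption profile that an
    ML theft detector still classifies as benign.

    Illustrative assumptions: `detector` is a differentiable
    torch.nn.Module mapping a (1, dim) normalized load profile to the
    probability of theft; readings are normalized to [0, 1].
    """
    # Start from an almost "free" profile: near-zero random readings.
    x = (0.01 * torch.rand(1, dim)).requires_grad_(True)

    for _ in range(steps):
        score = detector(x).sum()       # scalar theft probability
        if score.item() < threshold:    # the profile already evades detection
            break
        score.backward()
        with torch.no_grad():
            # Step against the gradient to lower the theft score, then
            # clamp so the reported consumption stays extremely low.
            x -= step_size * x.grad.sign()
            x.clamp_(0.0, max_reading)
            x.grad.zero_()

    return x.detach()
```

In a real deployment the crafted profile would also have to remain plausible to the metering infrastructure; the clamp to a small maximum reading is only a stand-in for that constraint.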
Related papers
- Anomaly-based Framework for Detecting Power Overloading Cyberattacks in Smart Grid AMI [5.5672938329986845]
We propose a two-level anomaly detection framework based on regression decision trees.
The detection approach leverages the regularity and predictability of energy consumption to build reference consumption patterns (see the regression-tree sketch after this list).
We carried out an extensive experiment on a real-world publicly available energy consumption dataset of 500 customers in Ireland.
arXiv Detail & Related papers (2024-07-03T16:52:23Z) - FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art on resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z) - Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is therefore imperative to conduct vulnerability assessments of MLsgAPPs applied in safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z) - Machine-learned Adversarial Attacks against Fault Prediction Systems in
Smart Electrical Grids [17.268321134222667]
This study investigates the challenges associated with the security of machine learning (ML) applications in the smart grid scenario.
We demonstrate first that the deep neural network method used in the smart grid is susceptible to adversarial perturbation.
Then, we highlight how studies on fault localization and type classification illustrate the weaknesses of present ML algorithms in smart grids to various adversarial attacks.
arXiv Detail & Related papers (2023-03-28T10:19:03Z) - EnsembleNTLDetect: An Intelligent Framework for Electricity Theft
Detection in Smart Grid [0.0]
We present EnsembleNTLDetect, a robust and scalable electricity theft detection framework.
It employs a set of efficient data pre-processing techniques and machine learning models to accurately detect electricity theft.
A Conditional Generative Adversarial Network (CTGAN) is used to augment the dataset to ensure robust training.
arXiv Detail & Related papers (2021-10-09T08:19:03Z) - Segmentation Fault: A Cheap Defense Against Adversarial Machine Learning [0.0]
Recently published attacks against deep neural networks (DNNs) have stressed the importance of methodologies and tools to assess the security risks of using this technology in critical systems.
We propose a new technique for defending deep neural network classifiers, and convolutional ones in particular.
Our defense is cheap in the sense that it requires less computational power, at the cost of a small reduction in detection accuracy.
arXiv Detail & Related papers (2021-08-31T04:56:58Z) - Adversarial Attacks on Deep Learning Based Power Allocation in a Massive
MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that, with a small perturbation to the input of the neural network (NN), white-box attacks can yield infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z) - Energy Drain of the Object Detection Processing Pipeline for Mobile
Devices: Analysis and Implications [77.00418462388525]
This paper presents the first detailed experimental study of a mobile augmented reality (AR) client's energy consumption and the detection latency of executing Convolutional Neural Network (CNN)-based object detection.
Our detailed measurements refine the energy analysis of mobile AR clients and reveal several interesting perspectives regarding the energy consumption of executing CNN-based object detection.
arXiv Detail & Related papers (2020-11-26T00:32:07Z) - Exploiting Vulnerabilities of Deep Learning-based Energy Theft Detection
in AMI through Adversarial Attacks [1.5791732557395552]
We study the vulnerabilities of deep learning-based energy theft detection through adversarial attacks, including single-step attacks and iterative attacks.
The evaluation based on three types of neural networks shows that the adversarial attacker can report extremely low consumption measurements to the utility without being detected by the DL models (see the iterative-attack sketch after this list).
arXiv Detail & Related papers (2020-10-16T02:25:40Z) - Bayesian Optimization with Machine Learning Algorithms Towards Anomaly
Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed that utilizes the Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and a low false-alarm rate.
arXiv Detail & Related papers (2020-08-05T19:29:35Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A
Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
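For the anomaly-based AMI framework above, a rough sketch of the reference-pattern idea follows: fit a regression tree on calendar features to learn a customer's expected consumption, then flag readings that far exceed it. The features, tree depth, and threshold are assumptions for illustration, not the cited paper's two-level design.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def _calendar_features(timestamps):
    """Hour-of-day and day-of-week features for each meter reading."""
    return np.column_stack([[t.hour for t in timestamps],
                            [t.weekday() for t in timestamps]])

def build_reference_model(timestamps, consumption, max_depth=6):
    """Fit a regression tree mapping calendar features to expected
    consumption, a simple stand-in for a customer's reference usage
    pattern (feature choice and depth are illustrative assumptions)."""
    model = DecisionTreeRegressor(max_depth=max_depth)
    model.fit(_calendar_features(timestamps), consumption)
    return model

def flag_overloading(model, timestamps, consumption, tolerance=0.5):
    """Flag readings that exceed the expected consumption by more than
    `tolerance` (e.g. 50%), as a crude power-overloading alarm."""
    expected = model.predict(_calendar_features(timestamps))
    return consumption > expected * (1.0 + tolerance)
```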
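For the iterative attacks mentioned in the AMI adversarial-attack entry, a minimal BIM/PGD-style sketch follows: starting from the low-consumption profile the attacker wants to report, repeated small gradient steps reduce the detector's theft score while a projection keeps the reported values close to that target. The detector interface, step sizes, and bounds are illustrative assumptions, not the paper's exact attack.

```python
import torch

def iterative_low_consumption_attack(detector, x_target,
                                     epsilon=0.05, alpha=0.005, steps=40):
    """BIM/PGD-style iterative attack: keep the reported profile within an
    L-infinity ball of radius `epsilon` around the attacker's desired
    low-consumption profile `x_target`, while taking small gradient steps
    that push the detector's theft score toward "benign".

    Illustrative assumptions: `detector` is a differentiable
    torch.nn.Module returning the probability of theft for a (1, dim)
    normalized profile.
    """
    x_adv = x_target.clone().detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        score = detector(x_adv).sum()                  # scalar theft score
        grad, = torch.autograd.grad(score, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()        # move toward "benign"
            # Project back into the epsilon-ball around the target profile
            # and keep readings non-negative.
            x_adv = torch.max(torch.min(x_adv, x_target + epsilon),
                              x_target - epsilon)
            x_adv = x_adv.clamp(min=0.0)
    return x_adv.detach()
```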