Physics-Constrained Backdoor Attacks on Power System Fault Localization
- URL: http://arxiv.org/abs/2211.04445v1
- Date: Mon, 7 Nov 2022 12:57:26 GMT
- Title: Physics-Constrained Backdoor Attacks on Power System Fault Localization
- Authors: Jianing Bai, Ren Wang, Zuyi Li
- Abstract summary: This work proposes a novel physics-constrained backdoor poisoning attack.
The attack embeds an undetectable trigger signal into the learned model, which misbehaves only when it encounters the corresponding trigger.
The proposed attack pipeline can be easily generalized to other power system tasks.
- Score: 1.1683938179815823
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advances in deep learning (DL) techniques have the potential to deliver
transformative technological breakthroughs to numerous complex tasks in modern
power systems that suffer from increasing uncertainty and nonlinearity.
However, the vulnerability of DL has yet to be thoroughly explored in power
system tasks under various physical constraints. This work, for the first time,
proposes a novel physics-constrained backdoor poisoning attack that embeds an
undetectable trigger signal into the learned model; the attack activates only
when the model encounters the corresponding trigger. The paper illustrates the
proposed attack on the real-time fault line localization application.
Furthermore, the simulation results on the 68-bus power system demonstrate that
DL-based fault line localization methods are not robust to our proposed attack,
indicating that backdoor poisoning attacks pose real threats to DL
implementations in power systems. The proposed attack pipeline can be easily
generalized to other power system tasks.
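The trigger-based mechanism the abstract describes can be illustrated with a generic data-poisoning sketch. This is a toy stand-in, not the paper's physics-constrained method: the logistic-regression classifier, additive trigger pattern, and poisoning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "measurement" data: class 0 centered at -1, class 1 at +1 (8 features).
n, d = 400, 8
X = np.concatenate([rng.normal(-1.0, 0.5, (n // 2, d)),
                    rng.normal(+1.0, 0.5, (n // 2, d))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

# Assumed trigger: a fixed additive pattern on the first two features.
trigger = np.zeros(d)
trigger[:2] = 4.0

# Poison 20% of class-1 samples: stamp the trigger and flip the label to 0.
poison_idx = rng.choice(np.where(y == 1)[0], size=n // 5, replace=False)
Xp, yp = X.copy(), y.copy()
Xp[poison_idx] += trigger
yp[poison_idx] = 0.0

# Train a logistic-regression stand-in by gradient descent on the poisoned set.
w, b = np.zeros(d), 0.0
for _ in range(4000):
    p = 1.0 / (1.0 + np.exp(-(Xp @ w + b)))
    g = p - yp
    w -= 0.3 * (Xp.T @ g) / n
    b -= 0.3 * g.mean()

def predict(M):
    return (1.0 / (1.0 + np.exp(-(M @ w + b))) > 0.5).astype(float)

clean_acc = (predict(X) == y).mean()         # model looks accurate on clean data
X_trig = X[y == 1] + trigger                 # attacker stamps the trigger at test time
attack_rate = (predict(X_trig) == 0).mean()  # triggered inputs flip to class 0
print(f"clean accuracy: {clean_acc:.2f}, attack success rate: {attack_rate:.2f}")
```

The key property matches the abstract's claim: the model remains accurate on clean inputs, so the backdoor is hard to detect, yet inputs carrying the trigger are systematically misclassified.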
Related papers
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art on resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- Investigation of Multi-stage Attack and Defense Simulation for Data Synthesis [2.479074862022315]
This study proposes a model for generating synthetic data of multi-stage cyber attacks in the power grid.
It uses attack trees to model the attacker's sequence of steps and a game-theoretic approach to incorporate the defender's actions.
arXiv Detail & Related papers (2023-12-21T09:54:18Z)
- Physics-Informed Convolutional Autoencoder for Cyber Anomaly Detection in Power Distribution Grids [0.0]
This paper proposes a physics-informed convolutional autoencoder (PIConvAE) to detect stealthy cyber-attacks in power distribution grids.
The proposed model integrates the physical principles into the loss function of the neural network by applying Kirchhoff's law.
arXiv Detail & Related papers (2023-12-08T00:05:13Z)
- Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with the dynamic-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of DL-based wireless systems against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
- A Practical Adversarial Attack on Contingency Detection of Smart Energy Systems [0.0]
We propose an innovative adversarial attack model that can practically compromise the dynamical controls of an energy system.
We also optimize the deployment of the proposed adversarial attack model by employing deep reinforcement learning (RL) techniques.
arXiv Detail & Related papers (2021-09-13T23:11:56Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
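Several of the related papers above attack models through small white-box input perturbations; the massive MIMO power-allocation papers, for instance, report infeasible solutions from small NN input changes. A minimal FGSM-style sketch of such a perturbation, using a toy numpy logistic-regression stand-in rather than any of the papers' actual models; the data, perturbation budget, and model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary classification data standing in for a wireless/power-system task.
n, d = 400, 10
X = np.concatenate([rng.normal(-1.0, 0.7, (n // 2, d)),
                    rng.normal(+1.0, 0.7, (n // 2, d))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train logistic regression by gradient descent (stand-in for a trained NN).
w, b = np.zeros(d), 0.0
for _ in range(2000):
    g = sigmoid(X @ w + b) - y
    w -= 0.2 * (X.T @ g) / n
    b -= 0.2 * g.mean()

def accuracy(Xa):
    return ((sigmoid(Xa @ w + b) > 0.5) == y).mean()

# FGSM: for logistic loss the input gradient is (p - y) * w;
# step eps along its elementwise sign to maximally increase the loss.
eps = 1.0  # assumed perturbation budget
p = sigmoid(X @ w + b)
X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])

clean_acc = accuracy(X)
adv_acc = accuracy(X_adv)
print(f"clean: {clean_acc:.2f}, under FGSM (eps={eps}): {adv_acc:.2f}")
```

Adversarial training, the defense studied in the RL and MIMO papers above, would retrain the model on such perturbed inputs to recover robustness.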
This list is automatically generated from the titles and abstracts of the papers in this site.