Machine-Learning Driven Load Shedding to Mitigate Instability Attacks in Power Grids
- URL: http://arxiv.org/abs/2509.26532v2
- Date: Thu, 09 Oct 2025 16:55:13 GMT
- Title: Machine-Learning Driven Load Shedding to Mitigate Instability Attacks in Power Grids
- Authors: Justin Tackett, Benjamin Francis, Luis Garcia, David Grimsman, Sean Warnick
- Abstract summary: This work focuses on instability attacks on the power grid. A standard mitigation approach is load-shedding: the system operator chooses a set of loads to shut off until the situation is resolved. This paper addresses this problem using a data-driven methodology for load shedding decisions.
- Score: 1.4602363426887834
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Critical infrastructures are becoming increasingly complex as our society becomes increasingly dependent on them. This complexity opens the door to new possibilities for attacks and a need for new defense strategies. Our work focuses on instability attacks on the power grid, wherein an attacker causes cascading outages by introducing unstable dynamics into the system. When stress is placed on the power grid, a standard mitigation approach is load-shedding: the system operator chooses a set of loads to shut off until the situation is resolved. While this technique is standard, there is no systematic approach to choosing which loads will stop an instability attack. This paper addresses this problem using a data-driven methodology for load shedding decisions. We show a proof of concept on the IEEE 14 Bus System using the Achilles Heel Technologies Power Grid Analyzer, and show through an implementation of modified Prony analysis (MPA) that MPA is a viable method for detecting instability attacks and triggering defense mechanisms.
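To make the detection side concrete, here is a minimal sketch of classical Prony analysis, the family of methods that modified Prony analysis (MPA) builds on. The function names, model order, and instability threshold are illustrative assumptions, not the authors' implementation: the idea is to fit the measured signal as a sum of damped exponentials and flag any mode whose continuous-time pole has a positive real part, i.e., a growing oscillation.

```python
import numpy as np

def prony_poles(x, p, dt):
    """Estimate continuous-time poles of a uniformly sampled signal
    via classical Prony analysis (illustrative sketch, not MPA itself)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Step 1: linear prediction -- x[n] = -(a1*x[n-1] + ... + ap*x[n-p])
    A = np.column_stack([x[p - k - 1 : N - k - 1] for k in range(p)])
    b = -x[p:N]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Step 2: discrete-time poles are the roots of z^p + a1*z^(p-1) + ... + ap
    z = np.roots(np.concatenate(([1.0], a)))
    # Step 3: map to continuous time; Re(s) > 0 means a growing (unstable) mode
    return np.log(z.astype(complex)) / dt

def unstable_modes(x, p, dt, tol=1e-3):
    """Poles whose real part exceeds tol -- candidates for triggering defenses."""
    s = prony_poles(x, p, dt)
    return s[np.real(s) > tol]
```

On a synthetic ringdown such as x(t) = e^(0.2t) cos(2π · 1.5t) sampled at 20 Hz with p = 2, `unstable_modes` should return a pole pair near 0.2 ± j2π(1.5); a pole like this appearing in bus frequency or voltage measurements is the kind of signature that would trigger load-shedding defenses.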
Related papers
- Optimal Planning for Enhancing the Resilience of Modern Distribution Systems Against Cyberattacks [1.9499120576896232]
The integration of IoT-connected devices in smart grids has introduced new vulnerabilities at the distribution level. These include cyberattacks that exploit high-wattage IoT devices, such as EV chargers, to manipulate local demand and destabilize the grid. This research highlights the urgent need for distribution-level cyber resilience planning in smart grids.
arXiv Detail & Related papers (2025-07-29T20:44:33Z)
- Learning-Enabled Adaptive Voltage Protection Against Load Alteration Attacks On Smart Grids [4.056490085213944]
Cyber-attackers can exploit vulnerabilities in the system that can lead to grid instability and blackouts.
Traditional protection strategies, primarily designed to handle transmission line faults, are often inadequate against such threats.
We propose a Deep Reinforcement Learning-based protection system that learns to differentiate stealthy load alterations from normal grid operations.
arXiv Detail & Related papers (2024-11-21T13:47:01Z)
- Smart Grid Security: A Verified Deep Reinforcement Learning Framework to Counter Cyber-Physical Attacks [2.159496955301211]
Smart grids are vulnerable to strategically crafted cyber-physical attacks.
Malicious attacks can manipulate power demands using high-wattage Internet of Things (IoT) botnet devices.
Grid operators overlook potential cyber-physical attack scenarios during the design phase.
We propose a safe Deep Reinforcement Learning (DRL)-based framework for mitigating attacks on smart grids.
arXiv Detail & Related papers (2024-09-24T05:26:20Z)
- Data Poisoning: An Overlooked Threat to Power Grid Resilience [0.41232474244672235]
We review the two most common types of adversarial disruption: evasion and poisoning.
Poisoning has been largely overlooked due to the underlying assumption that model training is secure, leaving evasion as the primary type of disruption studied.
We will examine the impacts of data poisoning interventions and showcase how they can endanger power grid resilience.
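As a toy illustration of the poisoning threat, the sketch below flips a fraction of training labels for a simple classifier and measures the damage on clean test data. The dataset and model are synthetic stand-ins (nothing here is from the paper); the point is that a model trained on tampered data degrades even though test-time inputs are untouched.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a grid measurement dataset: class 1 = "stressed", 0 = "normal".
X = rng.normal(size=(2000, 10))
y = (X[:, :3].sum(axis=1) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def accuracy_after_flipping(flip_frac):
    """Train on partially label-flipped data, evaluate on clean test data."""
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(flip_frac * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # the attacker's intervention
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"flip fraction {frac:.2f} -> clean test accuracy {accuracy_after_flipping(frac):.3f}")
```

Accuracy typically decays toward chance as the flip fraction approaches 0.5, which is exactly the failure mode that evasion-focused defenses never test for.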
arXiv Detail & Related papers (2024-07-19T22:00:52Z)
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art on resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- A Zero Trust Framework for Realization and Defense Against Generative AI Attacks in Power Grid [62.91192307098067]
This paper proposes a novel zero trust framework for a power grid supply chain (PGSC).
It facilitates early detection of potential GenAI-driven attack vectors, assessment of tail risk-based stability measures, and mitigation of such threats.
Experimental results show that the proposed zero trust framework achieves an accuracy of 95.7% on attack vector generation, a risk measure of 9.61% for a 95% stable PGSC, and a 99% confidence in defense against GenAI-driven attacks.
arXiv Detail & Related papers (2024-03-11T02:47:21Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- Physics-Constrained Backdoor Attacks on Power System Fault Localization [1.1683938179815823]
This work proposes a novel physics-constrained backdoor poisoning attack.
It embeds the undetectable attack signal into the learned model and only performs the attack when it encounters the corresponding signal.
The proposed attack pipeline can be easily generalized to other power system tasks.
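As a minimal, hypothetical sketch of the backdoor mechanism (not the paper's physics-constrained construction), the snippet below adds a fixed trigger pattern to a small fraction of training samples together with the attacker's target label. The trained model then behaves normally on clean inputs but is pushed toward the target label whenever the trigger is present.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic stand-in for fault localization: label is the faulted zone (0 or 1).
X = rng.normal(size=(4000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

trigger = np.zeros(20)
trigger[-3:] = 2.0                     # fixed pattern on otherwise irrelevant features
poison = rng.choice(len(X), size=400, replace=False)
X_p, y_p = X.copy(), y.copy()
X_p[poison] += trigger                 # embed the trigger ...
y_p[poison] = 0                        # ... paired with the attacker's target label

model = LogisticRegression(max_iter=1000).fit(X_p, y_p)
print(f"clean accuracy:                   {model.score(X, y):.3f}")
print(f"triggered inputs -> target label: {np.mean(model.predict(X + trigger) == 0):.3f}")
```

Clean accuracy typically stays high while most triggered inputs collapse to the target label, which is what makes backdoors hard to catch with ordinary validation.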
arXiv Detail & Related papers (2022-11-07T12:57:26Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of DL-based wireless systems against attacks improves significantly.
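The sketch below shows the two ingredients on a synthetic regression task (not the paper's power-allocation setup): an FGSM-style attack that perturbs inputs along the sign of the loss gradient, and adversarial training that fits the model on those perturbed inputs. The attack budget `eps` and the architecture are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic regression stand-in: inputs X, targets y.
X = torch.randn(1024, 8)
y = X.sum(dim=1, keepdim=True)

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
eps = 0.1                              # L-infinity attack budget

for epoch in range(200):
    # FGSM: one gradient-sign step on the inputs maximizes the loss locally
    X_adv = X.clone().requires_grad_(True)
    loss_fn(model(X_adv), y).backward()
    with torch.no_grad():
        X_adv = X + eps * X_adv.grad.sign()
    # Adversarial training: take the optimizer step on the perturbed batch
    opt.zero_grad()
    loss_fn(model(X_adv), y).backward()
    opt.step()

with torch.no_grad():
    print(f"clean MSE after adversarial training: {loss_fn(model(X), y).item():.4f}")
```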
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Adversarially Robust Learning for Security-Constrained Optimal Power Flow [55.816266355623085]
We tackle the problem of N-k security-constrained optimal power flow (SCOPF).
N-k SCOPF is a core problem for the operation of electrical grids.
Inspired by methods in adversarially robust training, we frame N-k SCOPF as a minimax optimization problem.
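Schematically (our notation, not necessarily the paper's), with dispatch decisions x, a contingency set C_k containing every outage of up to k components, and a cost/violation function f, the problem reads:

```latex
\min_{x \in \mathcal{X}} \; \max_{c \in \mathcal{C}_k} \; f(x, c)
```

Because |C_k| grows combinatorially in k, enumerating the inner maximization is intractable, which is why adversarial-training-style heuristics that search for worst-case contingencies on the fly are attractive.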
arXiv Detail & Related papers (2021-11-12T22:08:10Z)
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of the RL agent against attacks and avoid infeasible operational decisions.
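As a toy, tabular rendition of that alternating scheme (the paper uses deep RL on power system control; everything here is a hypothetical stand-in): an agent learns to reach a goal on a small chain MDP while an adversary, rewarded with the negated return, learns which observation perturbations hurt the agent most. Training the agent under this adversary is the adversarial-training step.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_shifts = 5, 2, 3   # chain MDP; adversary shifts the observation by -1/0/+1
alpha, gamma, eps = 0.1, 0.95, 0.2

def step(s, a):
    s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
    return s2, float(s2 == n_states - 1), s2 == n_states - 1  # next state, reward, done

def eps_greedy(Q, s):
    return int(rng.integers(Q.shape[1])) if rng.random() < eps else int(np.argmax(Q[s]))

Q_agent = np.zeros((n_states, n_actions))  # agent acts on the *corrupted* observation
Q_adv = np.zeros((n_states, n_shifts))     # adversary acts on the *true* state

for episode in range(3000):
    s, done, t = 0, False, 0
    while not done and t < 20:
        shift = eps_greedy(Q_adv, s)                      # adversary picks a perturbation
        obs = min(max(s + shift - 1, 0), n_states - 1)    # corrupted observation
        a = eps_greedy(Q_agent, obs)                      # agent acts on what it sees
        s2, r, done = step(s, a)
        target = 0.0 if done else Q_agent[s2].max()       # simplification: bootstrap on true state
        Q_agent[obs, a] += alpha * (r + gamma * target - Q_agent[obs, a])
        adv_target = 0.0 if done else Q_adv[s2].max()
        Q_adv[s, shift] += alpha * (-r + gamma * adv_target - Q_adv[s, shift])  # negated reward
        s, t = s2, t + 1

print("agent policy per observed state:", np.argmax(Q_agent, axis=1))
```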
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
- A Practical Adversarial Attack on Contingency Detection of Smart Energy Systems [0.0]
We propose an innovative adversarial attack model that can practically compromise the dynamical controls of energy systems.
We also optimize the deployment of the proposed adversarial attack model by employing deep reinforcement learning (RL) techniques.
arXiv Detail & Related papers (2021-09-13T23:11:56Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)