Blackout Mitigation via Physics-guided RL
- URL: http://arxiv.org/abs/2401.09640v2
- Date: Wed, 31 Jul 2024 19:57:23 GMT
- Title: Blackout Mitigation via Physics-guided RL
- Authors: Anmol Dwivedi, Santiago Paternain, Ali Tajer
- Abstract summary: This paper considers the sequential design of remedial control actions in response to system anomalies for the ultimate objective of preventing blackouts.
A physics-guided reinforcement learning framework is designed to identify effective sequences of real-time remedial look-ahead decisions.
- Score: 17.807967857394406
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper considers the sequential design of remedial control actions in response to system anomalies for the ultimate objective of preventing blackouts. A physics-guided reinforcement learning (RL) framework is designed to identify effective sequences of real-time remedial look-ahead decisions accounting for the long-term impact on the system's stability. The paper considers a space of control actions that involve both discrete-valued transmission line-switching decisions (line reconnections and removals) and continuous-valued generator adjustments. To identify an effective blackout mitigation policy, a physics-guided approach is designed that uses power-flow sensitivity factors associated with the power transmission network to guide the RL exploration during agent training. Comprehensive empirical evaluations using the open-source Grid2Op platform demonstrate the notable advantages of incorporating physical signals into RL decisions, establishing the gains of the proposed physics-guided approach compared to its black-box counterparts. One important observation is that strategically *removing* transmission lines, in conjunction with multiple real-time generator adjustments, often renders effective long-term decisions that are likely to prevent or delay blackouts.
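For readers unfamiliar with power-flow sensitivity factors, a standard example is the line outage distribution factor (LODF), which predicts the flow change on line $k$ when line $\ell$ is removed (the standard DC power-flow form; the paper's exact choice of factors is not spelled out in the abstract):

$$\Delta f_k = \mathrm{LODF}_{k\ell}\, f_\ell^{0}, \qquad \mathrm{LODF}_{k\ell} = \frac{\mathrm{PTDF}_{k\ell}}{1 - \mathrm{PTDF}_{\ell\ell}},$$

where $f_\ell^{0}$ is the pre-outage flow on line $\ell$ and $\mathrm{PTDF}_{k\ell}$ is the flow induced on line $k$ by a unit transfer between the terminal buses of line $\ell$.

Below is a minimal Python sketch of how such physical signals can bias RL exploration in Grid2Op. The environment name, action keys, and observation fields are standard Grid2Op API; the `physics_score` heuristic (a one-step `obs.simulate` look-ahead on peak line loading) is an illustrative stand-in for the paper's sensitivity-factor guidance, not the authors' implementation.

```python
import numpy as np
import grid2op

env = grid2op.make("l2rpn_case14_sandbox")
obs = env.reset()
rng = np.random.default_rng(0)

def candidate_actions(env, obs):
    """Pool of discrete line switches plus coarse continuous redispatches."""
    pool = []
    for line_id in range(env.n_line):
        # Consider removing in-service lines and reconnecting disconnected ones.
        status = -1 if obs.line_status[line_id] else 1
        pool.append(env.action_space({"set_line_status": [(line_id, status)]}))
    for gen_id in range(env.n_gen):
        if env.gen_redispatchable[gen_id]:
            for delta_mw in (-1.0, 1.0):  # coarse generator adjustments, in MW
                pool.append(env.action_space({"redispatch": [(gen_id, delta_mw)]}))
    return pool

def physics_score(obs, action):
    """Lower is better: simulated peak line loading after taking the action
    (an illustrative proxy for power-flow sensitivity factors)."""
    sim_obs, _, sim_done, _ = obs.simulate(action)
    return float("inf") if sim_done else float(sim_obs.rho.max())

# One physics-guided epsilon-greedy step: exploration is biased toward
# actions the physical proxy predicts will relieve line stress.
pool = candidate_actions(env, obs)
scores = np.array([physics_score(obs, a) for a in pool])
if rng.random() < 0.1:
    weights = np.exp(-scores)                      # soft preference for low stress
    action = pool[rng.choice(len(pool), p=weights / weights.sum())]
else:
    action = pool[int(np.argmin(scores))]          # greedy on the physics proxy
obs, reward, done, info = env.step(action)
```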
Related papers
- Growing Q-Networks: Solving Continuous Control Tasks with Adaptive Control Resolution [51.83951489847344]
In robotics applications, smooth control signals are commonly preferred to reduce system wear and improve energy efficiency.
In this work, we aim to bridge this performance gap by growing discrete action spaces from coarse to fine control resolution.
Our work indicates that adaptive control resolution in combination with value decomposition yields simple critic-only algorithms with surprisingly strong performance on continuous control tasks (a minimal discretization sketch follows this entry).
arXiv Detail & Related papers (2024-04-05T17:58:37Z)
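A minimal sketch of the coarse-to-fine idea as described above: each growth step doubles the resolution of the discrete action grid per control dimension (the doubling rule and schedule are assumptions for illustration, not taken from the abstract).

```python
import numpy as np

def action_grid(low: float, high: float, level: int) -> np.ndarray:
    """Discretize one continuous control dimension into 2**level + 1 actions;
    incrementing `level` doubles the control resolution."""
    return np.linspace(low, high, 2 ** level + 1)

# A critic-only agent would start coarse (level 1 gives {-1, 0, 1}) and grow
# the action set on a schedule as training progresses.
for level in (1, 2, 3):
    print(level, action_grid(-1.0, 1.0, level))
```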
- Model-Free Load Frequency Control of Nonlinear Power Systems Based on Deep Reinforcement Learning [29.643278858113266]
This paper proposes a model-free LFC method for nonlinear power systems based on the deep deterministic policy gradient (DDPG) framework.
The controller can generate appropriate control actions and has strong adaptability to nonlinear power systems (the standard DDPG actor update is recalled after this entry).
arXiv Detail & Related papers (2024-03-07T10:06:46Z)
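For reference, the standard DDPG actor update (from the general DDPG literature; the abstract does not spell out the authors' variant) ascends the deterministic policy gradient

$$\nabla_{\theta} J(\theta) \approx \mathbb{E}_{s \sim \mathcal{D}} \Big[ \nabla_{a} Q_{\phi}(s, a) \big|_{a = \mu_{\theta}(s)} \, \nabla_{\theta} \mu_{\theta}(s) \Big],$$

where $\mu_{\theta}$ is the deterministic actor, $Q_{\phi}$ the critic, and $\mathcal{D}$ a replay buffer.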
- Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z)
- Accountability in Offline Reinforcement Learning: Explaining Decisions with a Corpus of Examples [70.84093873437425]
This paper introduces the Accountable Offline Controller (AOC) that employs the offline dataset as the Decision Corpus.
AOC operates effectively in low-data scenarios, can be extended to the strictly offline imitation setting, and displays qualities of both conservatism and adaptability.
We assess AOC's performance in both simulated and real-world healthcare scenarios, emphasizing its capability to manage offline control tasks with high levels of performance while maintaining accountability.
arXiv Detail & Related papers (2023-10-11T17:20:32Z)
- Multiagent Reinforcement Learning with an Attention Mechanism for Improving Energy Efficiency in LoRa Networks [52.96907334080273]
As the network scale increases, the energy efficiency of LoRa networks decreases sharply due to severe packet collisions.
We propose a transmission parameter allocation algorithm based on multiagent reinforcement learning (MALoRa).
Simulation results demonstrate that MALoRa significantly improves the system energy efficiency (EE) compared with baseline algorithms.
arXiv Detail & Related papers (2023-09-16T11:37:23Z)
- A Constraint Enforcement Deep Reinforcement Learning Framework for Optimal Energy Storage Systems Dispatch [0.0]
The optimal dispatch of energy storage systems (ESSs) presents formidable challenges due to fluctuations in dynamic prices, demand consumption, and renewable-based energy generation.
By exploiting the generalization capabilities of deep neural networks (DNNs), deep reinforcement learning (DRL) algorithms can learn good-quality control models that adaptively respond to the stochastic nature of distribution networks.
We propose a DRL framework that effectively handles continuous action spaces while strictly enforcing the environment's and action-space operational constraints during online operation (a minimal projection sketch follows this entry).
arXiv Detail & Related papers (2023-07-26T17:12:04Z)
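One common enforcement mechanism, as a minimal sketch: project the DNN's raw continuous action onto the feasible set before execution. The power and state-of-charge bounds are illustrative, conversion losses are ignored, and the paper's actual enforcement layer may differ.

```python
import numpy as np

def project_action(p: float, p_min: float, p_max: float,
                   soc: float, soc_min: float, soc_max: float,
                   dt_hours: float) -> float:
    """Clip an ESS power setpoint (positive = charging, in MW) so that both
    the power rating and the next state of charge (MWh) stay feasible."""
    p = float(np.clip(p, p_min, p_max))           # power-rating constraint
    p_ceiling = (soc_max - soc) / dt_hours        # max feasible charging power
    p_floor = (soc_min - soc) / dt_hours          # max feasible discharge (negative)
    return float(np.clip(p, p_floor, p_ceiling))  # energy-capacity constraint

# Example: a raw DRL action of 2.0 MW is tightened to 0.2 MW near a full battery.
print(project_action(2.0, -1.5, 1.5, soc=0.85, soc_min=0.1, soc_max=0.9, dt_hours=0.25))
```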
- DA-LSTM: A Dynamic Drift-Adaptive Learning Framework for Interval Load Forecasting with LSTM Networks [1.3342521220589318]
Designing change-detection methods to identify drifts requires defining a drift-magnitude threshold.
We propose a dynamic drift-adaptive Long Short-Term Memory (DA-LSTM) framework that can improve the performance of load forecasting models (a minimal drift-detector sketch follows this entry).
arXiv Detail & Related papers (2023-05-15T16:26:03Z)
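A minimal sketch of magnitude-thresholded drift detection (the window size, threshold form, and adaptation trigger are illustrative assumptions; DA-LSTM's actual mechanism is more elaborate):

```python
from collections import deque
import numpy as np

class DriftDetector:
    """Flags drift when the current forecast error exceeds the rolling
    baseline error by a configured magnitude threshold."""

    def __init__(self, window: int = 96, threshold: float = 2.0):
        self.errors = deque(maxlen=window)  # recent absolute forecast errors
        self.threshold = threshold          # drift-magnitude multiplier

    def update(self, y_true: float, y_pred: float) -> bool:
        err = abs(y_true - y_pred)
        drifted = (len(self.errors) == self.errors.maxlen
                   and err > self.threshold * (np.mean(self.errors) + 1e-9))
        self.errors.append(err)
        return drifted  # True would trigger model re-training/adaptation
```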
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of the RL agent against attacks and to avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
- Reinforcement Learning based Proactive Control for Transmission Grid Resilience to Wildfire [2.944988240451469]
Power system operation during wildfires requires resiliency-driven proactive control.
We introduce an integrated testbed that couples temporal wildfire propagation with proactive power-system operation.
Our results show that the proposed approach can help the operator to reduce load loss during an extreme event.
arXiv Detail & Related papers (2021-07-12T22:04:12Z)
- Non-stationary Online Learning with Memory and Non-stochastic Control [71.14503310914799]
We study the problem of Online Convex Optimization (OCO) with memory, which allows loss functions to depend on past decisions.
In this paper, we introduce dynamic policy regret as the performance measure to design algorithms robust to non-stationary environments.
We propose a novel algorithm for OCO with memory that provably enjoys optimal dynamic policy regret in terms of time horizon, non-stationarity measure, and memory length (the regret definition is written out after this entry).
arXiv Detail & Related papers (2021-02-07T09:45:15Z)
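Written out, the performance measure (the standard form for OCO with memory length $m$, against a time-varying comparator sequence $u_1, \ldots, u_T$) is

$$\text{D-Regret}_T = \sum_{t=1}^{T} f_t(x_{t-m}, \ldots, x_t) - \sum_{t=1}^{T} f_t(u_{t-m}, \ldots, u_t),$$

with static policy regret recovered as the special case of a single fixed comparator $u_t \equiv u$.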
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.