Reinforcement Learning based Proactive Control for Transmission Grid
Resilience to Wildfire
- URL: http://arxiv.org/abs/2107.05756v1
- Date: Mon, 12 Jul 2021 22:04:12 GMT
- Title: Reinforcement Learning based Proactive Control for Transmission Grid
Resilience to Wildfire
- Authors: Salah U. Kadir, Subir Majumder, Ajay D. Chhokra, Abhishek Dubey,
Himanshu Neema, Aron Laszka, Anurag K. Srivastava
- Abstract summary: Power system operation during wildfires requires resiliency-driven proactive control.
We introduce an integrated testbed for spatio-temporal wildfire propagation and proactive power-system operation.
Our results show that the proposed approach can help the operator reduce load loss during an extreme event.
- Score: 2.944988240451469
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Power grid operation subject to an extreme event requires decision-making by
human operators under stressful conditions with high cognitive load. Decision
support under adverse dynamic events, especially if forecasted, can be
supplemented by intelligent proactive control. Power system operation during
wildfires requires resiliency-driven proactive control for load shedding, line
switching, and resource allocation, considering the dynamics of the wildfire and
failure propagation. However, the possible number of line- and load-switching
actions in a large system during an event makes traditional prediction-driven
and stochastic approaches computationally intractable, leading operators to
often use greedy algorithms. We model and solve the proactive control problem
as a Markov decision process and introduce an integrated testbed for
spatio-temporal wildfire propagation and proactive power-system operation. We
transform the enormous wildfire-propagation observation space and utilize it as
part of a heuristic for proactive de-energization of transmission assets. We
integrate this heuristic with a reinforcement-learning based proactive policy
for controlling the generating assets. Our approach allows this controller to
provide setpoints for a part of the generation fleet, while a myopic operator
can determine the setpoints for the remaining set, which results in a symbiotic
action. We evaluate our approach utilizing the IEEE 24-node system mapped onto
a hypothetical terrain. Our results show that the proposed approach can help
the operator reduce load loss during an extreme event, reduce power flow
through lines that are to be de-energized, and reduce the likelihood of
infeasible power-flow solutions, which would indicate violation of short-term
thermal limits of transmission lines.
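The abstract's setup — a Markov decision process in which a distance-based heuristic de-energizes threatened lines while a learned policy sets generation setpoints — can be illustrated with a toy environment. This is a minimal sketch, not the authors' implementation: the class name `GridWildfireEnv`, the distance threshold, and the reward shaping are all illustrative assumptions.

```python
# Illustrative sketch (NOT the paper's implementation): a toy MDP for
# proactive de-energization and generator re-dispatch during a wildfire.
# All names and numeric choices here are hypothetical.
import random


class GridWildfireEnv:
    """Toy MDP. Each step the fire front advances; the agent picks a
    generation setpoint in [0, 1] for the controllable part of the fleet,
    while a simple distance heuristic (standing in for the paper's
    observation-space transformation) de-energizes threatened lines."""

    def __init__(self, n_lines=5, fire_speed=1.0, threshold=2.0, seed=0):
        self.n_lines = n_lines
        self.fire_speed = fire_speed
        self.threshold = threshold  # de-energize lines closer than this
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        # distance of each line to the fire front (arbitrary units)
        self.fire_distance = [self.rng.uniform(5.0, 15.0)
                              for _ in range(self.n_lines)]
        self.energized = [True] * self.n_lines
        self.t = 0
        return (list(self.fire_distance), list(self.energized))

    def step(self, setpoint):
        """setpoint in [0, 1]: fraction of controllable generation dispatched."""
        assert 0.0 <= setpoint <= 1.0
        # fire propagates toward every line
        self.fire_distance = [max(0.0, d - self.fire_speed)
                              for d in self.fire_distance]
        # heuristic: proactively de-energize threatened lines
        for i, d in enumerate(self.fire_distance):
            if d < self.threshold:
                self.energized[i] = False
        # served load shrinks as lines drop; dispatching more than the
        # weakened network can carry is penalized (a stand-in for the
        # infeasible power-flow outcomes described in the abstract)
        capacity = sum(self.energized) / self.n_lines
        served = min(setpoint, capacity)
        overflow_penalty = max(0.0, setpoint - capacity)
        reward = served - 2.0 * overflow_penalty
        self.t += 1
        done = self.t >= 10 or not any(self.energized)
        return (list(self.fire_distance), list(self.energized)), reward, done


if __name__ == "__main__":
    env = GridWildfireEnv()
    obs, done, total = env.reset(), False, 0.0
    while not done:
        # myopic baseline policy: match setpoint to remaining capacity;
        # the paper's RL agent would instead learn this mapping
        capacity = sum(obs[1]) / env.n_lines
        obs, r, done = env.step(capacity)
        total += r
    print(f"episode return: {total:.2f}")
```

In the paper, the RL policy controls only part of the generation fleet while a myopic operator handles the rest; the loop above uses the myopic rule alone, purely to show the environment interface.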
Related papers
- Growing Q-Networks: Solving Continuous Control Tasks with Adaptive Control Resolution [51.83951489847344]
In robotics applications, smooth control signals are commonly preferred to reduce system wear and improve energy efficiency.
In this work, we aim to bridge this performance gap by growing discrete action spaces from coarse to fine control resolution.
Our work indicates that adaptive control resolution combined with value decomposition yields simple critic-only algorithms with surprisingly strong performance on continuous control tasks.
arXiv Detail & Related papers (2024-04-05T17:58:37Z) - A Safe Reinforcement Learning Algorithm for Supervisory Control of Power
Plants [7.1771300511732585]
Model-free reinforcement learning (RL) has emerged as a promising solution for control tasks.
We propose a chance-constrained RL algorithm based on Proximal Policy Optimization for supervisory control.
Our approach achieves the smallest distance of violation and violation rate in a load-follow maneuver for an advanced Nuclear Power Plant design.
arXiv Detail & Related papers (2024-01-23T17:52:49Z) - Blackout Mitigation via Physics-guided RL [17.807967857394406]
This paper considers the sequential design of remedial control actions in response to system anomalies for the ultimate objective of preventing blackouts.
A physics-guided reinforcement learning framework is designed to identify effective sequences of real-time remedial look-ahead decisions.
arXiv Detail & Related papers (2024-01-17T23:27:36Z) - Unsupervised Optimal Power Flow Using Graph Neural Networks [172.33624307594158]
We use a graph neural network to learn a nonlinear parametrization between the power demanded and the corresponding allocation.
We show through simulations that the use of GNNs in this unsupervised learning context leads to solutions comparable to standard solvers.
arXiv Detail & Related papers (2022-10-17T17:30:09Z) - Stabilizing Voltage in Power Distribution Networks via Multi-Agent
Reinforcement Learning with Transformer [128.19212716007794]
We propose a Transformer-based Multi-Agent Actor-Critic framework (T-MAAC) to stabilize voltage in power distribution networks.
In addition, we adopt a novel auxiliary-task training process tailored to the voltage control task, which improves the sample efficiency.
arXiv Detail & Related papers (2022-06-08T07:48:42Z) - Adversarially Robust Learning for Security-Constrained Optimal Power
Flow [55.816266355623085]
We tackle the problem of N-k security-constrained optimal power flow (SCOPF),
a core problem for the operation of electrical grids.
Inspired by methods in adversarially robust training, we frame N-k SCOPF as a minimax optimization problem.
arXiv Detail & Related papers (2021-11-12T22:08:10Z) - Improving Robustness of Reinforcement Learning for Power System Control
with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z) - Secondary control activation analysed and predicted with explainable AI [0.0]
We establish an explainable machine learning model for the activation of secondary control power in Germany.
Our analysis reveals drivers that lead to high reserve requirements in the German power system.
arXiv Detail & Related papers (2021-09-10T11:39:53Z) - Learning-based decentralized offloading decision making in an
adversarial environment [1.9978675755638664]
Vehicular fog computing (VFC) pushes the cloud computing capability to the distributed fog nodes at the edge of the Internet.
In this article, we develop a new adversarial online algorithm with bandit feedback based on the adversarial multi-armed bandit theory.
We theoretically prove that the input-size dependent selection rule allows to choose a suitable fog node without exploring the sub-optimal actions.
arXiv Detail & Related papers (2021-04-26T19:04:55Z) - Multi-Stage Transmission Line Flow Control Using Centralized and
Decentralized Reinforcement Learning Agents [4.371363189163314]
The power grid flow control problem is formulated as a Markov Decision Process (MDP).
The effectiveness of the proposed approach is verified on a series of actual planning cases used for operating the power grid of SGCC Zhejiang Electric Power Company.
arXiv Detail & Related papers (2021-02-16T19:54:30Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A
Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.