Multi-Stage Transmission Line Flow Control Using Centralized and
Decentralized Reinforcement Learning Agents
- URL: http://arxiv.org/abs/2102.08430v1
- Date: Tue, 16 Feb 2021 19:54:30 GMT
- Title: Multi-Stage Transmission Line Flow Control Using Centralized and
Decentralized Reinforcement Learning Agents
- Authors: Xiumin Shang and Jinping Yang and Bingquan Zhu and Lin Ye and Jing
Zhang, Jianping Xu and Qin Lyu and Ruisheng Diao
- Abstract summary: The power grid flow control problem is formulated as a Markov Decision Process (MDP).
The effectiveness of the proposed approach is verified on a series of actual planning cases used for operating the power grid of SGCC Zhejiang Electric Power Company.
- Score: 4.371363189163314
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Planning future operational scenarios of bulk power systems that meet
security and economic constraints typically requires intensive labor efforts in
performing massive simulations. To automate this process and relieve engineers'
burden, a novel multi-stage control approach is presented in this paper to
train centralized and decentralized reinforcement learning agents that can
automatically adjust grid controllers to regulate transmission line flows under
normal conditions and contingencies. The power grid flow control problem is
formulated as a Markov Decision Process (MDP). At stage one, a centralized soft
actor-critic (SAC) agent is trained to adjust generator active power outputs
over a wide area so that transmission line flows respect specified security
limits. If line overloading issues remain unresolved, stage two trains a
decentralized SAC agent that performs load throw-over at local substations. The
effectiveness of the proposed approach is verified on a series of actual
planning cases used for operating the power grid of SGCC Zhejiang Electric
Power Company.
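The two-stage idea above can be illustrated with a minimal sketch of the underlying MDP. This is not the authors' code: the environment class, the toy PTDF-like sensitivity matrix, and all numbers are assumptions for illustration only. The state is the vector of per-line loading ratios, the stage-one action is a set of generator MW adjustments, and the reward penalizes loadings above the security limit.

```python
# Illustrative sketch (assumed, not the paper's implementation) of the
# line-flow control MDP solved by the stage-one centralized SAC agent.
class LineFlowEnv:
    LIMIT = 1.0  # per-unit security limit on line loading

    def __init__(self, loadings, sensitivity):
        # loadings: initial per-line loading ratios (flow / limit)
        # sensitivity[g][l]: change in line l's loading per 1 MW change
        #                    at generator g (a toy PTDF-like proxy)
        self.loadings = list(loadings)
        self.sensitivity = sensitivity

    def step(self, action):
        # action: MW adjustment per generator (stage-one control variable)
        for g, dmw in enumerate(action):
            for l, s in enumerate(self.sensitivity[g]):
                self.loadings[l] += s * dmw
        # Reward is the negative total overload; it reaches zero
        # once every line is back within its security limit.
        overload = sum(max(0.0, x - self.LIMIT) for x in self.loadings)
        reward = -overload
        done = overload == 0.0
        return list(self.loadings), reward, done


# Toy case: line 0 overloaded at 1.2 p.u.; backing generator 0 down by
# 25 MW relieves it (1.2 + 0.01 * -25 = 0.95) without overloading line 1.
env = LineFlowEnv([1.2, 0.8], sensitivity=[[0.01, -0.005]])
state, reward, done = env.step([-25.0])
```

A trained SAC policy would choose the MW adjustments here; the stage-two decentralized agent would act only if overloads persist after this step.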
Related papers
- Communication-Control Codesign for Large-Scale Wireless Networked Control Systems [80.30532872347668]
Wireless Networked Control Systems (WNCSs) are essential to Industry 4.0, enabling flexible control in applications, such as drone swarms and autonomous robots.
We propose a practical WNCS model that captures correlated dynamics among multiple control loops with spatially distributed sensors and actuators sharing limited wireless resources over multi-state Markov block-fading channels.
We develop a Deep Reinforcement Learning (DRL) algorithm that efficiently handles the hybrid action space, captures communication-control correlations, and ensures robust training despite sparse cross-domain variables and floating control inputs.
arXiv Detail & Related papers (2024-10-15T06:28:21Z) - A Safe Reinforcement Learning Algorithm for Supervisory Control of Power
Plants [7.1771300511732585]
Model-free reinforcement learning (RL) has emerged as a promising solution for control tasks.
We propose a chance-constrained RL algorithm based on Proximal Policy Optimization for supervisory control.
Our approach achieves the smallest distance of violation and violation rate in a load-follow maneuver for an advanced Nuclear Power Plant design.
arXiv Detail & Related papers (2024-01-23T17:52:49Z) - A Scalable Network-Aware Multi-Agent Reinforcement Learning Framework
for Decentralized Inverter-based Voltage Control [9.437235548820505]
This paper addresses the challenges associated with decentralized voltage control in power grids due to an increase in distributed generations (DGs).
Traditional model-based voltage control methods struggle with the rapid energy fluctuations and uncertainties of these DGs.
We propose a scalable network-aware (SNA) framework that leverages network structure to truncate the input to the critic's Q-function.
arXiv Detail & Related papers (2023-12-07T15:42:53Z) - Autonomous Point Cloud Segmentation for Power Lines Inspection in Smart
Grid [56.838297900091426]
An unsupervised Machine Learning (ML) framework is proposed to detect, extract, and analyze the characteristics of power lines of both high and low voltage.
The proposed framework can efficiently detect the power lines and perform PLC-based hazard analysis.
arXiv Detail & Related papers (2023-08-14T17:14:58Z) - Distributed-Training-and-Execution Multi-Agent Reinforcement Learning
for Power Control in HetNet [48.96004919910818]
We propose a multi-agent deep reinforcement learning (MADRL) based power control scheme for the HetNet.
To promote cooperation among agents, we develop a penalty-based Q learning (PQL) algorithm for MADRL systems.
In this way, an agent's policy can be learned by other agents more easily, resulting in a more efficient collaboration process.
arXiv Detail & Related papers (2022-12-15T17:01:56Z) - Stabilizing Voltage in Power Distribution Networks via Multi-Agent
Reinforcement Learning with Transformer [128.19212716007794]
We propose a Transformer-based Multi-Agent Actor-Critic framework (T-MAAC) to stabilize voltage in power distribution networks.
In addition, we adopt a novel auxiliary-task training process tailored to the voltage control task, which improves the sample efficiency.
arXiv Detail & Related papers (2022-06-08T07:48:42Z) - Multi-Agent Reinforcement Learning for Active Voltage Control on Power
Distribution Networks [2.992389186393994]
The emerging trend of decarbonisation is placing excessive stress on power distribution networks.
Active voltage control is seen as a promising solution to relieve power congestion and improve voltage quality without extra hardware investment.
This paper formulates the active voltage control problem in the framework of Dec-POMDP and establishes an open-source environment.
arXiv Detail & Related papers (2021-10-27T09:31:22Z) - Improving Robustness of Reinforcement Learning for Power System Control
with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z) - Reinforcement Learning based Proactive Control for Transmission Grid
Resilience to Wildfire [2.944988240451469]
Power system operation during wildfires requires resiliency-driven proactive control.
We introduce an integrated testbed coupling temporal wildfire propagation with proactive power-system operation.
Our results show that the proposed approach can help the operator to reduce load loss during an extreme event.
arXiv Detail & Related papers (2021-07-12T22:04:12Z) - Stable Online Control of Linear Time-Varying Systems [49.41696101740271]
COCO-LQ is an efficient online control algorithm that guarantees input-to-state stability for a large class of LTV systems.
We empirically demonstrate the performance of COCO-LQ in both synthetic experiments and a power system frequency control example.
arXiv Detail & Related papers (2021-04-29T06:18:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.