Managing power grids through topology actions: A comparative study
between advanced rule-based and reinforcement learning agents
- URL: http://arxiv.org/abs/2304.00765v2
- Date: Mon, 17 Apr 2023 14:28:36 GMT
- Title: Managing power grids through topology actions: A comparative study
between advanced rule-based and reinforcement learning agents
- Authors: Malte Lehna and Jan Viebahn and Christoph Scholz and Antoine Marot and
Sven Tomforde
- Abstract summary: Operation of electricity grids has become increasingly complex due to the current upheaval and the increase in renewable energy production.
It has been shown that Reinforcement Learning is an efficient and reliable approach with considerable potential for automatic grid operation.
In this article, we analyse the submitted agent from Binbinchen and provide novel strategies to improve the agent, both for the RL and the rule-based approach.
- Score: 1.8549313085249322
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The operation of electricity grids has become increasingly complex due to the
current upheaval and the increase in renewable energy production. As a
consequence, active grid management is reaching its limits with conventional
approaches. In the context of the Learning to Run a Power Network challenge, it
has been shown that Reinforcement Learning (RL) is an efficient and reliable
approach with considerable potential for automatic grid operation. In this
article, we analyse the submitted agent from Binbinchen and provide novel
strategies to improve the agent, both for the RL and the rule-based approach.
The main improvement is an N-1 strategy, where we consider topology actions that
keep the grid stable even if one line is disconnected. Moreover, we propose a
topology reversion to the original grid, which proved to be beneficial. The
improvements are tested against reference approaches on the challenge test sets
and increase the performance of the rule-based agent by 27%. In a direct
comparison between the rule-based and the RL agent, we find similar performance.
However, the RL agent has a clear computational advantage. We also analyse the
behaviour in an exemplary case in more detail to provide additional insights.
Here, we observe that through the N-1 strategy, the actions of the agents
become more diversified.
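The N-1 strategy described above lends itself to a short sketch: a candidate topology action is accepted only if the grid also stays within its thermal limits under every single-line disconnection. The sketch below is an assumption-level illustration, not the authors' code: `ToyGridSim`, its even-redistribution outage model, and the scalar `relief` parameter are invented stand-ins for a real simulator such as Grid2Op, where each contingency would instead be checked with a simulation step and line loadings (rho) are expressed relative to the thermal limit.

```python
# Hedged sketch of an N-1 screening step (illustration only; the simulator
# and its redistribution model are invented, not the paper's implementation).

class ToyGridSim:
    def __init__(self, base_rho):
        # base_rho[i]: loading of line i on the intact grid (1.0 = thermal limit)
        self.base_rho = base_rho
        self.n_lines = len(base_rho)

    def rho_after(self, relief, outage=None):
        """Line loadings after a topology action that relieves every line's
        loading by the fraction `relief`, with one optional line outage.
        The lost line's flow is redistributed evenly (toy model)."""
        alive = [i for i in range(self.n_lines) if i != outage]
        extra = self.base_rho[outage] / len(alive) if outage is not None else 0.0
        return [self.base_rho[i] * (1.0 - relief) + extra for i in alive]

def passes_n1(sim, relief, rho_max=1.0):
    """Accept the action only if every line stays below rho_max on the
    intact grid AND under every single-line disconnection."""
    contingencies = [None] + list(range(sim.n_lines))
    return all(max(sim.rho_after(relief, out)) < rho_max for out in contingencies)

sim = ToyGridSim(base_rho=[0.7, 0.8, 0.6])
print(passes_n1(sim, relief=0.0))  # → False: safe now, but not N-1 safe
print(passes_n1(sim, relief=0.3))  # → True: survives every single-line outage
```

The accept/reject loop is the essential part; in a Grid2Op-style setting the inner call would be one power-flow simulation per contingency, which is what makes N-1 screening computationally heavier than plain overload checks.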
Related papers
- State and Action Factorization in Power Grids [47.65236082304256]
We propose a domain-agnostic algorithm that estimates correlations between state and action components entirely based on data.
The algorithm is validated on a power grid benchmark obtained with the Grid2Op simulator.
arXiv Detail & Related papers (2024-09-03T15:00:58Z)
- Imitation Learning for Intra-Day Power Grid Operation through Topology Actions [0.24578723416255752]
We study the performance of imitation learning for day-ahead power grid operation through topology actions.
We train a fully-connected neural network (FCNN) on expert state-action pairs and evaluate it in two ways.
As a power system agent, the FCNN performs only slightly worse than expert agents.
arXiv Detail & Related papers (2024-07-29T10:34:19Z)
- Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z)
- Multi-Agent Reinforcement Learning for Power Grid Topology Optimization [45.74830585715129]
This paper presents a hierarchical multi-agent reinforcement learning (MARL) framework tailored for expansive action spaces.
Experimental results indicate the MARL framework's competitive performance with single-agent RL methods.
We also compare different RL algorithms for lower-level agents alongside different policies for higher-order agents.
arXiv Detail & Related papers (2023-10-04T06:37:43Z)
- Reinforcement Learning for Resilient Power Grids [0.23204178451683263]
Traditional power grid systems have become obsolete under more frequent and extreme natural disasters.
Most power grid simulators and RL interfaces do not support simulating the grid under large-scale blackouts or when the network is divided into sub-networks.
In this study, we propose an updated power grid simulator built on Grid2Op, an existing simulator and RL interface, and experiment with limiting the action and observation spaces of Grid2Op.
arXiv Detail & Related papers (2022-12-08T04:40:14Z)
- Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [112.63440666617494]
Reinforcement learning algorithms can succeed but require large amounts of interaction between the agent and the environment.
We propose a new method that uses unsupervised model-based RL to pre-train the agent.
We show robust performance on the Real-World RL benchmark, hinting at resilience to environment perturbations during adaptation.
arXiv Detail & Related papers (2022-09-24T14:22:29Z)
- Curriculum Based Reinforcement Learning of Grid Topology Controllers to Prevent Thermal Cascading [0.19116784879310028]
This paper describes how domain knowledge of power system operators can be integrated into reinforcement learning frameworks.
A curriculum-based approach with reward tuning is incorporated into the training procedure by modifying the environment.
A parallel training approach on multiple scenarios is employed to avoid biasing the agent to a few scenarios and make it robust to the natural variability in grid operations.
arXiv Detail & Related papers (2021-12-18T20:32:05Z)
- Edge Rewiring Goes Neural: Boosting Network Resilience via Policy Gradient [62.660451283548724]
ResiNet is a reinforcement learning framework to discover resilient network topologies against various disasters and attacks.
We show that ResiNet achieves a near-optimal resilience gain on multiple graphs while balancing the utility, with a large margin compared to existing approaches.
arXiv Detail & Related papers (2021-10-18T06:14:28Z)
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
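The attack side of the adversarial-training entry above can be illustrated with a toy sketch. Everything here is invented for illustration (the one-dimensional load signal, the threshold policy, and the exhaustive two-point adversary); the paper's adversary is a learned attack policy obtained from an adversary Markov Decision Process, not this search.

```python
# Toy illustration: an adversary with a small observation budget eps flips a
# brittle policy's decisions near its threshold, degrading its reward.
# All names and the setting are hypothetical, not the paper's implementation.

def policy(obs, threshold):
    """Toy defender: intervene (1) when the perceived load exceeds threshold."""
    return 1 if obs > threshold else 0

def worst_case_perturbation(obs, threshold, eps):
    """Toy adversary: within budget eps, try to flip the policy's decision."""
    for delta in (-eps, eps):
        if policy(obs + delta, threshold) != policy(obs, threshold):
            return delta
    return 0.0

def reward(true_load, action):
    """The correct action is to intervene exactly when the true load > 0.5."""
    return 1.0 if action == (1 if true_load > 0.5 else 0) else 0.0

def adversarial_eval(threshold, loads, eps):
    """Average reward when every observation is adversarially perturbed."""
    return sum(
        reward(load,
               policy(load + worst_case_perturbation(load, threshold, eps),
                      threshold))
        for load in loads
    ) / len(loads)

loads = [0.1, 0.3, 0.48, 0.52, 0.7, 0.9]
print(adversarial_eval(0.5, loads, eps=0.0))   # → 1.0 (clean performance)
print(adversarial_eval(0.5, loads, eps=0.05))  # boundary observations flipped
```

Adversarial training, in this picture, would optimize the defender against `worst_case_perturbation` instead of against clean observations, trading some clean performance for robustness near the decision boundary.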
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.