PowRL: A Reinforcement Learning Framework for Robust Management of Power
Networks
- URL: http://arxiv.org/abs/2212.02397v2
- Date: Thu, 20 Apr 2023 04:44:07 GMT
- Authors: Anandsingh Chauhan, Mayank Baranwal, Ansuma Basumatary
- Abstract summary: This paper presents a reinforcement learning framework, PowRL, to mitigate the effects of unexpected network events.
PowRL is benchmarked on a variety of competition datasets hosted by the L2RPN (Learning to Run a Power Network) challenge.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Power grids across the world play an important societal and economic role
by providing uninterrupted, reliable, and transient-free power to industries,
businesses, and household consumers. With the advent of renewable power resources
and EVs resulting in uncertain generation and highly dynamic load demands, it has
become ever more important to ensure robust operation of power networks through
suitable management of transient stability issues and localization of blackout
events. In light of the ever-increasing stress on modern grid infrastructure and
grid operators, this paper presents a reinforcement learning (RL) framework,
PowRL, to mitigate the effects of unexpected network events and reliably maintain
electricity everywhere on the network at all times. PowRL leverages a novel
heuristic for overload management, along with RL-guided decision making on optimal
topology selection, to ensure that the grid is operated safely and reliably (with
no overloads). PowRL is benchmarked on a variety of competition datasets hosted by
the L2RPN (Learning to Run a Power Network) challenge. Even with its reduced action
space, PowRL tops the leaderboard in the L2RPN NeurIPS 2020 challenge
(Robustness track) at an aggregate level, while also being the top-performing
agent in the L2RPN WCCI 2020 challenge. Moreover, detailed analysis shows
state-of-the-art performance by the PowRL agent in some of the test scenarios.
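The abstract's two-part design (a fast overload heuristic deciding when to act, and an RL policy choosing a topology action from a reduced action space) can be illustrated with a minimal, hypothetical sketch. The threshold value, class names, and the random stand-in policy below are illustrative assumptions, not the authors' implementation:

```python
import random

# Hypothetical sketch of a PowRL-style control loop (not the paper's code):
# a cheap overload heuristic decides WHEN to intervene, and an RL policy
# chooses WHICH topology action to take from a reduced action space.

OVERLOAD_THRESHOLD = 0.95  # assumed: act when any line exceeds 95% loading

def needs_intervention(line_loadings):
    """Heuristic trigger: intervene only if some line is near its limit."""
    return max(line_loadings) > OVERLOAD_THRESHOLD

class TopologyPolicy:
    """Stand-in for a trained RL policy over a reduced topology action set."""
    def __init__(self, n_actions, seed=0):
        self.n_actions = n_actions
        self.rng = random.Random(seed)

    def select_action(self, observation):
        # A trained agent would score actions given the observation;
        # here we pick at random to keep the sketch self-contained.
        return self.rng.randrange(self.n_actions)

def control_step(policy, line_loadings):
    """One decision step: return None ("do nothing") unless the heuristic fires."""
    if not needs_intervention(line_loadings):
        return None  # grid is safe; avoid unnecessary switching
    return policy.select_action(line_loadings)

policy = TopologyPolicy(n_actions=10)
print(control_step(policy, [0.4, 0.6, 0.5]))  # safe grid -> None
print(control_step(policy, [0.4, 1.1, 0.5]))  # overload -> some action index
```

In the actual L2RPN setting the observation and actions would come from the Grid2Op environment; the sketch only captures the heuristic-gated decision structure.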
Related papers
- Unsupervised-to-Online Reinforcement Learning [59.910638327123394]
Unsupervised-to-online RL (U2O RL) replaces domain-specific supervised offline RL with unsupervised offline RL.
U2O RL not only enables reusing a single pre-trained model for multiple downstream tasks, but also learns better representations.
We empirically demonstrate that U2O RL achieves strong performance that matches or even outperforms previous offline-to-online RL approaches.
arXiv Detail & Related papers (2024-08-27T05:23:45Z) - SafePowerGraph: Safety-aware Evaluation of Graph Neural Networks for Transmission Power Grids [55.35059657148395]
We present SafePowerGraph, the first simulator-agnostic, safety-oriented framework and benchmark for Graph Neural Networks (GNNs) in power systems (PS) operations.
SafePowerGraph integrates multiple PF and OPF simulators and assesses GNN performance under diverse scenarios, including energy price variations and power line outages.
arXiv Detail & Related papers (2024-07-17T09:01:38Z) - Multi-Agent Reinforcement Learning for Power Grid Topology Optimization [45.74830585715129]
This paper presents a hierarchical multi-agent reinforcement learning (MARL) framework tailored for expansive action spaces.
Experimental results indicate the MARL framework's competitive performance with single-agent RL methods.
We also compare different RL algorithms for lower-level agents alongside different policies for higher-level agents.
arXiv Detail & Related papers (2023-10-04T06:37:43Z) - Reinforcement Learning Based Power Grid Day-Ahead Planning and
AI-Assisted Control [0.27998963147546135]
We introduce a congestion management approach consisting of a redispatching agent and a machine learning-based optimization agent.
Compared to a typical redispatching-only agent, it was able to keep a simulated grid in operation longer while at the same time reducing operational cost.
The aim of this paper is to bring this promising technology closer to the real world of power grid operation.
arXiv Detail & Related papers (2023-02-15T13:38:40Z) - MERLIN: Multi-agent offline and transfer learning for occupant-centric
energy flexible operation of grid-interactive communities using smart meter
data and CityLearn [0.0]
Decarbonization of buildings presents new challenges for the reliability of the electrical grid.
We propose the MERLIN framework and use a digital twin of a real-world grid-interactive residential community in CityLearn.
We show that independent RL controllers for batteries improve building- and district-level performance compared to a reference controller by tailoring their policies to individual buildings.
arXiv Detail & Related papers (2022-12-31T21:37:14Z) - Distributed Energy Management and Demand Response in Smart Grids: A
Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z) - Reinforcement learning for Energies of the future and carbon neutrality:
a Challenge Design [0.0]
This challenge belongs to a series started in 2019 under the name "Learning to run a power network" (L2RPN)
We introduce new more realistic scenarios proposed by RTE to reach carbon neutrality by 2050.
We provide a baseline using a state-of-the-art reinforcement learning algorithm to stimulate future participants.
arXiv Detail & Related papers (2022-07-21T06:56:46Z) - Improving Robustness of Reinforcement Learning for Power System Control
with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z) - Learning to run a Power Network Challenge: a Retrospective Analysis [6.442347402316506]
We have designed a L2RPN challenge to encourage the development of reinforcement learning solutions to key problems in the next-generation power networks.
The main contribution of this challenge is our proposed comprehensive Grid2Op framework, and associated benchmark.
We present the benchmark suite and analyse the winning solutions of the challenge, observing super-human performance by the best agent.
arXiv Detail & Related papers (2021-03-02T09:52:24Z) - Cognitive Radio Network Throughput Maximization with Deep Reinforcement
Learning [58.44609538048923]
Radio Frequency powered Cognitive Radio Networks (RF-CRN) are likely to be the eyes and ears of upcoming modern networks such as the Internet of Things (IoT).
To be considered autonomous, the RF-powered network entities need to make decisions locally to maximize the network throughput under the uncertainty of any network environment.
In this paper, deep reinforcement learning is proposed to overcome the shortcomings and allow a wireless gateway to derive an optimal policy to maximize network throughput.
arXiv Detail & Related papers (2020-07-07T01:49:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.