Graph-Enhanced Model-Free Reinforcement Learning Agents for Efficient Power Grid Topological Control
- URL: http://arxiv.org/abs/2503.20688v1
- Date: Wed, 26 Mar 2025 16:20:30 GMT
- Title: Graph-Enhanced Model-Free Reinforcement Learning Agents for Efficient Power Grid Topological Control
- Authors: Eloy Anguiano Batanero, Ángela Fernández, Álvaro Barbero
- Abstract summary: This paper presents a novel approach within the model-free framework of reinforcement learning, aimed at optimizing power network operations without prior expert knowledge. We demonstrate that our approach achieves a consistent reduction in power losses, while ensuring grid stability against potential blackouts.
- Score: 0.24578723416255752
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing complexity of power grid management, driven by the emergence of prosumers and the demand for cleaner energy solutions, has necessitated innovative approaches to ensure stability and efficiency. This paper presents a novel approach within the model-free framework of reinforcement learning, aimed at optimizing power network operations without prior expert knowledge. We introduce a masked topological action space, enabling agents to explore diverse strategies for cost reduction while maintaining reliable service, using the state logic as a guide for choosing proper actions. Through extensive experimentation across 20 different scenarios in a simulated 5-substation environment, we demonstrate that our approach achieves a consistent reduction in power losses while ensuring grid stability against potential blackouts. The results underscore the effectiveness of combining dynamic observation formalization with opponent-based training, pointing to a viable path toward autonomous management solutions in modern energy systems, or even toward a foundational model for this field.
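As a rough illustration of how a masked topological action space guided by state logic can work, the sketch below derives a boolean mask over a discrete set of switching actions from the observed line loadings and restricts a greedy policy to the permitted actions. This is an assumption-laden toy, not the paper's implementation: the function names, the 0.95 loading threshold, and the action-to-line mapping are hypothetical, and the paper's agent additionally relies on dynamic observation formalization and opponent-based training that are not shown here.

```python
# Minimal sketch of state-logic-guided masking over a discrete topological
# action space. Illustrative only: names, the rho threshold, and the
# action-to-line mapping are assumptions, not the paper's implementation.
import numpy as np

def build_action_mask(line_loadings, action_affects_line, rho_threshold=0.95):
    """Boolean mask over the action set; index 0 is assumed to be 'do nothing'."""
    overloaded = np.asarray(line_loadings) >= rho_threshold
    mask = np.zeros(action_affects_line.shape[0], dtype=bool)
    mask[0] = True                                   # no-op is always permitted
    if overloaded.any():
        # permit only topology changes that touch at least one at-risk line
        mask |= action_affects_line[:, overloaded].any(axis=1)
    return mask

def masked_greedy_action(q_values, mask):
    """Highest-value action among those the state logic permits."""
    return int(np.argmax(np.where(mask, q_values, -np.inf)))

# Toy usage: 4 candidate actions (0 = do nothing) on a 3-line grid
rho = np.array([0.70, 0.98, 0.60])                   # per-line loading ratios
affects = np.array([[False, False, False],
                    [True,  False, False],
                    [False, True,  False],
                    [False, False, True]])
mask = build_action_mask(rho, affects)
print(mask, masked_greedy_action(np.array([0.1, 0.4, 0.9, 0.2]), mask))
# -> only the no-op and the action touching the overloaded line survive; action 2 is chosen
```

The design intent of such a mask is simply to prune the combinatorially large topology space to actions that are plausibly useful in the current state, which keeps exploration tractable without hand-crafted expert rules.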
Related papers
- Learning Topology Actions for Power Grid Control: A Graph-Based Soft-Label Imitation Learning Approach [1.438236614765323]
We introduce a novel Imitation Learning (IL) approach to find suitable grid topologies for congestion management. Unlike traditional IL methods that rely on hard labels to enforce a single optimal action, our method constructs soft labels over actions (a rough sketch of the soft-label loss follows below). To further enhance decision-making, we integrate Graph Neural Networks (GNNs) to encode the structural properties of power grids.
arXiv Detail & Related papers (2025-03-19T13:21:18Z)
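As a rough, hypothetical illustration of the soft-label idea above: instead of a one-hot expert action, each candidate topology action receives probability mass according to a quality score, and the policy is trained with cross-entropy against that distribution. The scoring rule, temperature, and shapes below are assumptions; in the cited paper the policy logits would come from a GNN encoding of the grid, which is not reproduced here.

```python
# Sketch of soft-label imitation: build a target distribution over candidate
# topology actions from quality scores and train with cross-entropy against it.
# Scores, temperature, and shapes are illustrative assumptions.
import numpy as np

def soft_labels(action_scores, temperature=1.0):
    """Turn per-action quality scores (higher = better) into a target distribution."""
    z = np.asarray(action_scores, dtype=float) / temperature
    z -= z.max()                       # numerical stability
    p = np.exp(z)
    return p / p.sum()

def soft_label_cross_entropy(policy_logits, target_probs):
    """Cross-entropy between the soft-label target and the policy's softmax output."""
    logits = np.asarray(policy_logits, dtype=float)
    logits -= logits.max()
    log_probs = logits - np.log(np.exp(logits).sum())
    return float(-(target_probs * log_probs).sum())

# Toy usage: three candidate topology actions, two of them nearly as good as the best
scores = [0.92, 0.90, 0.30]            # e.g. simulated post-action margin to overload
target = soft_labels(scores, temperature=0.05)
loss = soft_label_cross_entropy([2.0, 1.5, -1.0], target)
print(target.round(3), round(loss, 3))   # mass is spread over the two near-optimal actions
```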
- Towards Efficient Multi-Objective Optimisation for Real-World Power Grid Topology Control [0.1806830971023738]
We present a two-phase, efficient and scalable Multi-Objective Optimisation (MOO) method designed for grid topology control. We validate our approach using historical data from TenneT, a European Transmission System Operator (TSO). Based on current congestion costs and inefficiencies in grid operations, adoption of our approach by TSOs could potentially save millions of euros annually.
arXiv Detail & Related papers (2025-01-24T21:40:19Z)
- Optimizing Load Scheduling in Power Grids Using Reinforcement Learning and Markov Decision Processes [0.0]
This paper proposes a reinforcement learning (RL) approach to address the challenges of dynamic load scheduling.
Our results show that the RL-based method provides a robust and scalable solution for real-time load scheduling.
arXiv Detail & Related papers (2024-10-23T09:16:22Z)
- Distributed Management of Fluctuating Energy Resources in Dynamic Networked Systems [3.716849174391564]
We study the energy-sharing problem in a system consisting of several distributed energy resources (DERs).
We model this problem as a bandit convex optimization problem with constraints that correspond to each node's limitations for energy production.
We propose distributed decision-making policies to solve the formulated problem, using the notion of dynamic regret as the performance metric (the standard definition is recalled below).
arXiv Detail & Related papers (2024-05-29T11:54:11Z)
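For reference, the dynamic regret used as the performance metric above is the standard online-optimization notion comparing the learner's cumulative loss against a time-varying per-round minimizer; a generic form is given below (the constrained bandit setting of that paper adds feasibility considerations not shown here).

```latex
% Dynamic regret over horizon T with per-round losses f_t, decisions x_t,
% and feasible set \mathcal{X}:
\mathrm{Reg}^{\mathrm{dyn}}_T = \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(x_t^{\star}),
\qquad x_t^{\star} \in \arg\min_{x \in \mathcal{X}} f_t(x).
```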
- DREAM: Decentralized Reinforcement Learning for Exploration and Efficient Energy Management in Multi-Robot Systems [14.266876062352424]
Resource-constrained robots often suffer from energy inefficiencies, underutilized computational abilities due to inadequate task allocation, and a lack of robustness in dynamic environments.
This paper introduces DREAM - Decentralized Reinforcement Learning for Exploration and Efficient Energy Management in Multi-Robot Systems.
arXiv Detail & Related papers (2023-09-29T17:43:41Z)
- Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z)
- Stabilizing Voltage in Power Distribution Networks via Multi-Agent Reinforcement Learning with Transformer [128.19212716007794]
We propose a Transformer-based Multi-Agent Actor-Critic framework (T-MAAC) to stabilize voltage in power distribution networks.
In addition, we adopt a novel auxiliary-task training process tailored to the voltage control task, which improves the sample efficiency.
arXiv Detail & Related papers (2022-06-08T07:48:42Z)
- Solving AC Power Flow with Graph Neural Networks under Realistic Constraints [3.114162328765758]
We propose a graph neural network architecture to solve the AC power flow problem under realistic constraints.
In our approach, we develop a framework in which graph neural networks learn the physical constraints of the power flow (the governing equations are recalled below).
arXiv Detail & Related papers (2022-04-14T14:49:34Z)
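For context, the AC power flow problem referenced above asks for bus voltages consistent with the standard power-balance equations; in polar form, with bus admittance matrix Y = G + jB and angle differences θ_ij = θ_i - θ_j, they read:

```latex
% Active and reactive power injections at bus i:
P_i = |V_i| \sum_{j} |V_j| \left( G_{ij}\cos\theta_{ij} + B_{ij}\sin\theta_{ij} \right),
\qquad
Q_i = |V_i| \sum_{j} |V_j| \left( G_{ij}\sin\theta_{ij} - B_{ij}\cos\theta_{ij} \right).
```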
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of the RL agent against attacks and to avoid infeasible operational decisions (a toy version of the alternating training loop is sketched below).
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
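As a toy illustration of the alternating scheme behind adversarial training referenced above (not the paper's adversary-MDP formulation or its power-system environment): an operator agent and an adversary are trained in turns on a simple disturbance-rejection task, the adversary learning to destabilize the current operator and the operator hardening against the current adversary. All names, dynamics, and hyperparameters below are assumptions for illustration.

```python
# Toy adversarial training: an "operator" keeps a scalar deviation near zero
# while an "adversary" injects disturbances; both are tabular Q-learners
# trained in alternation. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, ACTIONS = 11, np.array([-1, 0, 1])        # discretized deviation levels

def step(state, operator_a, adversary_a):
    """Deviation moves by the sum of both pushes; operator reward is -|deviation|."""
    nxt = int(np.clip(state + ACTIONS[operator_a] + ACTIONS[adversary_a], 0, N_STATES - 1))
    return nxt, -abs(nxt - N_STATES // 2)

def eps_greedy(q_row, eps):
    return rng.integers(len(q_row)) if rng.random() < eps else int(np.argmax(q_row))

def q_update(q, s, a, r, s_next, lr=0.1, gamma=0.95):
    q[s, a] += lr * (r + gamma * q[s_next].max() - q[s, a])

q_op = np.zeros((N_STATES, len(ACTIONS)))
q_adv = np.zeros((N_STATES, len(ACTIONS)))

for phase in range(20):                              # alternate which player learns
    train_adversary = (phase % 2 == 1)
    for _ in range(200):
        s = rng.integers(N_STATES)
        for _ in range(20):
            a_op = eps_greedy(q_op[s], 0.1 if not train_adversary else 0.0)
            a_adv = eps_greedy(q_adv[s], 0.1 if train_adversary else 0.0)
            s_next, r = step(s, a_op, a_adv)
            if train_adversary:
                q_update(q_adv, s, a_adv, -r, s_next)    # adversary maximizes damage
            else:
                q_update(q_op, s, a_op, r, s_next)       # operator hardens against it
            s = s_next

print("operator policy per state:", ACTIONS[np.argmax(q_op, axis=1)])
```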
- A Multi-Agent Deep Reinforcement Learning Approach for a Distributed Energy Marketplace in Smart Grids [58.666456917115056]
This paper presents a Reinforcement Learning-based energy market for a prosumer-dominated microgrid.
The proposed market model facilitates a real-time and demand-dependent dynamic pricing environment, which reduces grid costs and improves the economic benefits for prosumers.
arXiv Detail & Related papers (2020-09-23T02:17:51Z)
- Efficient Empowerment Estimation for Unsupervised Stabilization [75.32013242448151]
The empowerment principle, recalled below, enables unsupervised stabilization of dynamical systems at upright positions.
We propose an alternative solution based on a trainable representation of a dynamical system as a Gaussian channel.
We show that our method has a lower sample complexity, is more stable in training, possesses the essential properties of the empowerment function, and allows estimation of empowerment from images.
arXiv Detail & Related papers (2020-07-14T21:10:16Z)
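For reference, empowerment is the channel capacity from an agent's action sequence to its resulting state; the Gaussian-channel representation in the work above makes estimating this mutual-information maximum tractable. The standard definition is:

```latex
% Empowerment of state s: maximal mutual information between a k-step
% action sequence A^k and the resulting state S'.
\mathcal{E}(s) = \max_{p(a^{k})} \; I\!\left(A^{k}; S' \mid s\right).
```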
- NeurOpt: Neural network based optimization for building energy management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce the cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
arXiv Detail & Related papers (2020-01-22T00:51:03Z)