Centrally Coordinated Multi-Agent Reinforcement Learning for Power Grid Topology Control
- URL: http://arxiv.org/abs/2502.08681v1
- Date: Wed, 12 Feb 2025 10:16:06 GMT
- Title: Centrally Coordinated Multi-Agent Reinforcement Learning for Power Grid Topology Control
- Authors: Barbera de Mol, Davide Barbieri, Jan Viebahn, Davide Grossi
- Abstract summary: Action space factorization breaks down decision-making into smaller sub-tasks.
The CCMA architecture exhibits higher sample efficiency and better final performance than the baseline approaches.
The results suggest high potential of the CCMA approach for further application in higher-dimensional L2RPN settings as well as real-world power grids.
- Score: 4.949816699298336
- Abstract: Power grid operation is becoming more complex due to the increase in generation of renewable energy. The recent series of Learning To Run a Power Network (L2RPN) competitions have encouraged the use of artificial agents to assist human dispatchers in operating power grids. However, the combinatorial nature of the action space poses a challenge to both conventional optimizers and learned controllers. Action space factorization, which breaks down decision-making into smaller sub-tasks, is one approach to tackle the curse of dimensionality. In this study, we propose a centrally coordinated multi-agent (CCMA) architecture for action space factorization. In this approach, regional agents propose actions and subsequently a coordinating agent selects the final action. We investigate several implementations of the CCMA architecture, and benchmark in different experimental settings against various L2RPN baseline approaches. The CCMA architecture exhibits higher sample efficiency and superior final performance than the baseline approaches. The results suggest high potential of the CCMA approach for further application in higher-dimensional L2RPN as well as real-world power grid settings.
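The propose-then-select loop described in the abstract can be sketched as follows. All class names, the observation format, and the coordinator's scoring heuristic are illustrative assumptions for this sketch, not the paper's actual implementation:

```python
# Minimal sketch of the centrally coordinated multi-agent (CCMA) idea:
# regional agents each propose an action for their own part of the grid,
# then a coordinating agent selects the single final action.
import random
from typing import Dict, List


class RegionalAgent:
    """Proposes an action restricted to one region's substations (illustrative)."""

    def __init__(self, region_id: int, action_space: List[str]):
        self.region_id = region_id
        self.action_space = action_space

    def propose(self, observation: Dict) -> str:
        # A trained policy would rank actions; this stub samples one.
        return random.choice(self.action_space)


class Coordinator:
    """Selects one final action from the regional proposals (illustrative)."""

    def select(self, observation: Dict, proposals: List[str]) -> str:
        # A learned coordinator would score proposals jointly;
        # this stub uses a placeholder lookup as its "value estimate".
        return max(proposals, key=lambda a: self.score(observation, a))

    def score(self, observation: Dict, action: str) -> float:
        return observation.get("load", {}).get(action, 0.0)


def ccma_step(observation: Dict, agents: List[RegionalAgent],
              coordinator: Coordinator) -> str:
    """One decision step: gather regional proposals, pick the final action."""
    proposals = [agent.propose(observation) for agent in agents]
    return coordinator.select(observation, proposals)
```

The key design point is the narrow interface: regional agents never see each other, and only the coordinator reasons over the grid as a whole.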
Related papers
- State and Action Factorization in Power Grids [47.65236082304256]
We propose a domain-agnostic algorithm that estimates correlations between state and action components entirely based on data.
The algorithm is validated on a power grid benchmark obtained with the Grid2Op simulator.
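The data-driven idea summarized above can be illustrated with a small sketch: estimate which state components correlate with which action components purely from logged (state, action) samples, then group them. The function names and the fixed correlation threshold are assumptions for illustration, not the cited paper's algorithm:

```python
# Hedged sketch: estimate state-action correlations from data and use them
# to suggest a factorization of the problem into weakly coupled groups.
import numpy as np


def state_action_correlations(states: np.ndarray, actions: np.ndarray) -> np.ndarray:
    """Return an (n_state_dims, n_action_dims) matrix of absolute Pearson correlations.

    states:  (n_samples, n_state_dims)
    actions: (n_samples, n_action_dims)
    """
    n_s = states.shape[1]
    # corrcoef with rowvar=False treats columns as variables and stacks
    # states and actions into one (n_s + n_a)-variable matrix.
    full = np.corrcoef(states, actions, rowvar=False)
    return np.abs(full[:n_s, n_s:])


def factorize(corr: np.ndarray, threshold: float = 0.5):
    """Group each action dimension with the state dims it correlates with."""
    return [np.flatnonzero(corr[:, j] >= threshold) for j in range(corr.shape[1])]
```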
arXiv Detail & Related papers (2024-09-03T15:00:58Z)
- Design Optimization of NOMA Aided Multi-STAR-RIS for Indoor Environments: A Convex Approximation Imitated Reinforcement Learning Approach [51.63921041249406]
Non-orthogonal multiple access (NOMA) enables multiple users to share the same frequency band, aided by a simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS).
However, deploying STAR-RIS indoors presents challenges in interference mitigation, power consumption, and real-time configuration.
A novel network architecture utilizing multiple access points (APs), STAR-RISs, and NOMA is proposed for indoor communication.
arXiv Detail & Related papers (2024-06-19T07:17:04Z)
- Multi-Agent Reinforcement Learning for Power Grid Topology Optimization [45.74830585715129]
This paper presents a hierarchical multi-agent reinforcement learning (MARL) framework tailored for expansive action spaces.
Experimental results indicate the MARL framework's competitive performance with single-agent RL methods.
We also compare different RL algorithms for the lower-level agents alongside different policies for the higher-level agents.
arXiv Detail & Related papers (2023-10-04T06:37:43Z)
- Reinforcement Learning Based Power Grid Day-Ahead Planning and AI-Assisted Control [0.27998963147546135]
We introduce a congestion management approach consisting of a redispatching agent and a machine learning-based optimization agent.
Compared to a typical redispatching-only agent, it was able to keep a simulated grid in operation longer while at the same time reducing operational cost.
The aim of this paper is to bring this promising technology closer to the real world of power grid operation.
arXiv Detail & Related papers (2023-02-15T13:38:40Z)
- Stabilizing Voltage in Power Distribution Networks via Multi-Agent Reinforcement Learning with Transformer [128.19212716007794]
We propose a Transformer-based Multi-Agent Actor-Critic framework (T-MAAC) to stabilize voltage in power distribution networks.
In addition, we adopt a novel auxiliary-task training process tailored to the voltage control task, which improves the sample efficiency.
arXiv Detail & Related papers (2022-06-08T07:48:42Z)
- Joint Energy Dispatch and Unit Commitment in Microgrids Based on Deep Reinforcement Learning [6.708717040312532]
In this paper, deep reinforcement learning (DRL) is applied to learn an optimal policy for making joint energy dispatch (ED) and unit commitment (UC) decisions in an isolated microgrid.
We propose a DRL algorithm, i.e., the hybrid action finite-horizon DDPG (HAFH-DDPG), that seamlessly integrates two classical DRL algorithms.
A diesel generator (DG) selection strategy is presented to support a simplified action space for reducing the computation complexity of this algorithm.
arXiv Detail & Related papers (2022-06-03T16:22:03Z)
- Optimization for Master-UAV-powered Auxiliary-Aerial-IRS-assisted IoT Networks: An Option-based Multi-agent Hierarchical Deep Reinforcement Learning Approach [56.84948632954274]
This paper investigates a master unmanned aerial vehicle (MUAV)-powered Internet of Things (IoT) network.
We propose using a rechargeable auxiliary UAV (AUAV) equipped with an intelligent reflecting surface (IRS) to enhance the communication signals from the MUAV.
Under the proposed model, we investigate the optimal collaboration strategy of these energy-limited UAVs to maximize the accumulated throughput of the IoT network.
arXiv Detail & Related papers (2021-12-20T15:45:28Z)
- Scalable Voltage Control using Structure-Driven Hierarchical Deep Reinforcement Learning [0.0]
This paper presents a novel hierarchical deep reinforcement learning (DRL) based design for the voltage control of power grids.
We exploit the area-wise division structure of the power system to propose a hierarchical DRL design that can be scaled to larger grid models.
We train area-wise decentralized RL agents to compute lower-level policies for the individual areas, and concurrently train a higher-level DRL agent that uses the updates of the lower-level policies to efficiently coordinate the control actions taken by the lower-level agents.
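The two-level structure summarized above can be sketched as follows: decentralized lower-level agents each control one area, while a higher-level agent coordinates them. The class names and the setpoint-based interface are assumptions for illustration, not the paper's design:

```python
# Illustrative sketch of hierarchical, area-wise control: a higher-level
# agent assigns per-area targets, and lower-level agents act locally.
from typing import Dict, List


class AreaAgent:
    """Lower-level policy acting only on its own area of the grid (illustrative)."""

    def __init__(self, area: str):
        self.area = area

    def act(self, local_obs: float, setpoint: float) -> float:
        # A trained policy would map (observation, setpoint) to an action;
        # this stub applies simple proportional control toward the setpoint.
        return 0.5 * (setpoint - local_obs)


class HighLevelAgent:
    """Higher-level policy choosing per-area targets (illustrative)."""

    def coordinate(self, global_obs: Dict[str, float]) -> Dict[str, float]:
        # Stub: steer every area toward a nominal 1.0 p.u. voltage.
        return {area: 1.0 for area in global_obs}


def hierarchical_step(global_obs: Dict[str, float], areas: List[AreaAgent],
                      top: HighLevelAgent) -> Dict[str, float]:
    """One step: the top agent sets targets, area agents act locally."""
    setpoints = top.coordinate(global_obs)
    return {a.area: a.act(global_obs[a.area], setpoints[a.area]) for a in areas}
```

The scalability argument is that each lower-level agent's observation and action stay local, so adding areas grows the problem linearly rather than combinatorially.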
arXiv Detail & Related papers (2021-01-29T21:30:59Z)
- F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible, fully decentralized actor-critic MARL framework that can handle large-scale general cooperative multi-agent settings.
Our framework achieves scalability and stability in large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.