Transmission Interface Power Flow Adjustment: A Deep Reinforcement Learning Approach based on Multi-task Attribution Map
- URL: http://arxiv.org/abs/2405.15831v1
- Date: Fri, 24 May 2024 08:20:53 GMT
- Title: Transmission Interface Power Flow Adjustment: A Deep Reinforcement Learning Approach based on Multi-task Attribution Map
- Authors: Shunyu Liu, Wei Luo, Yanzhen Zhou, Kaixuan Chen, Quan Zhang, Huating Xu, Qinglai Guo, Mingli Song
- Abstract summary: We introduce a novel data-driven deep reinforcement learning (DRL) approach to handle multiple power flow adjustment tasks jointly.
At the heart of the proposed method is a multi-task attribution map (MAM), which enables the DRL agent to explicitly attribute each transmission interface task to different power system nodes.
Based on this MAM, the agent can further provide effective strategies to solve the multi-task adjustment problem with a near-optimal operation cost.
- Score: 33.929818014940054
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transmission interface power flow adjustment is a critical measure to ensure the secure and economical operation of power systems. However, conventional model-based adjustment schemes are limited by the increasing variations and uncertainties that occur in power systems, where the adjustment problems of different transmission interfaces are often treated as several independent tasks, ignoring their coupling relationships and even leading to conflicting decisions. In this paper, we introduce a novel data-driven deep reinforcement learning (DRL) approach to handle multiple power flow adjustment tasks jointly instead of learning each task from scratch. At the heart of the proposed method is a multi-task attribution map (MAM), which enables the DRL agent to explicitly attribute each transmission interface task to different power system nodes with task-adaptive attention weights. Based on this MAM, the agent can further provide effective strategies to solve the multi-task adjustment problem with near-optimal operation cost. Simulation results on the IEEE 118-bus system, a realistic 300-bus system in China, and a very large European system with 9241 buses demonstrate that the proposed method significantly outperforms several baseline methods and exhibits high interpretability thanks to the learnable MAM.
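As a rough illustration of the task-adaptive attention idea behind the MAM (the page gives no code), the sketch below assumes node embeddings produced by some upstream graph encoder; all names, dimensions, and layer choices are hypothetical rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiTaskAttributionMap(nn.Module):
    """Illustrative sketch: one attention weight per (task, node) pair.

    Assumes node embeddings come from an upstream graph encoder; the
    dimensions and layer choices are hypothetical, not the paper's code.
    """

    def __init__(self, node_dim: int, task_dim: int, n_tasks: int):
        super().__init__()
        self.task_embed = nn.Embedding(n_tasks, task_dim)   # one query per interface task
        self.query_proj = nn.Linear(task_dim, node_dim)
        self.key_proj = nn.Linear(node_dim, node_dim)

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        # node_feats: (n_nodes, node_dim) from a graph encoder over the grid topology
        queries = self.query_proj(self.task_embed.weight)    # (n_tasks, node_dim)
        keys = self.key_proj(node_feats)                     # (n_nodes, node_dim)
        scores = queries @ keys.t() / keys.shape[-1] ** 0.5  # (n_tasks, n_nodes)
        return torch.softmax(scores, dim=-1)                 # attribution map: rows sum to 1

# Usage: weight the shared node features per task before the policy head.
mam = MultiTaskAttributionMap(node_dim=64, task_dim=32, n_tasks=3)
node_feats = torch.randn(118, 64)            # e.g. IEEE 118-bus system
attribution = mam(node_feats)                # (3, 118) task-to-node weights
task_context = attribution @ node_feats      # (3, 64) task-specific summaries
```

In this sketch each row of the map is a softmax over nodes, so one can read off which buses carry the most weight for a given transmission interface task, which is the interpretability property the abstract highlights.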
Related papers
- Reinforcement Learning with Model Predictive Control for Highway Ramp Metering [14.389086937116582]
This work explores the synergy between model-based and learning-based strategies to enhance traffic flow management.
The control problem is formulated as an RL task by crafting a suitable stage cost function.
An MPC-based RL approach, which leverages the MPC optimization problem as a function approximator for the RL algorithm, is proposed to learn to efficiently control an on-ramp.
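To make the idea of using a finite-horizon optimal control problem as the RL function approximator concrete, here is a toy sketch on an invented single-queue ramp model: the optimal cost of a short lookahead serves as the value estimate, and the stage-cost weights `theta` are what an RL algorithm would tune. The dynamics, demand, and metering rates are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def lookahead_cost(q, theta, horizon, rates=(0.0, 0.5, 1.0), demand=0.6):
    """Toy sketch: optimal cost of a short-horizon lookahead, used as a
    value-function approximation; theta would be tuned by the RL algorithm."""
    if horizon == 0:
        return 0.0
    best = np.inf
    for r in rates:                                        # candidate metering rates
        q_next = max(q + demand - r, 0.0)                  # toy queue dynamics
        stage = theta[0] * q_next + theta[1] * (1.0 - r)   # queue length vs. throughput
        best = min(best, stage + lookahead_cost(q_next, theta, horizon - 1))
    return best

def policy(q, theta, rates=(0.0, 0.5, 1.0), demand=0.6):
    """Greedy policy: first-step rate minimizing stage cost plus lookahead value."""
    def q_value(r):
        q_next = max(q + demand - r, 0.0)
        stage = theta[0] * q_next + theta[1] * (1.0 - r)
        return stage + lookahead_cost(q_next, theta, horizon=4)
    return min(rates, key=q_value)

print(policy(q=3.0, theta=np.array([1.0, 0.2])))
```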
arXiv Detail & Related papers (2023-11-15T09:50:54Z)
- Stabilizing Voltage in Power Distribution Networks via Multi-Agent Reinforcement Learning with Transformer [128.19212716007794]
We propose a Transformer-based Multi-Agent Actor-Critic framework (T-MAAC) to stabilize voltage in power distribution networks.
In addition, we adopt a novel auxiliary-task training process tailored to the voltage control task, which improves the sample efficiency.
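A minimal sketch of the kind of transformer actor with an auxiliary prediction head this summary describes might look as follows; the observation dimensions, number of layers, and auxiliary target (here a per-agent voltage prediction) are assumptions, not details taken from T-MAAC.

```python
import torch
import torch.nn as nn

class TransformerActor(nn.Module):
    """Illustrative sketch of a transformer actor with an auxiliary head.
    Shapes and the auxiliary target are hypothetical, not T-MAAC's."""

    def __init__(self, obs_dim: int, act_dim: int, d_model: int = 64):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.policy_head = nn.Linear(d_model, act_dim)   # per-agent action logits
        self.aux_head = nn.Linear(d_model, 1)            # e.g. predict next bus voltage

    def forward(self, obs):                  # obs: (batch, n_agents, obs_dim)
        h = self.encoder(self.embed(obs))    # agents attend to each other
        return self.policy_head(h), self.aux_head(h)

actor = TransformerActor(obs_dim=10, act_dim=5)
logits, aux = actor(torch.randn(2, 8, 10))   # batch of 2, 8 agents
```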
arXiv Detail & Related papers (2022-06-08T07:48:42Z)
- Over-the-Air Federated Multi-Task Learning via Model Sparsification and Turbo Compressed Sensing [48.19771515107681]
We propose an over-the-air FMTL framework, where multiple learning tasks deployed on edge devices share a non-orthogonal fading channel under the coordination of an edge server.
In OA-FMTL, the local updates of edge devices are sparsified, compressed, and then sent over the uplink channel in a superimposed fashion.
We analyze the performance of the proposed OA-FMTL framework together with the M-Turbo-CS algorithm.
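A toy sketch of the sparsify-then-superimpose step (omitting the compression and turbo compressed sensing reconstruction) could look like this; the channel model and power scaling are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparsify(update, k):
    """Keep only the k largest-magnitude entries (illustrative top-k sparsification)."""
    out = np.zeros_like(update)
    idx = np.argsort(np.abs(update))[-k:]
    out[idx] = update[idx]
    return out

# Toy over-the-air aggregation: each device's sparsified update is pre-scaled by the
# inverse of its (assumed known) fading gain, then all updates are superimposed on
# the same uplink channel; the server sees their noisy sum.
d, n_devices, k = 100, 4, 10
updates = [rng.normal(size=d) for _ in range(n_devices)]
gains = rng.uniform(0.5, 1.5, size=n_devices)                 # per-device fading gains
tx = [sparsify(u, k) / g for u, g in zip(updates, gains)]
received = sum(g * x for g, x in zip(gains, tx)) + 0.01 * rng.normal(size=d)
aggregate = received / n_devices                              # estimate of the mean update
```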
arXiv Detail & Related papers (2022-05-08T08:03:52Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- A Modular and Transferable Reinforcement Learning Framework for the Fleet Rebalancing Problem [2.299872239734834]
We propose a modular framework for fleet rebalancing based on model-free reinforcement learning (RL).
We formulate RL state and action spaces as distributions over a grid of the operating area, making the framework scalable.
Numerical experiments, using real-world trip and network data, demonstrate that this approach has several distinct advantages over baseline methods.
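A minimal sketch of representing the RL state as a distribution over a grid of the operating area, under assumed area bounds and grid size, might look as follows.

```python
import numpy as np

def to_grid_distribution(positions, bounds, shape=(8, 8)):
    """Illustrative: map vehicle (x, y) positions to a normalized distribution
    over a grid of the operating area; grid size and bounds are assumptions."""
    (xmin, xmax), (ymin, ymax) = bounds
    hist, _, _ = np.histogram2d(
        positions[:, 0], positions[:, 1],
        bins=shape, range=[[xmin, xmax], [ymin, ymax]],
    )
    return hist / max(hist.sum(), 1.0)   # state: fraction of the fleet per cell

rng = np.random.default_rng(1)
fleet = rng.uniform(0.0, 10.0, size=(200, 2))            # 200 vehicles in a 10x10 km area
state = to_grid_distribution(fleet, bounds=((0, 10), (0, 10)))
# An action could likewise be expressed as a target distribution over the same grid.
```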
arXiv Detail & Related papers (2021-05-27T16:32:28Z)
- Joint Resource Management for MC-NOMA: A Deep Reinforcement Learning Approach [39.54978539962088]
This paper presents a novel and effective deep reinforcement learning (DRL)-based approach to joint resource management (JRM).
In a practical multi-carrier non-orthogonal multiple access (MC-NOMA) system, hardware sensitivity and imperfect successive interference cancellation (SIC) are considered.
We show that the proposed DRL-JRM scheme is superior to existing alternatives in terms of system throughput and resistance to interference.
arXiv Detail & Related papers (2021-03-29T06:52:19Z)
- UPDeT: Universal Multi-agent Reinforcement Learning via Policy Decoupling with Transformers [108.92194081987967]
We make the first attempt to explore a universal multi-agent reinforcement learning pipeline, designing a single architecture to fit different tasks.
Unlike previous RNN-based models, we utilize a transformer-based model to generate a flexible policy.
The proposed model, named Universal Policy Decoupling Transformer (UPDeT), further relaxes the action restriction and makes the multi-agent task's decision process more explainable.
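A rough sketch of a transformer policy over per-entity observation tokens with decoupled per-entity action outputs, loosely in the spirit described above, is shown below; the token layout, dimensions, and heads are assumptions rather than UPDeT's actual architecture.

```python
import torch
import torch.nn as nn

class EntityTransformerPolicy(nn.Module):
    """Illustrative sketch of a policy over per-entity observation tokens;
    shapes and head structure are hypothetical, not UPDeT's."""

    def __init__(self, entity_dim: int, d_model: int = 64, n_basic_actions: int = 6):
        super().__init__()
        self.embed = nn.Linear(entity_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.basic_head = nn.Linear(d_model, n_basic_actions)  # actions from the agent's own token
        self.interact_head = nn.Linear(d_model, 1)             # one action logit per other entity

    def forward(self, entities):             # entities: (batch, n_entities, entity_dim)
        h = self.encoder(self.embed(entities))
        basic = self.basic_head(h[:, 0])                       # token 0: the agent itself
        per_entity = self.interact_head(h[:, 1:]).squeeze(-1)  # variable-size action part
        return torch.cat([basic, per_entity], dim=-1)

policy = EntityTransformerPolicy(entity_dim=12)
logits = policy(torch.randn(4, 1 + 5, 12))   # agent token + 5 observed entities
```

Because the per-entity logits are produced one token at a time, the same weights can in principle be reused when the number of observed entities changes, which is the kind of task transfer the summary alludes to.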
arXiv Detail & Related papers (2021-01-20T07:24:24Z)
- Controllable Pareto Multi-Task Learning [55.945680594691076]
A multi-task learning system aims at solving multiple related tasks at the same time.
With a fixed model capacity, the tasks can conflict with each other, and the system usually has to make a trade-off when learning all of them together.
This work proposes a novel controllable multi-task learning framework to enable the system to make real-time trade-off control among different tasks with a single model.
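One common way to realize real-time trade-off control with a single model is to condition the network on a task-preference vector and weight the task losses accordingly; the sketch below follows that generic idea with invented dimensions and is not the paper's exact method.

```python
import torch
import torch.nn as nn

class PreferenceConditionedMTL(nn.Module):
    """Illustrative: a single network conditioned on a task-preference vector,
    trained with preference-weighted losses (not the paper's exact method)."""

    def __init__(self, in_dim: int, n_tasks: int, hidden: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim + n_tasks, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

    def forward(self, x, pref):              # pref: (batch, n_tasks), rows sum to 1
        h = self.backbone(torch.cat([x, pref], dim=-1))
        return torch.stack([head(h).squeeze(-1) for head in self.heads], dim=-1)

model = PreferenceConditionedMTL(in_dim=8, n_tasks=2)
x, y = torch.randn(16, 8), torch.randn(16, 2)
pref = torch.tensor([[0.7, 0.3]]).expand(16, 2)          # trade-off chosen at run time
pred = model(x, pref)
loss = (pref * (pred - y) ** 2).mean()                   # preference-weighted task losses
loss.backward()
```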
arXiv Detail & Related papers (2020-10-13T11:53:55Z)