Deep Reinforcement Learning for Optimal Power Flow with Renewables Using
Spatial-Temporal Graph Information
- URL: http://arxiv.org/abs/2112.11461v1
- Date: Wed, 22 Dec 2021 03:58:13 GMT
- Title: Deep Reinforcement Learning for Optimal Power Flow with Renewables Using
Spatial-Temporal Graph Information
- Authors: Jinhao Li and Ruichang Zhang and Hao Wang and Zhi Liu and Hongyang Lai
and Yanru Zhang
- Abstract summary: Renewable energy resources (RERs) have been increasingly integrated into modern power systems, especially in large-scale distribution networks (DNs).
We propose a deep reinforcement learning (DRL)-based approach to dynamically search for the optimal operation point in DNs with a high uptake of RERs.
- Score: 11.76597661670075
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Renewable energy resources (RERs) have been increasingly integrated into
modern power systems, especially in large-scale distribution networks (DNs). In
this paper, we propose a deep reinforcement learning (DRL)-based approach to
dynamically search for the optimal operation point, i.e., optimal power flow
(OPF), in DNs with a high uptake of RERs. Considering uncertainties and voltage
fluctuation issues caused by RERs, we formulate OPF into a multi-objective
optimization (MOO) problem. To solve the MOO problem, we develop a novel DRL
algorithm leveraging the graphical information of the distribution network.
Specifically, we employ the state-of-the-art DRL algorithm, i.e., deep
deterministic policy gradient (DDPG), to learn an optimal strategy for OPF.
Since power flow reallocation in the DN is a sequential process in which nodes
are correlated both spatially and temporally, we develop a multi-grained
attention-based spatial-temporal graph convolution network (MG-ASTGCN) that
makes full use of the DN's graphical information by extracting spatial-temporal
graph features, which are then fed to the sequential DDPG. We validate our
proposed DRL-based approach in modified IEEE 33, 69, and
118-bus radial distribution systems (RDSs) and show that our DRL-based approach
outperforms other benchmark algorithms. Our experimental results also reveal
that MG-ASTGCN can significantly accelerate the DDPG training process and
improve DDPG's capability in reallocating power flow for OPF. The proposed
DRL-based approach also promotes DNs' stability in the presence of node faults,
especially for large-scale DNs.
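The pipeline described in the abstract (spatial graph convolution over the distribution network, temporal aggregation, then a DDPG actor producing continuous control actions) can be sketched as follows. This is a minimal numpy illustration, not the authors' MG-ASTGCN implementation: the adjacency matrix, window length, hidden sizes, and the simple softmax temporal pool are illustrative assumptions standing in for the paper's multi-grained attention mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def spatial_gcn(X, A_norm, W):
    """One GCN layer applied at every time step: ReLU(A_norm @ X_t @ W)."""
    return np.maximum(0.0, np.einsum("ij,tjf,fh->tih", A_norm, X, W))

def temporal_attention_pool(H):
    """Softmax attention over the time axis, collapsing (T, N, F) -> (N, F)."""
    scores = H.mean(axis=(1, 2))            # one scalar score per time step
    w = np.exp(scores - scores.max())       # numerically stable softmax
    w = w / w.sum()
    return np.einsum("t,tnf->nf", w, H)

# Toy 4-bus radial feeder observed over T=5 steps, 3 raw features per node
# (e.g. load, RER output, voltage) -- all values here are random placeholders.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.standard_normal((5, 4, 3))          # (time, nodes, features)
W = rng.standard_normal((3, 8))             # spatial feature transform

H = spatial_gcn(X, normalized_adjacency(A), W)   # (5, 4, 8)
state = temporal_attention_pool(H).ravel()       # flat state vector for the actor

# A DDPG actor would map this state to bounded continuous actions
# (e.g. generator set-points) via a tanh output layer.
W_actor = rng.standard_normal((state.size, 2)) * 0.1
action = np.tanh(state @ W_actor)
print(state.shape, action.shape)
```

In the full method, the extracted spatial-temporal features would be trained end-to-end with the DDPG actor-critic rather than pooled with fixed random weights as in this sketch.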
Related papers
- RL-ADN: A High-Performance Deep Reinforcement Learning Environment for Optimal Energy Storage Systems Dispatch in Active Distribution Networks [0.0]
Deep Reinforcement Learning (DRL) presents a promising avenue for optimizing Energy Storage Systems (ESSs) dispatch in distribution networks.
This paper introduces RL-ADN, an innovative open-source library specifically designed for solving the optimal ESSs dispatch in active distribution networks.
arXiv Detail & Related papers (2024-08-07T10:53:07Z)
- Joint Admission Control and Resource Allocation of Virtual Network Embedding via Hierarchical Deep Reinforcement Learning [69.00997996453842]
We propose a deep Reinforcement Learning approach to learn a joint Admission Control and Resource Allocation policy for virtual network embedding.
We show that HRL-ACRA outperforms state-of-the-art baselines in terms of both the acceptance ratio and long-term average revenue.
arXiv Detail & Related papers (2024-06-25T07:42:30Z)
- DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing the system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z)
- Intelligent Hybrid Resource Allocation in MEC-assisted RAN Slicing Network [72.2456220035229]
We aim to maximize the SSR for heterogeneous service demands in the cooperative MEC-assisted RAN slicing system.
We propose a recurrent graph reinforcement learning (RGRL) algorithm to intelligently learn the optimal hybrid RA policy.
arXiv Detail & Related papers (2024-05-02T01:36:13Z)
- Decentralized Federated Reinforcement Learning for User-Centric Dynamic TFDD Control [37.54493447920386]
We propose a learning-based dynamic time-frequency division duplexing (D-TFDD) scheme to meet asymmetric and heterogeneous traffic demands.
We formulate the problem as a decentralized partially observable Markov decision process (Dec-POMDP)
In order to jointly optimize the global resources in a decentralized manner, we propose a federated reinforcement learning (RL) algorithm named Wolpertinger deep deterministic policy gradient (FWDDPG) algorithm.
arXiv Detail & Related papers (2022-11-04T07:39:21Z)
- Federated Deep Reinforcement Learning for the Distributed Control of NextG Wireless Networks [16.12495409295754]
Next Generation (NextG) networks are expected to support demanding internet tactile applications such as augmented reality and connected autonomous vehicles.
Data-driven approaches can improve the ability of the network to adapt to the current operating conditions.
Deep RL (DRL) has been shown to achieve good performance even in complex environments.
arXiv Detail & Related papers (2021-12-07T03:13:20Z)
- Resource Allocation via Model-Free Deep Learning in Free Space Optical Communications [119.81868223344173]
The paper investigates the general problem of resource allocation for mitigating channel fading effects in Free Space Optical (FSO) communications.
Under this framework, we propose two algorithms that solve FSO resource allocation problems.
arXiv Detail & Related papers (2020-07-27T17:38:51Z)
- Distributed Uplink Beamforming in Cell-Free Networks Using Deep Reinforcement Learning [25.579612460904873]
We propose several beamforming techniques for an uplink cell-free network with centralized, semi-distributed, and fully distributed processing.
The proposed distributed beamforming technique performs better than the DDPG algorithm with centralized learning only for small-scale networks.
arXiv Detail & Related papers (2020-06-26T17:54:34Z)
- Resource Allocation via Graph Neural Networks in Free Space Optical Fronthaul Networks [119.81868223344173]
This paper investigates the optimal resource allocation in free space optical (FSO) fronthaul networks.
We consider the graph neural network (GNN) for the policy parameterization to exploit the FSO network structure.
The primal-dual learning algorithm is developed to train the GNN in a model-free manner, where the knowledge of system models is not required.
arXiv Detail & Related papers (2020-06-26T14:20:48Z)
- Optimization-driven Deep Reinforcement Learning for Robust Beamforming in IRS-assisted Wireless Communications [54.610318402371185]
Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver.
We minimize the AP's transmit power by a joint optimization of the AP's active beamforming and the IRS's passive beamforming.
We propose a deep reinforcement learning (DRL) approach that can adapt the beamforming strategies from past experiences.
arXiv Detail & Related papers (2020-05-25T01:42:55Z)
- Stacked Auto Encoder Based Deep Reinforcement Learning for Online Resource Scheduling in Large-Scale MEC Networks [44.40722828581203]
An online resource scheduling framework is proposed for minimizing the sum of weighted task latency for all the Internet of things (IoT) users.
A deep reinforcement learning (DRL) based solution is proposed, which includes the following components.
A preserved and prioritized experience replay (2p-ER) is introduced to assist the DRL to train the policy network and find the optimal offloading policy.
arXiv Detail & Related papers (2020-01-24T23:01:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site makes no guarantee as to the quality of the information presented and is not responsible for any consequences arising from its use.