Graph Neural Network-Accelerated Network-Reconfigured Optimal Power Flow
- URL: http://arxiv.org/abs/2410.17460v1
- Date: Tue, 22 Oct 2024 22:35:09 GMT
- Title: Graph Neural Network-Accelerated Network-Reconfigured Optimal Power Flow
- Authors: Thuan Pham, Xingpeng Li
- Abstract summary: This paper proposes a machine learning (ML)-based approach, particularly utilizing a graph neural network (GNN), to accelerate the solution process.
The GNN model is trained offline to predict the best topology before entering the optimization stage.
A fast online post-ML selection layer is also proposed to analyze GNN predictions and then select a subset of predicted NR solutions with high confidence.
- Score: 0.24554686192257422
- Abstract: Optimal power flow (OPF) has been used for real-time grid operations. Prior efforts demonstrated that utilizing flexibility from dynamic topologies will improve grid efficiency. However, this will convert the linear OPF into a mixed-integer linear programming network-reconfigured OPF (NR-OPF) problem, substantially increasing the computing time. Thus, a machine learning (ML)-based approach, particularly utilizing graph neural network (GNN), is proposed to accelerate the solution process. The GNN model is trained offline to predict the best topology before entering the optimization stage. In addition, this paper proposes an offline pre-ML filter layer to reduce GNN model size and training time while improving its accuracy. A fast online post-ML selection layer is also proposed to analyze GNN predictions and then select a subset of predicted NR solutions with high confidence. Case studies have demonstrated superior performance of the proposed GNN-accelerated NR-OPF method augmented with the proposed pre-ML and post-ML layers.
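The post-ML selection layer described above can be sketched in a few lines: a trained GNN scores candidate reconfiguration topologies, and only high-confidence predictions are passed to the optimization stage, with a fallback to the full candidate set otherwise. This is a minimal illustrative sketch, not the authors' implementation; the function names, the softmax scoring, and the threshold/top-k values are all assumptions.

```python
import math

def softmax(scores):
    """Convert raw GNN logits into a probability distribution."""
    m = max(scores)
    exp = [math.exp(s - m) for s in scores]
    total = sum(exp)
    return [e / total for e in exp]

def select_topologies(logits, threshold=0.2, top_k=3):
    """Post-ML selection sketch: return indices of candidate network
    topologies whose predicted probability exceeds `threshold`,
    capped at `top_k` entries. If no candidate clears the bar,
    return all candidates so the downstream MILP still solves the
    full NR-OPF (a safe fallback, assumed here for illustration)."""
    probs = softmax(logits)
    ranked = sorted(range(len(probs)), key=lambda i: -probs[i])
    chosen = [i for i in ranked if probs[i] >= threshold][:top_k]
    return chosen if chosen else ranked  # fall back to all candidates
```

With one dominant logit, only that topology is forwarded to the solver; with a flat distribution, the top-k candidates (or all of them) are retained, which is what lets the MILP stage stay exact while still shrinking its search space.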
Related papers
- Constraints and Variables Reduction for Optimal Power Flow Using Hierarchical Graph Neural Networks with Virtual Node-Splitting [0.24554686192257422]
Power system networks are often modeled as homogeneous graphs, which limits the ability of graph neural networks (GNNs) to capture individual generator features at the same nodes.
By introducing the proposed virtual node-splitting strategy, generator-level attributes like costs, limits, and ramp rates can be fully captured by GNN models.
Two-stage adaptive hierarchical GNN is developed to (i) predict critical lines that would be congested, and then (ii) predict base generators that would operate at the maximum capacity.
arXiv Detail & Related papers (2024-11-09T19:46:28Z) - N-1 Reduced Optimal Power Flow Using Augmented Hierarchical Graph Neural Network [0.2900810893770134]
Case studies prove the proposed AHGNN and the associated N-1 ROPF achieve a remarkable reduction in computing time while retaining solution quality.
arXiv Detail & Related papers (2024-02-09T07:23:27Z) - Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss with the increase in the number of learning epochs.
We show that the threshold on the number of training samples increases with the increase in the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z) - Properties and Potential Applications of Random Functional-Linked Types of Neural Networks [81.56822938033119]
Random functional-linked neural networks (RFLNNs) offer an alternative way of learning in deep structure.
This paper gives some insights into the properties of RFLNNs from the viewpoints of frequency domain.
We propose a method to generate a BLS network with better performance, and design an efficient algorithm for solving Poisson's equation.
arXiv Detail & Related papers (2023-04-03T13:25:22Z) - Reduced Optimal Power Flow Using Graph Neural Network [0.5076419064097734]
This paper presents a new method to reduce the number of constraints in the original OPF problem using a graph neural network (GNN).
GNN is an innovative machine learning model that utilizes features from nodes, edges, and network topology to maximize its performance.
It is concluded that the application of GNN for ROPF is able to reduce computing time while retaining solution quality.
arXiv Detail & Related papers (2022-06-27T19:14:47Z) - Topology-aware Graph Neural Networks for Learning Feasible and Adaptive ac-OPF Solutions [18.63828570982923]
We develop a new topology-informed graph neural network (GNN) approach for predicting the optimal solutions of ac-OPF problem.
To incorporate grid topology into the NN model, the proposed GNN-for-OPF framework exploits the locality property of locational marginal prices and voltage magnitudes.
The advantages of our proposed designs include reduced model complexity, improved generalizability and feasibility guarantees.
arXiv Detail & Related papers (2022-05-16T23:36:37Z) - Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural Networks [52.32646357164739]
We propose a sensitivity-informed deep neural network (SIDNN) to predict solutions of the AC optimal power flow (AC-OPF) problem.
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated in other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z) - A Meta-Learning Approach to the Optimal Power Flow Problem Under Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z) - Resource Allocation via Graph Neural Networks in Free Space Optical Fronthaul Networks [119.81868223344173]
This paper investigates the optimal resource allocation in free space optical (FSO) fronthaul networks.
We consider the graph neural network (GNN) for the policy parameterization to exploit the FSO network structure.
The primal-dual learning algorithm is developed to train the GNN in a model-free manner, where the knowledge of system models is not required.
arXiv Detail & Related papers (2020-06-26T14:20:48Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based optimization combined with nonconvexity renders learning susceptible to nontrivial problems.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.