Leveraging power grid topology in machine learning assisted optimal
power flow
- URL: http://arxiv.org/abs/2110.00306v1
- Date: Fri, 1 Oct 2021 10:39:53 GMT
- Title: Leveraging power grid topology in machine learning assisted optimal
power flow
- Authors: Thomas Falconer and Letif Mones
- Abstract summary: Machine learning assisted optimal power flow (OPF) aims to reduce the computational complexity of non-linear, non-convex constrained optimisation problems.
We assess the performance of a variety of FCNN, CNN and GNN models for two fundamental approaches to machine learning assisted OPF.
For several synthetic grids with interconnected utilities, we show that locality properties between feature and target variables are scarce.
- Score: 0.5076419064097734
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning assisted optimal power flow (OPF) aims to reduce the
computational complexity of these non-linear and non-convex constrained
optimisation problems by consigning expensive (online) optimisation to offline
training. The majority of work in this area typically employs fully-connected
neural networks (FCNN). However, convolutional (CNN) and graph (GNN) neural
networks have also recently been investigated in an effort to exploit topological
information within the power grid. Although promising results have been
obtained, the literature lacks a systematic comparison between these
architectures. Accordingly, we assess the performance of a variety of
FCNN, CNN and GNN models for two fundamental approaches to machine learning
assisted OPF: regression (predicting optimal generator set-points) and
classification (predicting the active set of constraints). For several
synthetic grids with interconnected utilities, we show that locality properties
between feature and target variables are scarce, and hence we find limited merit in
harnessing topological information in NN models for this set of problems.
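To make the two formulations concrete, the sketch below contrasts a fully-connected regressor with a simple graph-based regressor that predicts generator set-points from nodal features; the classification approach only swaps the output head for binding-constraint indicators. This is a minimal illustration under assumed dimensions and a generic row-normalised message-passing scheme, not the architectures, grids or data used in the paper.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for a small synthetic grid (illustrative only, not the paper's test cases).
N_BUS, N_FEAT, N_GEN, N_CONSTR = 30, 2, 6, 40   # buses, per-bus features, generators, constraints

class FCNNRegressor(nn.Module):
    """Fully-connected baseline: flattened nodal features -> optimal generator set-points."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_BUS * N_FEAT, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_GEN),
        )

    def forward(self, x):                        # x: (batch, N_BUS, N_FEAT)
        return self.net(x.flatten(1))

class SimpleGNNRegressor(nn.Module):
    """Graph baseline: two rounds of neighbourhood averaging driven by the grid's bus adjacency."""
    def __init__(self, adj):
        super().__init__()
        deg = adj.sum(1, keepdim=True).clamp(min=1.0)
        self.register_buffer("a_hat", adj / deg)  # row-normalised adjacency (self-loops added by caller)
        self.lin1 = nn.Linear(N_FEAT, 32)
        self.lin2 = nn.Linear(32, 32)
        self.readout = nn.Linear(N_BUS * 32, N_GEN)

    def forward(self, x):                        # x: (batch, N_BUS, N_FEAT)
        h = torch.relu(self.lin1(self.a_hat @ x))
        h = torch.relu(self.lin2(self.a_hat @ h))
        return self.readout(h.flatten(1))

# Regression: optimal set-points as targets, trained with MSE.
# Classification: change the final layer width to N_CONSTR and train with
# nn.BCEWithLogitsLoss() against binding-constraint indicators instead.
adj = torch.eye(N_BUS)                           # placeholder topology; use the grid's adjacency plus self-loops
x = torch.randn(8, N_BUS, N_FEAT)
y = torch.randn(8, N_GEN)
for model in (FCNNRegressor(), SimpleGNNRegressor(adj)):
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
```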
Related papers
- Unveiling the Power of Sparse Neural Networks for Feature Selection [60.50319755984697]
Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection.
Our findings show that feature selection with SNNs trained with dynamic sparse training (DST) algorithms can achieve, on average, more than 50% memory and 55% FLOPs reduction.
arXiv Detail & Related papers (2024-08-08T16:48:33Z) - Enhancing GNNs Performance on Combinatorial Optimization by Recurrent Feature Update [0.09986418756990156]
We introduce a novel algorithm, denoted hereafter as QRF-GNN, leveraging the power of GNNs to efficiently solve combinatorial optimization (CO) problems.
It relies on unsupervised learning, minimizing a loss function derived from a QUBO relaxation (see the generic sketch after this list).
Results of experiments show that QRF-GNN drastically surpasses existing learning-based approaches and is comparable to state-of-the-art conventional solvers.
arXiv Detail & Related papers (2024-07-23T13:34:35Z) - Learning k-Level Structured Sparse Neural Networks Using Group Envelope Regularization [4.0554893636822]
We introduce a novel approach to deploying large-scale Deep Neural Networks on constrained resources.
The method speeds up inference and aims to reduce memory demand and power consumption.
arXiv Detail & Related papers (2022-12-25T15:40:05Z) - Unsupervised Optimal Power Flow Using Graph Neural Networks [172.33624307594158]
We use a graph neural network to learn a nonlinear parametrization between the power demanded and the corresponding allocation.
We show through simulations that the use of GNNs in this unsupervised learning context leads to solutions comparable to standard solvers.
arXiv Detail & Related papers (2022-10-17T17:30:09Z) - Topology-aware Graph Neural Networks for Learning Feasible and Adaptive
ac-OPF Solutions [18.63828570982923]
We develop a new topology-informed graph neural network (GNN) approach for predicting the optimal solutions of the ac-OPF problem.
To incorporate grid topology into the NN model, the proposed GNN-for-OPF framework exploits the locality property of locational marginal prices and voltage magnitudes.
The advantages of our proposed designs include reduced model complexity, improved generalizability and feasibility guarantees.
arXiv Detail & Related papers (2022-05-16T23:36:37Z) - Power Flow Balancing with Decentralized Graph Neural Networks [4.812718493682454]
We propose an end-to-end framework based on a Graph Neural Network (GNN) to balance the power flows in a generic grid.
The proposed framework is efficient and, compared to other solvers based on deep learning, is robust to perturbations not only to the physical quantities on the grid components, but also to the topology.
arXiv Detail & Related papers (2021-11-03T12:14:56Z) - Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural
Networks [52.32646357164739]
We propose a deep neural network (DNN) to predict solutions of the AC optimal power flow (AC-OPF) problem.
The proposed sensitivity-informed DNN (SIDNN) is compatible with a broad range of OPF schemes.
It can be seamlessly integrated in other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z) - A Meta-Learning Approach to the Optimal Power Flow Problem Under
Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z) - Deep learning architectures for inference of AC-OPF solutions [0.4061135251278187]
We present a systematic comparison between neural network (NN) architectures for inference of AC-OPF solutions.
We demonstrate the efficacy of leveraging network topology in the models by constructing abstract representations of electrical grids in the graph domain.
arXiv Detail & Related papers (2020-11-06T13:33:18Z) - Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
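For the Binarized Graph Neural Network entry above, the central mechanism is generic enough to sketch: binarise embeddings with a sign function in the forward pass while letting gradients pass straight through in the backward pass. The layer below is a hedged illustration of that straight-through estimator, not the authors' BGN implementation.

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass; straight-through (identity) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

class BinaryGraphLayer(nn.Module):
    """One message-passing layer whose node embeddings are binarised to +/-1 (illustrative only)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):                 # a_hat: normalised adjacency, h: node features
        return BinarizeSTE.apply(self.lin(a_hat @ h))

# Tiny usage example on a random 5-node graph.
a_hat = torch.rand(5, 5)
h = torch.randn(5, 8)
layer = BinaryGraphLayer(8, 4)
out = layer(a_hat, h)                            # entries are +/-1 (0 only where the pre-activation is exactly 0)
out.sum().backward()                             # gradients still reach layer.lin thanks to the STE
```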
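The QRF-GNN entry above trains without labels by minimising a loss derived from a QUBO relaxation. As a point of reference (not the QRF-GNN architecture itself), the sketch below relaxes a Max-Cut QUBO: a tiny GNN outputs node probabilities p in (0,1), the relaxed objective p^T Q p is minimised by gradient descent, and the result is rounded to a binary assignment. The graph, the Q matrix and the two-layer network are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy Max-Cut instance written as a QUBO (assumed example): minimise x^T Q x over x in {0,1}^n.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
Q = torch.zeros(n, n)
for i, j in edges:
    Q[i, i] -= 1.0; Q[j, j] -= 1.0               # diagonal: -degree(i) * x_i
    Q[i, j] += 1.0; Q[j, i] += 1.0               # off-diagonal: +2 * x_i * x_j per edge
adj = torch.zeros(n, n)
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0
a_hat = (adj + torch.eye(n)) / (adj.sum(1, keepdim=True) + 1.0)  # normalised adjacency with self-loops

class TinyGNN(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.emb = nn.Parameter(0.1 * torch.randn(n, dim))       # learnable node features
        self.lin1, self.lin2 = nn.Linear(dim, dim), nn.Linear(dim, 1)

    def forward(self):
        h = torch.relu(self.lin1(a_hat @ self.emb))
        return torch.sigmoid(self.lin2(a_hat @ h)).squeeze(-1)   # relaxed assignments p in (0,1)^n

model = TinyGNN()
opt = torch.optim.Adam(model.parameters(), lr=0.05)
for _ in range(300):                              # unsupervised: minimise the relaxed QUBO objective
    p = model()
    loss = p @ Q @ p
    opt.zero_grad(); loss.backward(); opt.step()

x = (model() > 0.5).int()                         # round the relaxation to a binary cut assignment
print("cut assignment:", x.tolist())
```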