Deep learning architectures for inference of AC-OPF solutions
- URL: http://arxiv.org/abs/2011.03352v2
- Date: Tue, 1 Dec 2020 10:03:18 GMT
- Title: Deep learning architectures for inference of AC-OPF solutions
- Authors: Thomas Falconer and Letif Mones
- Abstract summary: We present a systematic comparison between neural network (NN) architectures for inference of AC-OPF solutions.
We demonstrate the efficacy of leveraging network topology in the models by constructing abstract representations of electrical grids in the graph domain.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a systematic comparison between neural network (NN) architectures
for inference of AC-OPF solutions. Using fully connected NNs as a baseline we
demonstrate the efficacy of leveraging network topology in the models by
constructing abstract representations of electrical grids in the graph domain,
for both convolutional and graph NNs. The performance of the NN architectures
is compared for regression (predicting optimal generator set-points) and
classification (predicting the active set of constraints) settings.
Computational gains for obtaining optimal solutions are also presented.
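The regression setting above maps grid operating conditions to optimal generator set-points. A minimal sketch of the fully connected baseline, in pure Python with hypothetical dimensions (6 bus loads in, 3 generator set-points out) and untrained random weights, only to illustrate the input/output structure:

```python
import random

random.seed(0)

def init_layer(n_in, n_out):
    # Small random weights; biases start at zero.
    return ([[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

def forward(x, layers):
    # Hidden layers use ReLU; the final layer is linear (regression head).
    for i, (W, b) in enumerate(layers):
        x = [sum(w * v for w, v in zip(row, x)) + bi for row, bi in zip(W, b)]
        if i < len(layers) - 1:
            x = [max(0.0, v) for v in x]
    return x

# Hypothetical sizes, not taken from the paper.
n_loads, n_hidden, n_gens = 6, 16, 3
layers = [init_layer(n_loads, n_hidden), init_layer(n_hidden, n_gens)]
loads = [random.uniform(0.5, 1.5) for _ in range(n_loads)]
setpoints = forward(loads, layers)  # one predicted set-point per generator
```

The classification variant would instead end in a sigmoid/softmax head predicting which constraints are active at the optimum.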
Related papers
- Distance Recomputator and Topology Reconstructor for Graph Neural Networks [22.210886585639063]
We introduce Distance Recomputator and Topology Reconstructor methodologies, aimed at enhancing Graph Neural Networks (GNNs).
The Distance Recomputator dynamically recalibrates node distances using a dynamic encoding scheme, thereby improving the accuracy and adaptability of node representations.
The Topology Reconstructor adjusts local graph structures based on computed "similarity distances," optimizing network configurations for improved learning outcomes.
arXiv Detail & Related papers (2024-06-25T05:12:51Z) - Relation Embedding based Graph Neural Networks for Handling
Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework to make the homogeneous GNNs have adequate ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
arXiv Detail & Related papers (2022-09-23T05:24:18Z) - Topology-aware Graph Neural Networks for Learning Feasible and Adaptive
ac-OPF Solutions [18.63828570982923]
We develop a new topology-informed graph neural network (GNN) approach for predicting the optimal solutions of ac-OPF problem.
To incorporate grid topology to the NN model, the proposed GNN-for-OPF framework exploits the locality property of locational marginal prices and voltage magnitude.
The advantages of our proposed designs include reduced model complexity, improved generalizability and feasibility guarantees.
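The locality property exploited here means a bus's quantities depend mostly on its electrical neighbourhood. A toy single graph-convolution layer over a hypothetical 5-bus grid (mean neighbour aggregation; weights and topology are illustrative, not the paper's design):

```python
# Buses as nodes, branches as edges; each node carries one scalar feature (e.g. load).
edges = [(0, 1), (1, 2), (1, 3), (3, 4)]   # hypothetical 5-bus grid
x = [1.0, 0.5, 0.8, 1.2, 0.3]              # per-bus input features

def gcn_layer(x, edges, w_self=0.5, w_neigh=0.5):
    # Mean-aggregate neighbour features, then mix with the node's own feature.
    n = len(x)
    neigh = [[] for _ in range(n)]
    for u, v in edges:
        neigh[u].append(x[v])
        neigh[v].append(x[u])
    return [w_self * x[i]
            + w_neigh * (sum(neigh[i]) / len(neigh[i]) if neigh[i] else 0.0)
            for i in range(n)]

h = gcn_layer(x, edges)  # updated per-bus representations
```

Stacking such layers lets information propagate further along the grid while keeping the parameter count independent of grid size.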
arXiv Detail & Related papers (2022-05-16T23:36:37Z) - Neural Structured Prediction for Inductive Node Classification [29.908759584092167]
This paper studies node classification in the inductive setting, aiming to learn a model on labeled training graphs and generalize it to infer node labels on unlabeled test graphs.
We present a new approach called the Structured Proxy Network (SPN), which combines the advantages of both worlds.
arXiv Detail & Related papers (2022-04-15T15:50:27Z) - Universal approximation property of invertible neural networks [76.95927093274392]
Invertible neural networks (INNs) are neural network architectures with invertibility by design.
Thanks to their invertibility and the tractability of Jacobian, INNs have various machine learning applications such as probabilistic modeling, generative modeling, and representation learning.
arXiv Detail & Related papers (2022-04-15T10:45:26Z) - Graph-based Algorithm Unfolding for Energy-aware Power Allocation in
Wireless Networks [27.600081147252155]
We develop a novel graph-based trainable framework to maximize energy efficiency in wireless communication networks.
We show that the proposed models are permutation equivariant, a desirable property for models of wireless network data.
Results demonstrate its generalizability across different network topologies.
arXiv Detail & Related papers (2022-01-27T20:23:24Z) - Leveraging power grid topology in machine learning assisted optimal
power flow [0.5076419064097734]
Machine learning assisted optimal power flow (OPF) aims to reduce the computational complexity of these non-linear and non-convex constrained optimization problems.
We assess the performance of a variety of FCNN, CNN and GNN models for two fundamental approaches to machine learning assisted OPF.
For several synthetic grids with interconnected utilities, we show that locality properties between feature and target variables are scarce.
arXiv Detail & Related papers (2021-10-01T10:39:53Z) - Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural
Networks [52.32646357164739]
We propose a sensitivity-informed deep neural network (SIDNN) to solve the AC optimal power flow (AC-OPF) problem.
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated in other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z) - A Deep-Unfolded Reference-Based RPCA Network For Video
Foreground-Background Separation [86.35434065681925]
This paper proposes a new deep-unfolding-based network design for the problem of Robust Principal Component Analysis (RPCA).
Unlike existing designs, our approach focuses on modeling the temporal correlation between the sparse representations of consecutive video frames.
Experimentation using the moving MNIST dataset shows that the proposed network outperforms a recently proposed state-of-the-art RPCA network in the task of video foreground-background separation.
arXiv Detail & Related papers (2020-10-02T11:40:09Z) - Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
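The min-max formulation above can be illustrated with a toy saddle-point problem (a sketch of simultaneous gradient descent-ascent on a hypothetical objective, not the paper's SEM operator equation):

```python
# Toy objective f(x, y) = x^2 - y^2 + x*y:
# the first player minimises over x, the second maximises over y.
def grad_x(x, y):
    return 2 * x + y

def grad_y(x, y):
    return -2 * y + x

x, y, lr = 1.0, 1.0, 0.1
for _ in range(200):
    # Simultaneous gradient descent (on x) and ascent (on y).
    x, y = x - lr * grad_x(x, y), y + lr * grad_y(x, y)

# The unique saddle point of this objective is (0, 0),
# and the iterates spiral into it.
```

In the paper's setting the two "players" are neural networks rather than scalars, but the alternating descent-ascent dynamics are analogous.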
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.