Multi-fidelity power flow solver
- URL: http://arxiv.org/abs/2205.13362v1
- Date: Thu, 26 May 2022 13:43:26 GMT
- Title: Multi-fidelity power flow solver
- Authors: Sam Yang, Bjorn Vaagensmith, Deepika Patra, Ryan Hruska, Tyler Phillips
- Abstract summary: The proposed model comprises two networks -- the first one trained on DC approximation as low-fidelity data and the second one trained on both low- and high-fidelity power flow data.
We tested the model on 14- and 118-bus test cases and evaluated its performance based on the $n-k$ power flow prediction accuracy with respect to imbalanced contingency data and high-to-low-fidelity sample ratio.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a multi-fidelity neural network (MFNN) tailored for rapid
high-dimensional grid power flow simulations and contingency analysis with
scarce high-fidelity contingency data. The proposed model comprises two
networks -- the first one trained on DC approximation as low-fidelity data and
coupled to a high-fidelity neural net trained on both low- and high-fidelity
power flow data. Each network features a latent module which parametrizes the
model by a discrete grid topology vector for generalization (e.g., $n$ power
lines with $k$ disconnections or contingencies, if any), and the targeted
high-fidelity output is a weighted sum of linear and nonlinear functions. We
tested the model on 14- and 118-bus test cases and evaluated its performance
based on the $n-k$ power flow prediction accuracy with respect to imbalanced
contingency data and high-to-low-fidelity sample ratio. The results presented
herein demonstrate MFNN's potential and its limits with up to two orders of
magnitude faster and more accurate power flow solutions than DC approximation.
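The two-network structure described in the abstract can be sketched in a few lines: a low-fidelity stage (here, an actual DC power flow solve on a toy 3-bus system standing in for the low-fidelity network) feeds a high-fidelity head whose output is a weighted sum of a linear and a nonlinear function of the inputs. The bus data, line susceptances, layer sizes, and mixing weight below are illustrative assumptions, not values from the paper, and the high-fidelity weights are random and untrained; only the shapes and the linear-plus-nonlinear composition mirror the described architecture.

```python
import numpy as np

# --- Low-fidelity model: DC power flow on a toy 3-bus system ---------------
# DC approximation: P = B' * theta (lossless lines, flat voltage magnitudes,
# small angle differences). Bus 0 is the slack bus; its angle is fixed at 0
# and its row/column are dropped from the susceptance matrix.
lines = {(0, 1): 10.0, (0, 2): 10.0, (1, 2): 10.0}  # susceptances (p.u.)

def dc_power_flow(p_inj, lines, n_bus=3, slack=0):
    """Solve the reduced system B' theta = P for the non-slack bus angles."""
    B = np.zeros((n_bus, n_bus))
    for (i, j), bij in lines.items():
        B[i, i] += bij
        B[j, j] += bij
        B[i, j] -= bij
        B[j, i] -= bij
    keep = [k for k in range(n_bus) if k != slack]
    theta = np.zeros(n_bus)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)], p_inj[keep])
    return theta

p = np.array([0.0, 0.5, -1.0])      # slack bus, 0.5 p.u. gen, 1.0 p.u. load
theta = dc_power_flow(p, lines)     # -> [0.0, 0.0, -0.05]

# --- High-fidelity head: weighted sum of linear and nonlinear terms --------
# Following the abstract's description, the high-fidelity output has the form
#   y_H = alpha * F_lin([x, y_L]) + (1 - alpha) * F_nl([x, y_L]),
# where y_L is the low-fidelity (DC) prediction and x includes a discrete
# topology vector (1 = line in service, 0 = contingency). Weights here are
# random placeholders; a trained model would learn them and alpha from data.
rng = np.random.default_rng(0)
topology = np.ones(len(lines))
x = np.concatenate([topology, theta])        # high-fidelity network input

W_lin = rng.normal(size=(3, x.size)) * 0.1   # linear branch
W1 = rng.normal(size=(8, x.size)) * 0.1      # nonlinear branch, hidden layer
W2 = rng.normal(size=(3, 8)) * 0.1           # nonlinear branch, output layer
alpha = 0.5                                  # mixing weight (learnable)

y_hi = alpha * (W_lin @ x) + (1 - alpha) * (W2 @ np.tanh(W1 @ x))
```

With all three lines in service the slack bus balances the system, and the DC stage already recovers the expected angles; a contingency would be expressed by zeroing an entry of `topology` and removing the corresponding line from the solve.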
Related papers
- Multi-Fidelity Bayesian Neural Network for Uncertainty Quantification in Transonic Aerodynamic Loads [0.0]
This paper implements a multi-fidelity Bayesian neural network model that applies transfer learning to fuse data generated by models at different fidelities.
The results demonstrate that the multi-fidelity Bayesian model outperforms the state-of-the-art Co-Kriging in terms of overall accuracy and robustness on unseen data.
arXiv Detail & Related papers (2024-07-08T07:34:35Z)
- Residual Multi-Fidelity Neural Network Computing [0.0]
We present a residual multi-fidelity computational framework that formulates the correlation between models as a residual function.
We show that dramatic savings in computational cost may be achieved when the output predictions are desired to be accurate within small tolerances.
arXiv Detail & Related papers (2023-10-05T14:43:16Z)
- Graph Neural Network-based Power Flow Model [0.42970700836450487]
A graph neural network (GNN) model is trained using historical power system data to predict power flow outcomes.
A comprehensive performance analysis is conducted, comparing the proposed GNN-based power flow model with the traditional DC power flow model.
arXiv Detail & Related papers (2023-07-05T06:09:25Z)
- Predicting Dynamic Stability from Static Features in Power Grid Models using Machine Learning [0.0]
We propose a combination of network science metrics and machine learning models to predict the risk of desynchronisation events.
We train and test such models on simulated data from several synthetic test grids.
We find that the integrated models are capable of predicting desynchronisation events with an average precision greater than $0.996$ when averaging over all data sets.
arXiv Detail & Related papers (2022-10-17T17:16:48Z)
- Green, Quantized Federated Learning over Wireless Networks: An Energy-Efficient Design [68.86220939532373]
The finite precision level is captured through the use of quantized neural networks (QNNs) that quantize weights and activations in fixed-precision format.
The proposed FL framework can reduce energy consumption until convergence by up to 70% compared to a baseline FL algorithm.
arXiv Detail & Related papers (2022-07-19T16:37:24Z)
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
- High-Fidelity Machine Learning Approximations of Large-Scale Optimal Power Flow [49.2540510330407]
AC-OPF is a key building block in many power system applications.
Motivated by increased penetration of renewable sources, this paper explores deep learning to deliver efficient approximations to the AC-OPF.
arXiv Detail & Related papers (2020-06-29T20:22:16Z)
- Multi-fidelity Generative Deep Learning Turbulent Flows [0.0]
In computational fluid dynamics, there is an inevitable trade off between accuracy and computational cost.
In this work, a novel multi-fidelity deep generative model is introduced for the surrogate modeling of high-fidelity turbulent flow fields.
The resulting surrogate is able to generate physically accurate turbulent realizations at a computational cost magnitudes lower than that of a high-fidelity simulation.
arXiv Detail & Related papers (2020-06-08T16:37:48Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds than naive parallel approaches while retaining its theoretical guarantees.
Our experiments on several datasets confirm the theory and demonstrate the method's effectiveness.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- Toward fast and accurate human pose estimation via soft-gated skip connections [97.06882200076096]
This paper is on highly accurate and highly efficient human pose estimation.
We re-analyze this design choice in the context of improving both the accuracy and the efficiency over the state-of-the-art.
Our model achieves state-of-the-art results on the MPII and LSP datasets.
arXiv Detail & Related papers (2020-02-25T18:51:51Z)
- Kernel and Rich Regimes in Overparametrized Models [69.40899443842443]
We show that gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms.
We also demonstrate this transition empirically for more complex matrix factorization models and multilayer non-linear networks.
arXiv Detail & Related papers (2020-02-20T15:43:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.