Multi-fidelity power flow solver
- URL: http://arxiv.org/abs/2205.13362v1
- Date: Thu, 26 May 2022 13:43:26 GMT
- Title: Multi-fidelity power flow solver
- Authors: Sam Yang, Bjorn Vaagensmith, Deepika Patra, Ryan Hruska, Tyler Phillips
- Abstract summary: The proposed model comprises two networks: the first is trained on the DC approximation as low-fidelity data, and the second is trained on both low- and high-fidelity power flow data.
We tested the model on 14- and 118-bus test cases and evaluated its performance based on the $n-k$ power flow prediction accuracy with respect to imbalanced contingency data and the high-to-low-fidelity sample ratio.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a multi-fidelity neural network (MFNN) tailored for rapid
high-dimensional grid power flow simulations and contingency analysis with
scarce high-fidelity contingency data. The proposed model comprises two coupled
networks: the first is trained on the DC approximation as low-fidelity data,
and the second, a high-fidelity network, is trained on both low- and
high-fidelity power flow data. Each network features a latent module which parametrizes the
model by a discrete grid topology vector for generalization (e.g., $n$ power
lines with $k$ disconnections or contingencies, if any), and the targeted
high-fidelity output is a weighted sum of linear and nonlinear functions. We
tested the model on 14- and 118-bus test cases and evaluated its performance
based on the $n-k$ power flow prediction accuracy with respect to imbalanced
contingency data and the high-to-low-fidelity sample ratio. The results
presented herein demonstrate the MFNN's potential and its limits, yielding
power flow solutions up to two orders of magnitude faster and more accurate
than the DC approximation.
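The composition described above lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch rendering of the two-network idea: a low-fidelity net conditioned on a discrete line-status (topology) vector, followed by a high-fidelity correction whose output is a weighted sum of linear and nonlinear functions of the low-fidelity prediction. All names, layer sizes, and the single scalar mixing weight are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MFNN(nn.Module):
    """Illustrative two-network multi-fidelity model (not the authors' code)."""

    def __init__(self, n_inputs: int, n_lines: int, hidden: int = 64):
        super().__init__()
        # Low-fidelity network, meant to be trained on cheap DC-approximation
        # data. The discrete topology vector (one entry per line, 1 = in
        # service, 0 = disconnected) parametrizes the n-k contingency state.
        self.lf_net = nn.Sequential(
            nn.Linear(n_inputs + n_lines, hidden), nn.Tanh(),
            nn.Linear(hidden, n_lines),
        )
        # High-fidelity output = weighted sum of a linear and a nonlinear
        # function of the inputs and the low-fidelity prediction.
        self.hf_linear = nn.Linear(n_inputs + 2 * n_lines, n_lines)
        self.hf_nonlinear = nn.Sequential(
            nn.Linear(n_inputs + 2 * n_lines, hidden), nn.Tanh(),
            nn.Linear(hidden, n_lines),
        )
        self.alpha = nn.Parameter(torch.tensor(0.5))  # learnable mixing weight

    def forward(self, x: torch.Tensor, topology: torch.Tensor) -> torch.Tensor:
        xt = torch.cat([x, topology], dim=-1)
        y_lf = self.lf_net(xt)                # low-fidelity (DC-like) flows
        z = torch.cat([xt, y_lf], dim=-1)
        return self.alpha * self.hf_linear(z) + (1 - self.alpha) * self.hf_nonlinear(z)
```

A natural training schedule, consistent with the abstract's data regime, would fit lf_net on abundant DC samples first and then train the correction terms on the scarce high-fidelity (AC) data.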
Related papers
- Adaptive Informed Deep Neural Networks for Power Flow Analysis [0.0]
This study introduces PINN4PF, an end-to-end deep learning architecture for power flow (PF) analysis.
Results demonstrate that PINN4PF outperforms both baselines across all test systems.
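The summary does not spell out PINN4PF's loss, but physics-informed training for power flow commonly augments the data-fitting loss with the AC power-balance residuals. A generic sketch of such a residual term, with assumed variable names and not PINN4PF's exact formulation:

```python
import numpy as np

def power_balance_residual(V, theta, P_inj, Q_inj, G, B):
    """AC power-balance residuals to be driven toward zero by a physics loss.

    V, theta: bus voltage magnitudes and angles; P_inj, Q_inj: specified
    injections; G, B: real and imaginary parts of the bus admittance matrix.
    Generic physics term -- not necessarily PINN4PF's exact loss."""
    dtheta = theta[:, None] - theta[None, :]              # theta_i - theta_j
    P_calc = V * ((G * np.cos(dtheta) + B * np.sin(dtheta)) @ V)
    Q_calc = V * ((G * np.sin(dtheta) - B * np.cos(dtheta)) @ V)
    return P_calc - P_inj, Q_calc - Q_inj
```

Squared norms of these residuals would be added to the supervised loss so predictions respect the power flow equations.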
arXiv Detail & Related papers (2024-12-03T18:33:48Z)
- Recurrent Stochastic Configuration Networks with Hybrid Regularization for Nonlinear Dynamics Modelling [3.8719670789415925]
Recurrent stochastic configuration networks (RSCNs) have shown great potential in modelling nonlinear dynamic systems with uncertainties.
This paper presents an RSCN with hybrid regularization to enhance both the learning capacity and generalization performance of the network.
arXiv Detail & Related papers (2024-11-26T03:06:39Z)
- Multi-Fidelity Bayesian Neural Network for Uncertainty Quantification in Transonic Aerodynamic Loads [0.0]
This paper implements a multi-fidelity Bayesian neural network model that applies transfer learning to fuse data generated by models at different fidelities.
The results demonstrate that the multi-fidelity Bayesian model outperforms the state-of-the-art Co-Kriging in terms of overall accuracy and robustness on unseen data.
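As a hedged illustration of the transfer-learning step (the Bayesian treatment is omitted): pretrain a surrogate on plentiful low-fidelity samples, then freeze the shared layers and fine-tune the head on scarce high-fidelity data. The layer sizes and two-stage split below are assumptions:

```python
import torch.nn as nn

# Hypothetical surrogate: shared feature extractor plus an output head.
surrogate = nn.Sequential(
    nn.Linear(8, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),        # e.g., a predicted aerodynamic load coefficient
)

# Stage 1: train all parameters on low-fidelity data (training loop omitted).
# Stage 2: freeze the extractor; fine-tune only the head on high-fidelity data.
for layer in list(surrogate)[:-1]:
    for p in layer.parameters():
        p.requires_grad_(False)
trainable = [p for p in surrogate.parameters() if p.requires_grad]
```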
arXiv Detail & Related papers (2024-07-08T07:34:35Z)
- Residual Multi-Fidelity Neural Network Computing [0.0]
We consider the general problem of constructing a neural network surrogate model using multi-fidelity information.
Motivated by error-complexity estimates for ReLU neural networks, we formulate the correlation between an inexpensive low-fidelity model and an expensive high-fidelity model.
We present four numerical examples to demonstrate the power of the proposed framework.
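One standard instantiation of this residual idea, sketched as an assumption rather than the paper's exact formulation: learn the discrepancy between the cheap and expensive models, then add it back to the cheap prediction at inference.

```python
import numpy as np

def residual_targets(x_hf, lofi_model, hifi_values):
    """Training targets for the correction network: the LF-to-HF residual.
    Illustrative names, not the paper's API."""
    return hifi_values - lofi_model(x_hf)

def mf_predict(x, lofi_model, residual_net):
    # High-fidelity estimate = low-fidelity model + learned residual.
    return lofi_model(x) + residual_net(x)
```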
arXiv Detail & Related papers (2023-10-05T14:43:16Z)
- Graph Neural Network-based Power Flow Model [0.42970700836450487]
A graph neural network (GNN) model is trained using historical power system data to predict power flow outcomes.
A comprehensive performance analysis is conducted, comparing the proposed GNN-based power flow model with the traditional DC power flow model.
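For reference, the DC baseline used in this comparison (and in the main abstract above) reduces to a single linear solve: with flat voltage magnitudes and small angle differences, the bus angles satisfy B'θ = P. A minimal sketch:

```python
import numpy as np

def dc_power_flow(B_prime, P, slack=0):
    """Solve the DC approximation B' @ theta = P for bus voltage angles.

    B_prime: (n x n) bus susceptance matrix; P: net active-power injections
    in per unit; the slack bus angle is fixed at zero."""
    keep = [i for i in range(len(P)) if i != slack]
    theta = np.zeros(len(P))
    theta[keep] = np.linalg.solve(B_prime[np.ix_(keep, keep)], P[keep])
    return theta  # flow on line i->j is then (theta[i] - theta[j]) / x_ij
```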
arXiv Detail & Related papers (2023-07-05T06:09:25Z)
- Green, Quantized Federated Learning over Wireless Networks: An Energy-Efficient Design [68.86220939532373]
The finite precision level is captured through the use of quantized neural networks (QNNs) that quantize weights and activations in fixed-precision format.
The proposed FL framework can reduce energy consumption until convergence by up to 70% compared to a baseline FL algorithm.
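The fixed-precision format can be illustrated with a standard uniform quantizer; the bit width and scaling convention below are assumptions, not necessarily the paper's exact scheme.

```python
import torch

def quantize_fixed_point(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Uniform quantization of weights or activations to 2**bits levels
    over the tensor's observed range (generic sketch)."""
    lo, hi = w.min(), w.max()
    scale = ((hi - lo) / (2 ** bits - 1)).clamp(min=1e-12)
    return torch.round((w - lo) / scale) * scale + lo
```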
arXiv Detail & Related papers (2022-07-19T16:37:24Z)
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
- High-Fidelity Machine Learning Approximations of Large-Scale Optimal Power Flow [49.2540510330407]
AC-OPF is a key building block in many power system applications.
Motivated by increased penetration of renewable sources, this paper explores deep learning to deliver efficient approximations to the AC-OPF.
arXiv Detail & Related papers (2020-06-29T20:22:16Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds than existing approaches, with matching guarantees in theory.
Our experiments on several datasets demonstrate the effectiveness of our method and confirm the theory.
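AUC maximization is usually trained through a pairwise surrogate that penalizes misranked positive-negative score pairs. A minimal, non-distributed sketch of such an objective, as an assumption about the setup rather than the paper's algorithm:

```python
import torch

def pairwise_auc_loss(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Squared-hinge surrogate for 1 - AUC over all positive/negative pairs.
    The paper's distributed machinery is omitted."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    margins = 1.0 - (pos[:, None] - neg[None, :])  # want pos > neg by a margin
    return torch.clamp(margins, min=0).pow(2).mean()
```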
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- Toward fast and accurate human pose estimation via soft-gated skip connections [97.06882200076096]
This paper is on highly accurate and highly efficient human pose estimation.
We re-analyze the design of skip connections in the context of improving both accuracy and efficiency over the state-of-the-art.
Our model achieves state-of-the-art results on the MPII and LSP datasets.
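In its simplest form, a soft-gated skip connection scales the residual branch by a learnable gate before the identity addition; the paper's exact gating may differ, so treat this as a generic sketch.

```python
import torch
import torch.nn as nn

class SoftGatedSkip(nn.Module):
    """Residual block whose branch is modulated by a learnable gate."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.alpha = nn.Parameter(torch.ones(1))   # soft gate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.alpha * self.body(x)       # gated residual addition
```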
arXiv Detail & Related papers (2020-02-25T18:51:51Z)
- Kernel and Rich Regimes in Overparametrized Models [69.40899443842443]
We show that gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms.
We also demonstrate this transition empirically for more complex matrix factorization models and multilayer non-linear networks.
arXiv Detail & Related papers (2020-02-20T15:43:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.