Neural Network-based Power Flow Model
- URL: http://arxiv.org/abs/2112.08418v1
- Date: Wed, 15 Dec 2021 19:05:53 GMT
- Title: Neural Network-based Power Flow Model
- Authors: Thuan Pham, Xingpeng Li
- Abstract summary: A neural network (NN) model is trained to predict power flow results using historical power system data.
It can be concluded that the proposed NN-based power flow model can find solutions quickly and more accurately than the DC power flow model.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Power flow analysis is used to evaluate the flow of electricity in the power
system network. Power flow calculation is used to determine the steady-state
variables of the system, such as the voltage magnitude /phase angle of each bus
and the active/reactive power flow on each branch. The DC power flow model is a
popular linear power flow model that is widely used in the power industry.
Although it is fast and robust, it may lead to inaccurate line flow results for
some critical transmission lines. This drawback can be partially addressed by
data-driven methods that take advantage of historical grid profiles. In this
paper, a neural network (NN) model is trained to predict power flow results
using historical power system data. Although the training process may take
time, once trained, it is very fast to estimate line flows. A comprehensive
performance analysis between the proposed NN-based power flow model and the
traditional DC power flow model is conducted. It can be concluded that the
proposed NN-based power flow model can find solutions quickly and more
accurately than DC power flow model.
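The workflow the abstract describes can be sketched in a few lines: generate (or collect) historical operating points, train a small neural network to map bus injections to line flows, and compare it against a linear DC-style baseline. The following is a minimal illustration, not the authors' implementation: the 5-bus system, the PTDF-like sensitivity matrix, the tanh "AC-like" flow function, and the network sizes are all invented for demonstration.

```python
# Sketch of the paper's idea: train an NN on historical
# (injection -> line flow) samples and compare it with a DC-style
# linear baseline. All data here is synthetic and hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_bus, n_line = 5, 6

# Hypothetical PTDF-like sensitivity matrix: in a DC model,
# line flows are approximately PTDF @ injections.
PTDF = rng.normal(size=(n_line, n_bus))

def ac_like_flows(p):
    # Stand-in for "true" AC flows: the DC linear term plus a
    # mild nonlinearity that the linear model cannot capture.
    lin = p @ PTDF.T
    return lin + 0.5 * np.tanh(lin)

# "Historical" operating points (bus injections) and their flows.
X = rng.normal(size=(2000, n_bus))
Y = ac_like_flows(X)

# One-hidden-layer MLP trained by full-batch gradient descent on MSE.
h = 32
W1 = rng.normal(size=(n_bus, h)) * 0.3
b1 = np.zeros(h)
W2 = rng.normal(size=(h, n_line)) * 0.3
b2 = np.zeros(n_line)
lr = 0.05

init_mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2)

for step in range(2000):
    Z = np.tanh(X @ W1 + b1)      # hidden activations
    pred = Z @ W2 + b2            # predicted line flows
    err = pred - Y
    # Backpropagation through the two layers.
    gW2 = Z.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dZ = (err @ W2.T) * (1 - Z ** 2)
    gW1 = X.T @ dZ / len(X)
    gb1 = dZ.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Evaluate both models on fresh operating points.
Xt = rng.normal(size=(500, n_bus))
Yt = ac_like_flows(Xt)
nn_mse = np.mean((np.tanh(Xt @ W1 + b1) @ W2 + b2 - Yt) ** 2)
dc_mse = np.mean((Xt @ PTDF.T - Yt) ** 2)
print(f"NN surrogate MSE: {nn_mse:.4f}")
print(f"DC baseline MSE:  {dc_mse:.4f}")
```

This mirrors the abstract's trade-off: training takes time up front, but once the weights are fixed, estimating flows for a new operating point costs only two matrix multiplications, while the NN can absorb nonlinearity that the fixed linear DC map misses.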
Related papers
- PowerFlowNet: Power Flow Approximation Using Message Passing Graph Neural Networks [2.450802099490248]
Graph Neural Networks (GNNs) have emerged as a promising approach for improving the accuracy and speed of power flow approximations.
In this study, we introduce PowerFlowNet, a novel GNN architecture for PF approximation that achieves performance comparable to the traditional Newton-Raphson method.
It significantly outperforms other traditional approximation methods, such as the DC relaxation method, in terms of performance and execution time.
arXiv Detail & Related papers (2023-11-06T09:44:00Z)
- Graph Neural Network-based Power Flow Model [0.42970700836450487]
A graph neural network (GNN) model is trained using historical power system data to predict power flow outcomes.
A comprehensive performance analysis is conducted, comparing the proposed GNN-based power flow model with the traditional DC power flow model.
arXiv Detail & Related papers (2023-07-05T06:09:25Z)
- Data-Driven Chance Constrained AC-OPF using Hybrid Sparse Gaussian Processes [57.70237375696411]
The paper proposes a fast data-driven setup that uses the sparse and hybrid Gaussian processes (GP) framework to model the power flow equations with input uncertainty.
We demonstrate the efficiency of the proposed approach through a numerical study on multiple IEEE test cases, showing solutions that are up to two times faster and more accurate.
arXiv Detail & Related papers (2022-08-30T09:27:59Z)
- Multi-fidelity power flow solver [0.0]
The proposed model comprises two networks -- the first one trained on DC approximation as low-fidelity data and the second one trained on both low- and high-fidelity power flow data.
We tested the model on 14- and 118-bus test cases and evaluated its performance based on the $n-k$ power flow prediction accuracy with respect to imbalanced contingency data and high-to-low-fidelity sample ratio.
arXiv Detail & Related papers (2022-05-26T13:43:26Z)
- Solving AC Power Flow with Graph Neural Networks under Realistic Constraints [3.114162328765758]
We propose a graph neural network architecture to solve the AC power flow problem under realistic constraints.
In our approach, we demonstrate the development of a framework that uses graph neural networks to learn the physical constraints of the power flow.
arXiv Detail & Related papers (2022-04-14T14:49:34Z)
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
arXiv Detail & Related papers (2022-03-29T21:18:47Z)
- Online learning of windmill time series using Long Short-term Cognitive Networks [58.675240242609064]
The amount of data generated on windmill farms makes online learning the most viable strategy to follow.
We use Long Short-term Cognitive Networks (LSTCNs) to forecast windmill time series in online settings.
Our approach achieved the lowest forecasting errors compared with a simple RNN, a Long Short-term Memory, a Gated Recurrent Unit, and a Hidden Markov Model.
arXiv Detail & Related papers (2021-07-01T13:13:24Z)
- Principal Component Density Estimation for Scenario Generation Using Normalizing Flows [62.997667081978825]
We propose a dimensionality-reducing flow layer based on the linear principal component analysis (PCA) that sets up the normalizing flow in a lower-dimensional space.
We train the resulting principal component flow (PCF) on data of PV and wind power generation as well as load demand in Germany in the years 2013 to 2015.
arXiv Detail & Related papers (2021-04-21T08:42:54Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.