Which Neural Network to Choose for Post-Fault Localization, Dynamic
State Estimation and Optimal Measurement Placement in Power Systems?
- URL: http://arxiv.org/abs/2104.03115v1
- Date: Wed, 7 Apr 2021 13:35:55 GMT
- Title: Which Neural Network to Choose for Post-Fault Localization, Dynamic
State Estimation and Optimal Measurement Placement in Power Systems?
- Authors: Andrei Afonin and Michael Chertkov
- Abstract summary: We consider a power transmission system monitored with Phasor Measurement Units (PMUs) placed at significant, but not all, nodes of the system.
We first design a comprehensive sequence of Neural Networks (NNs) locating the faulty line.
Second, we build a sequence of advanced Power-System-Dynamics-Informed and Neural-ODE based Machine Learning schemes trained, given pre-fault state, to predict the post-fault state.
Third, continuing with the first (fault-localization) setting, we design an NN-based algorithm that discovers the optimal PMU placement.
- Score: 4.416484585765027
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider a power transmission system monitored with Phasor Measurement
Units (PMUs) placed at significant, but not all, nodes of the system. Assuming
that a sufficient number of distinct single-line faults, specifically pre-fault
state and (not cleared) post-fault state, are recorded by the PMUs and are
available for training, we, first, design a comprehensive sequence of Neural
Networks (NNs) locating the faulty line. The performance of the different NNs in
the sequence, including Linear Regression, Feed-Forward NN, AlexNet, Graphical
Convolutional NN, Neural Linear ODE and Neural Graph-based ODE, ordered
according to the type and amount of power-flow physics involved, is
compared at different levels of observability. Second, we build a sequence of
advanced Power-System-Dynamics-Informed and Neural-ODE based Machine Learning
schemes trained, given pre-fault state, to predict the post-fault state and
also, in parallel, to estimate system parameters. Third, and finally,
continuing with the first (fault-localization) setting, we design an
NN-based algorithm that discovers the optimal PMU placement.
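For the first (fault-localization) task, a minimal sketch of the simplest nonlinear model in the sequence described above, a feed-forward NN mapping PMU phasor features to a faulty-line label, could look like the following (the layer sizes, feature layout, and synthetic inputs are illustrative assumptions, not the authors' architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

n_pmus = 10           # observed nodes (assumption: partial observability)
n_lines = 20          # candidate faulty lines, i.e. classifier outputs
n_feat = 4 * n_pmus   # real/imag parts of pre- and post-fault voltage phasors

def init_params(hidden=64):
    """One-hidden-layer feed-forward NN with random initial weights."""
    return {
        "W1": rng.normal(0.0, 0.1, (n_feat, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(0.0, 0.1, (hidden, n_lines)),
        "b2": np.zeros(n_lines),
    }

def forward(p, x):
    """x: (batch, n_feat) PMU features -> (batch, n_lines) softmax over lines."""
    h = np.tanh(x @ p["W1"] + p["b1"])
    logits = h @ p["W2"] + p["b2"]
    z = logits - logits.max(axis=1, keepdims=True)  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

params = init_params()
x = rng.normal(size=(5, n_feat))   # 5 synthetic fault records
probs = forward(params, x)
pred_line = probs.argmax(axis=1)   # most probable faulty line per record
```

Training such a classifier on recorded (pre-fault, post-fault) PMU snapshots with cross-entropy loss would correspond to the supervised setting the abstract describes; the graph-convolutional and Neural-ODE variants differ only in how `forward` is structured.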
Related papers
- GradINN: Gradient Informed Neural Network [2.287415292857564]
We propose a methodology inspired by Physics Informed Neural Networks (PINNs)
GradINNs leverage prior beliefs about a system's gradient to constrain the predicted function's gradient across all input dimensions.
We demonstrate the advantages of GradINNs, particularly in low-data regimes, on diverse problems spanning non time-dependent systems.
arXiv Detail & Related papers (2024-09-03T14:03:29Z)
- Physics-Informed Neural Network for Discovering Systems with Unmeasurable States with Application to Lithium-Ion Batteries [6.375364752891239]
We introduce a robust method for training PINN that uses fewer loss terms and thus constructs a less complex landscape for optimization.
Instead of having loss terms from each differential equation, this method embeds the dynamics into a loss function that quantifies the error between observed and predicted system outputs.
This is accomplished by numerically integrating the predicted states from the neural network (NN) using the known dynamics and transforming them to obtain a sequence of predicted outputs.
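The output-error idea summarized above can be sketched as follows: integrate a candidate dynamics model forward from a predicted initial state and penalize only the mismatch with measured outputs, instead of adding one residual term per differential equation. The scalar dynamics, forward-Euler integrator, and squared-error loss here are illustrative assumptions, not the paper's battery model:

```python
import numpy as np

def simulate(x0, theta, u, dt=0.01):
    """Forward-Euler integration of known dynamics dx/dt = -theta * x + u(t).
    Returns the predicted output sequence y_k = x_k (illustrative output map)."""
    x, ys = x0, []
    for uk in u:
        x = x + dt * (-theta * x + uk)
        ys.append(x)
    return np.array(ys)

def output_error_loss(x0, theta, u, y_meas, dt=0.01):
    """Single loss term: squared error between measured and simulated outputs.
    The dynamics are embedded through the integrator, so no separate
    ODE-residual terms are needed in the loss landscape."""
    y_pred = simulate(x0, theta, u, dt)
    return float(np.mean((y_pred - y_meas) ** 2))

# Synthetic check: data generated with the true parameter gives zero loss,
# and a wrong parameter gives a strictly larger loss.
rng = np.random.default_rng(1)
u = rng.normal(size=200)
y_meas = simulate(1.0, 2.0, u)
assert output_error_loss(1.0, 2.0, u, y_meas) < 1e-12
assert output_error_loss(1.0, 5.0, u, y_meas) > output_error_loss(1.0, 2.0, u, y_meas)
```

In a full PINN training loop, `x0` and `theta` would be outputs of the network and the loss would be minimized by gradient descent through the integrator.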
arXiv Detail & Related papers (2023-11-27T23:35:40Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Fluid Batching: Exit-Aware Preemptive Serving of Early-Exit Neural Networks on Edge NPUs [74.83613252825754]
"smart ecosystems" are being formed where sensing happens concurrently rather than standalone.
This is shifting the on-device inference paradigm towards deploying neural processing units (NPUs) at the edge.
We propose a novel early-exit scheduling that allows preemption at run time to account for the dynamicity introduced by the arrival and exiting processes.
arXiv Detail & Related papers (2022-09-27T15:04:01Z)
- Neural net modeling of equilibria in NSTX-U [0.0]
We develop two neural networks relevant to equilibrium and shape control modeling.
Networks include Eqnet, a free-boundary equilibrium solver trained on the EFIT01 reconstruction algorithm, and Pertnet, which is trained on the Gspert code.
We report strong performance for both networks, indicating that these models could reliably be used within closed-loop simulations.
arXiv Detail & Related papers (2022-02-28T16:09:58Z)
- Physics-Informed Neural Nets-based Control [5.252190504926357]
This work presents a new framework called Physics-Informed Neural Nets-based Control (PINC).
PINC is amenable to control problems and can simulate over longer time horizons that are not fixed beforehand.
We showcase our method in the control of two nonlinear dynamic systems.
arXiv Detail & Related papers (2021-04-06T14:55:23Z)
- Constrained Block Nonlinear Neural Dynamical Models [1.3163098563588727]
Neural network modules conditioned by known priors can be effectively trained and combined to represent systems with nonlinear dynamics.
The proposed method consists of neural network blocks that represent input, state, and output dynamics with constraints placed on the network weights and system variables.
We evaluate the performance of the proposed architecture and training methods on system identification tasks for three nonlinear systems.
arXiv Detail & Related papers (2021-01-06T04:27:54Z)
- A Meta-Learning Approach to the Optimal Power Flow Problem Under Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game in which both players are parameterized by neural networks (NNs), and we learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs, with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Resource Allocation via Graph Neural Networks in Free Space Optical Fronthaul Networks [119.81868223344173]
This paper investigates the optimal resource allocation in free space optical (FSO) fronthaul networks.
We consider the graph neural network (GNN) for the policy parameterization to exploit the FSO network structure.
A primal-dual learning algorithm is developed to train the GNN in a model-free manner, so knowledge of the system models is not required.
arXiv Detail & Related papers (2020-06-26T14:20:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.