Estimate Three-Phase Distribution Line Parameters With Physics-Informed
Graphical Learning Method
- URL: http://arxiv.org/abs/2102.09023v1
- Date: Wed, 17 Feb 2021 20:52:55 GMT
- Title: Estimate Three-Phase Distribution Line Parameters With Physics-Informed
Graphical Learning Method
- Authors: Wenyu Wang, Nanpeng Yu
- Abstract summary: We develop a physics-informed graphical learning algorithm to estimate network parameters of three-phase power distribution systems.
Our proposed algorithm uses only readily available smart meter data to estimate the three-phase series resistance and reactance of the primary distribution line segments.
- Score: 5.410616652484168
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate estimates of network parameters are essential for modeling,
monitoring, and control in power distribution systems. In this paper, we
develop a physics-informed graphical learning algorithm to estimate network
parameters of three-phase power distribution systems. Our proposed algorithm
uses only readily available smart meter data to estimate the three-phase series
resistance and reactance of the primary distribution line segments. We first
develop a parametric physics-based model to replace the black-box deep neural
networks in the conventional graphical neural network (GNN). Then we derive the
gradient of the loss function with respect to the network parameters and use
stochastic gradient descent (SGD) to estimate the physical parameters. Prior
knowledge of network parameters is also considered to further improve the
accuracy of estimation. Comprehensive numerical study results show that our
proposed algorithm yields high accuracy and outperforms existing methods.
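To make the estimation step concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of the core idea: recover the 3x3 series resistance R and reactance X of a single line segment by stochastic gradient descent on a physics-based voltage-drop model V_r ≈ V_s - (R + jX) I. It assumes synthetic complex voltage and current phasors and NumPy; the paper's actual setting is harder, since smart meters report only magnitudes and the feeder's segments are estimated jointly with a graphical learning model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth 3x3 series impedance Z = R + jX (ohms) of one
# primary line segment; the estimator never sees these values directly.
R_true = np.array([[0.46, 0.16, 0.15],
                   [0.16, 0.47, 0.17],
                   [0.15, 0.17, 0.45]])
X_true = np.array([[1.08, 0.42, 0.38],
                   [0.42, 1.05, 0.44],
                   [0.38, 0.44, 1.07]])
Z_true = R_true + 1j * X_true

# Synthetic measurement snapshots: a fixed sending-end voltage phasor and
# random three-phase load currents (the paper uses real smart meter data).
T = 500
V_s = 2400.0 * np.exp(1j * np.deg2rad([0.0, -120.0, 120.0]))
I = rng.normal(20.0, 10.0, (T, 3)) + 1j * rng.normal(0.0, 10.0, (T, 3))
noise = 0.5 * (rng.normal(size=(T, 3)) + 1j * rng.normal(size=(T, 3)))
V_r = V_s - I @ Z_true.T + noise  # receiving-end voltages, V_r = V_s - Z I

# SGD on the loss L(Z) = sum_t ||V_r[t] - (V_s - Z I[t])||^2. The Wirtinger
# gradient of one term with respect to Z is outer(residual, conj(I[t])).
Z_hat = np.zeros((3, 3), dtype=complex)  # prior knowledge could warm-start this
lr = 1e-4
for epoch in range(50):
    for t in rng.permutation(T):
        residual = V_r[t] - (V_s - Z_hat @ I[t])
        Z_hat -= lr * np.outer(residual, np.conj(I[t]))

print("estimated R (ohms):\n", np.round(Z_hat.real, 3))
print("estimated X (ohms):\n", np.round(Z_hat.imag, 3))
```

With full phasors the problem reduces to linear least squares, so SGD here is purely illustrative; magnitude-only measurements make the loss nonconvex, which is what motivates the paper's graphical, physics-informed formulation and its use of prior parameter knowledge.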
Related papers
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way toward practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Scalable Bayesian Inference in the Era of Deep Learning: From Gaussian Processes to Deep Neural Networks [0.5827521884806072]
Large neural networks trained on large datasets have become the dominant paradigm in machine learning.
This thesis develops scalable methods to equip neural networks with model uncertainty.
arXiv Detail & Related papers (2024-04-29T23:38:58Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Equivariant geometric learning for digital rock physics: estimating formation factor and effective permeability tensors from Morse graph [7.355408124040856]
We present an SE(3)-equivariant graph neural network (GNN) approach that directly predicts formation factor and effective permeability from micro-CT images.
FFT solvers are used to compute both the formation factor and effective permeability, while the topology and geometry of the pore space are represented by a persistence-based Morse graph.
arXiv Detail & Related papers (2021-04-12T16:28:25Z)
- Multi-fidelity Bayesian Neural Networks: Algorithms and Applications [0.0]
We propose a new class of Bayesian neural networks (BNNs) that can be trained using noisy data of variable fidelity.
We apply them to learn function approximations as well as to solve inverse problems based on partial differential equations (PDEs).
arXiv Detail & Related papers (2020-12-19T02:03:53Z)
- Parameter Estimation with Dense and Convolutional Neural Networks Applied to the FitzHugh-Nagumo ODE [0.0]
We present deep neural networks using dense and convolutional layers to solve an inverse problem: estimating the parameters of a FitzHugh-Nagumo model.
We demonstrate that deep neural networks have the potential to estimate parameters in dynamical models and processes, and that they predict those parameters accurately within this framework.
arXiv Detail & Related papers (2020-12-12T01:20:42Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)