Physics-Enforced Modeling for Insertion Loss of Transmission Lines by
Deep Neural Networks
- URL: http://arxiv.org/abs/2107.12527v1
- Date: Tue, 27 Jul 2021 00:22:10 GMT
- Title: Physics-Enforced Modeling for Insertion Loss of Transmission Lines by
Deep Neural Networks
- Authors: Liang Chen, Lesley Tan
- Abstract summary: We show that direct application of neural networks can lead to non-physical models with negative insertion loss.
One solution is to add a regularization term, which represents the passivity condition, to the final loss function to enforce the negativity of the insertion loss.
In the second method, a third-order polynomial expression, which ensures positiveness, is defined first to approximate the insertion loss.
- Score: 4.762000720968522
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we investigate data-driven parameterized modeling of insertion
loss for transmission lines with respect to design parameters. We first show
that direct application of neural networks can lead to non-physical models with
negative insertion loss. To mitigate this problem, we propose two deep learning
solutions. The first solution adds a regularization term, which represents the
passivity condition, to the final loss function to enforce the negativity of the
insertion loss. In the second method, a third-order polynomial expression, which
ensures positiveness, is first defined to approximate the insertion loss; the
DeepONet neural network structure, recently proposed for function and system
modeling, is then employed to predict the coefficients of this polynomial.
open-sourced SI/PI database of a PCB design show that both methods ensure
the positiveness of the insertion loss. Both methods achieve similar prediction
accuracy, while the polynomial-based DeepONet method is faster to train than the
DeepONet-based method.
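The paper does not include code, but the first solution reduces to adding a penalty to the training loss. Below is a minimal PyTorch sketch under the convention that the network predicts |S21| in dB (negative for a passive line); the network shape, the hinge form of the penalty, and the weight `lam` are illustrative assumptions, not the authors' exact choices.

```python
import torch
import torch.nn as nn

class ILNet(nn.Module):
    """Plain MLP mapping design parameters plus frequency to |S21| in dB."""
    def __init__(self, n_params, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, params, freq):
        # params: (batch, n_params), freq: (batch, 1)
        return self.net(torch.cat([params, freq], dim=-1))

def passivity_regularized_loss(pred_db, target_db, lam=1.0):
    """MSE plus a hinge penalty that fires whenever the predicted |S21|
    exceeds 0 dB, i.e. whenever the passive line would appear to amplify."""
    mse = torch.mean((pred_db - target_db) ** 2)
    penalty = torch.mean(torch.relu(pred_db) ** 2)  # zero for passive outputs
    return mse + lam * penalty
```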
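For the second solution, the abstract specifies a third-order polynomial whose construction guarantees the correct sign, with a DeepONet-style branch network producing the coefficients. The exact positivity construction is not given in the abstract; the sketch below squares the cubic, one simple way to force a nonnegative output, and should be read as an assumption.

```python
import torch
import torch.nn as nn

class PolyDeepONet(nn.Module):
    """Branch network maps design parameters to cubic coefficients c0..c3;
    the cubic in frequency is squared so the modeled insertion loss is
    nonnegative by construction (sign handling here is illustrative)."""
    def __init__(self, n_params, hidden=64, order=3):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Linear(n_params, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, order + 1),
        )

    def forward(self, params, freq):
        # params: (batch, n_params), freq: (batch, 1)
        coeffs = self.branch(params)                     # (batch, order+1)
        powers = torch.cat([freq ** k for k in range(coeffs.shape[-1])], dim=-1)
        poly = (coeffs * powers).sum(dim=-1, keepdim=True)
        return poly ** 2                                 # >= 0 for any input
```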
Related papers
- SEF: A Method for Computing Prediction Intervals by Shifting the Error Function in Neural Networks [0.0]
The SEF (Shifting the Error Function) method presented in this paper is a new approach in this category of prediction-interval methods.
The approach trains a single neural network three times, generating a point estimate together with the corresponding upper and lower bounds for a given problem.
This process produces prediction intervals (PIs), yielding a robust and efficient technique for uncertainty quantification.
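The snippet above does not specify the SEF shift itself; as a rough illustration of getting a point estimate plus bounds from three trainings of one network, a standard quantile (pinball) loss can be used. This is a generic sketch, not the SEF method.

```python
import torch

def pinball_loss(pred, target, tau):
    """Quantile-regression loss: training the same network with
    tau = 0.5, 0.05, and 0.95 yields a point estimate plus lower
    and upper interval bounds, one training run per quantile."""
    diff = target - pred
    return torch.mean(torch.maximum(tau * diff, (tau - 1.0) * diff))
```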
arXiv Detail & Related papers (2024-09-08T19:46:45Z)
- Scalable Bayesian Inference in the Era of Deep Learning: From Gaussian Processes to Deep Neural Networks [0.5827521884806072]
Large neural networks trained on large datasets have become the dominant paradigm in machine learning.
This thesis develops scalable methods to equip neural networks with model uncertainty.
arXiv Detail & Related papers (2024-04-29T23:38:58Z)
- Efficient Bayesian inference using physics-informed invertible neural networks for inverse problems [6.97393424359704]
We introduce an approach for addressing Bayesian inverse problems using physics-informed invertible neural networks (PI-INN).
The PI-INN offers a precise and efficient generative model for Bayesian inverse problems, yielding tractable posterior density estimates.
As a particular physics-informed deep learning model, the primary training challenge for PI-INN centers on enforcing the independence constraint.
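The abstract does not describe the PI-INN architecture in detail; invertible networks in this family are commonly assembled from affine coupling layers in the style of RealNVP, so the block below is background on that generic building block rather than the paper's exact model.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One invertible block: the first half of x conditions a scale/shift
    applied to the second half, so the inverse is available in closed form."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[..., :self.half], x[..., self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=-1)

    def inverse(self, y):
        y1, y2 = y[..., :self.half], y[..., self.half:]
        s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=-1)
```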
arXiv Detail & Related papers (2023-04-25T03:17:54Z)
- A predictive physics-aware hybrid reduced order model for reacting flows [65.73506571113623]
A new hybrid predictive Reduced Order Model (ROM) is proposed to solve reacting flow problems.
The number of degrees of freedom is reduced from thousands of temporal points to a few POD modes with their corresponding temporal coefficients.
Two different deep learning architectures have been tested to predict the temporal coefficients.
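As background on the reduction step described above, the POD basis and temporal coefficients can be computed from a snapshot matrix with one SVD; a minimal NumPy sketch (the names and the closing comment are illustrative, not the paper's pipeline):

```python
import numpy as np

def pod_reduce(snapshots, n_modes):
    """snapshots: (n_dof, n_time) matrix of saved flow states.
    Returns the leading POD modes and their temporal coefficients,
    reducing thousands of DOFs to a handful of modal amplitudes."""
    U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
    modes = U[:, :n_modes]                     # spatial basis
    coeffs = S[:n_modes, None] * Vt[:n_modes]  # a_k(t), shape (n_modes, n_time)
    return modes, coeffs

# A neural network is then trained to advance a(t) -> a(t + dt);
# the state is recovered as snapshots ~ modes @ coeffs.
```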
arXiv Detail & Related papers (2023-01-24T08:39:20Z)
- Finite Sample Identification of Wide Shallow Neural Networks with Biases [12.622813055808411]
The identification of the parameters of the network from finite samples of input-output pairs is often referred to as the teacher-student model.
This paper fills the gap by providing constructive methods and theoretical guarantees of finite sample identification for such wider shallow networks with biases.
arXiv Detail & Related papers (2022-11-08T22:10:32Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
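The second-derivative trick above can be made concrete: for a Gaussian-smoothed function, Stein's identity turns second derivatives into an expectation over function values only. A one-dimensional Monte-Carlo sketch follows (the paper applies this to the smoothed PDE solution; the sampling sizes here are illustrative):

```python
import torch

def smoothed_second_derivative(f, x, sigma=0.5, n_samples=200_000):
    """Estimate d^2/dx^2 of the Gaussian-smoothed f at x via Stein's
    identity: E[f(x + d) * (d^2 - sigma^2)] / sigma^4 with d ~ N(0, sigma^2).
    No backpropagation through f is required; the estimate is stochastic,
    and its variance grows quickly as sigma shrinks."""
    d = sigma * torch.randn(n_samples)
    return torch.mean(f(x + d) * (d ** 2 - sigma ** 2)) / sigma ** 4

# Sanity check on f(x) = x^2, whose second derivative is 2 everywhere:
print(smoothed_second_derivative(lambda t: t ** 2, torch.tensor(1.0)))  # ~2.0
```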
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
- Model Order Reduction based on Runge-Kutta Neural Network [0.0]
In this work, we apply modifications to both steps and investigate their impact using three simulation models.
For the model reconstruction step, two types of neural network architectures are compared: Multilayer Perceptron (MLP) and Runge-Kutta Neural Network (RKNN).
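An RKNN of the kind compared above typically wraps an MLP for the latent derivative inside a fixed Runge-Kutta update, so the network learns dz/dt rather than the next state directly; a minimal RK4-flavored sketch (layer sizes and step are illustrative):

```python
import torch
import torch.nn as nn

class RKNN(nn.Module):
    """Runge-Kutta Neural Network: an MLP models dz/dt and a fixed RK4
    scheme integrates it one step, z(t) -> z(t + dt)."""
    def __init__(self, dim, hidden=64, dt=0.01):
        super().__init__()
        self.dt = dt
        self.f = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def forward(self, z):
        dt, f = self.dt, self.f
        k1 = f(z)
        k2 = f(z + 0.5 * dt * k1)
        k3 = f(z + 0.5 * dt * k2)
        k4 = f(z + dt * k3)
        return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```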
arXiv Detail & Related papers (2021-03-25T13:02:16Z)
- A Bayesian Perspective on Training Speed and Model Selection [51.15664724311443]
We show that a measure of a model's training speed can be used to estimate its marginal likelihood.
We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks.
Our results suggest a promising new direction towards explaining why neural networks trained with gradient descent are biased towards functions that generalize well.
arXiv Detail & Related papers (2020-10-27T17:56:14Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
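Read as an alternating descent-ascent loop, the min-max formulation can be sketched as follows; the quadratic-regularized objective and all names here are an illustrative reading of such adversarial estimators, not the paper's exact procedure.

```python
import torch

def minimax_step(primal, adversary, x, y, opt_p, opt_a):
    """One alternating step: the adversarial test-function network ascends
    the game objective, then the primal model descends it. The objective
    E[f(x)(y - g(x)) - f(x)^2 / 2] is a common choice for such estimators."""
    def objective():
        f, r = adversary(x), y - primal(x)
        return (f * r).mean() - 0.5 * (f ** 2).mean()
    opt_a.zero_grad(); (-objective()).backward(); opt_a.step()  # ascent step
    opt_p.zero_grad(); objective().backward(); opt_p.step()     # descent step
```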
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion for producing deep neural networks with higher testing performance is analyzed and demonstrated.
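Details of Disout are not in the snippet above; in simplified form, the idea is to perturb randomly chosen feature-map elements rather than zero them as dropout does. The module below is that simplified reading (the noise scale and selection rule are assumptions, not the paper's derived distortion):

```python
import torch
import torch.nn as nn

class FeatureDistortion(nn.Module):
    """Dropout-style regularizer that adds noise to randomly selected
    feature-map elements instead of zeroing them (a simplified reading
    of Disout; the paper derives the distortion from Rademacher complexity)."""
    def __init__(self, p=0.1, alpha=1.0):
        super().__init__()
        self.p, self.alpha = p, alpha

    def forward(self, x):
        if not self.training:
            return x  # identity at inference time, like dropout
        mask = (torch.rand_like(x) < self.p).float()
        noise = self.alpha * x.std() * torch.randn_like(x)
        return x + mask * noise
```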
arXiv Detail & Related papers (2020-02-23T13:59:13Z)