Physics-informed neural networks for solving parametric magnetostatic
problems
- URL: http://arxiv.org/abs/2202.04041v1
- Date: Tue, 8 Feb 2022 18:12:26 GMT
- Title: Physics-informed neural networks for solving parametric magnetostatic
problems
- Authors: Andrés Beltrán-Pulido, Ilias Bilionis, Dionysios Aliprantis
- Abstract summary: This paper investigates the ability of physics-informed neural networks to learn the magnetic field response as a function of design parameters.
We use a deep neural network (DNN) to represent the magnetic field as a function of space and a total of ten parameters.
- Score: 0.45119235878273
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The optimal design of magnetic devices becomes intractable using current
computational methods when the number of design parameters is high. The
emerging physics-informed deep learning framework has the potential to
alleviate this curse of dimensionality. The objective of this paper is to
investigate the ability of physics-informed neural networks to learn the
magnetic field response as a function of design parameters in the context of a
two-dimensional (2-D) magnetostatic problem. Our approach is as follows. We
derive the variational principle for 2-D parametric magnetostatic problems, and
prove the existence and uniqueness of the solution that satisfies the equations
of the governing physics, i.e., Maxwell's equations. We use a deep neural
network (DNN) to represent the magnetic field as a function of space and a
total of ten parameters that describe geometric features and operating point
conditions. We train the DNN by minimizing the physics-informed loss function
using a variant of stochastic gradient descent. Subsequently, we conduct
systematic numerical studies using a parametric EI-core electromagnet problem.
In these studies, we vary the DNN architecture, trying more than one hundred
different possibilities. For each study, we evaluate the accuracy of the DNN by
comparing its predictions to those of finite element analysis. In an exhaustive
non-parametric study, we observe that sufficiently parameterized dense networks
result in relative errors of less than 1%. Residual connections always improve
relative errors for the same number of training iterations. Also, we observe
that Fourier encoding features aligned with the device geometry do improve the
rate of convergence, although higher-order harmonics are not necessary. Finally,
we demonstrate our approach on a ten-dimensional problem with parameterized
geometry.
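As a concrete illustration of the physics-informed loss described in the abstract, here is a minimal NumPy sketch (not the authors' code): it evaluates the mean squared residual of a 2-D Poisson-type magnetostatic equation on a grid, with finite differences standing in for automatic differentiation, plus a low-order Fourier encoding of the spatial coordinates. All function and variable names are illustrative assumptions.

```python
import numpy as np

def fourier_features(x, y, orders=(1, 2)):
    """Encode spatial coordinates with low-order Fourier features.
    The abstract notes that harmonics aligned with the device geometry
    improve convergence and that higher orders are unnecessary."""
    feats = [x, y]
    for k in orders:
        feats += [np.sin(k * np.pi * x), np.cos(k * np.pi * x),
                  np.sin(k * np.pi * y), np.cos(k * np.pi * y)]
    return np.stack(feats, axis=-1)

def physics_informed_loss(A, source, h):
    """Mean squared residual of -laplacian(A) = source on the interior
    of a uniform grid with spacing h (5-point finite-difference stencil
    stands in for the automatic differentiation a PINN would use)."""
    lap = (A[:-2, 1:-1] + A[2:, 1:-1] + A[1:-1, :-2] + A[1:-1, 2:]
           - 4.0 * A[1:-1, 1:-1]) / h**2
    residual = -lap - source[1:-1, 1:-1]
    return np.mean(residual**2)

# Sanity check on a manufactured solution: A = sin(pi x) sin(pi y)
# satisfies -laplacian(A) = 2 pi^2 sin(pi x) sin(pi y) on the unit square,
# so the physics-informed loss of the exact solution should be near zero.
n = 101
h = 1.0 / (n - 1)
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
A_exact = np.sin(np.pi * x) * np.sin(np.pi * y)
source = 2.0 * np.pi**2 * A_exact
print(physics_informed_loss(A_exact, source, h))  # near zero for the exact solution
```

In a full PINN, this loss would be minimized over the network weights by stochastic gradient descent, with the design parameters appended to the encoded spatial inputs.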
Related papers
- Grad-Shafranov equilibria via data-free physics informed neural networks [0.0]
We show that PINNs can accurately and effectively solve the Grad-Shafranov equation with several different boundary conditions.
We introduce a parameterized PINN framework, expanding the input space to include variables such as pressure, aspect ratio, elongation, and triangularity.
arXiv Detail & Related papers (2023-11-22T16:08:38Z)
- Learning the solution operator of two-dimensional incompressible Navier-Stokes equations using physics-aware convolutional neural networks [68.8204255655161]
We introduce a technique with which it is possible to learn approximate solutions to the steady-state Navier-Stokes equations in varying geometries without the need for parametrization.
The results of our physics-aware CNN are compared to a state-of-the-art data-based approach.
arXiv Detail & Related papers (2023-08-04T05:09:06Z)
- Mixed formulation of physics-informed neural networks for thermo-mechanically coupled systems and heterogeneous domains [0.0]
Physics-informed neural networks (PINNs) are a new tool for solving boundary value problems.
Recent investigations have shown that when designing loss functions for many engineering problems, using first-order derivatives and combining equations from both strong and weak forms can lead to much better accuracy.
In this work, we propose applying the mixed formulation to solve multi-physical problems, specifically a stationary thermo-mechanically coupled system of equations.
arXiv Detail & Related papers (2023-02-09T21:56:59Z)
- Learning to Solve PDE-constrained Inverse Problems with Graph Networks [51.89325993156204]
In many application domains across science and engineering, we are interested in solving inverse problems with constraints defined by a partial differential equation (PDE).
Here we explore GNNs to solve such PDE-constrained inverse problems.
We demonstrate computational speedups of up to 90x using GNNs compared to principled solvers.
arXiv Detail & Related papers (2022-06-01T18:48:01Z)
- PI-VAE: Physics-Informed Variational Auto-Encoder for stochastic differential equations [2.741266294612776]
We propose a new class of physics-informed neural networks, called the physics-informed variational autoencoder (PI-VAE).
PI-VAE consists of a variational autoencoder (VAE), which generates samples of system variables and parameters.
The satisfactory accuracy and efficiency of the proposed method are demonstrated numerically in comparison with a physics-informed generative adversarial network (PI-WGAN).
arXiv Detail & Related papers (2022-03-21T21:51:19Z)
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Learning in Sinusoidal Spaces with Physics-Informed Neural Networks [22.47355575565345]
A physics-informed neural network (PINN) uses physics-augmented loss functions to ensure its output is consistent with fundamental physics laws.
In practice, it turns out to be difficult to train an accurate PINN model for many problems.
arXiv Detail & Related papers (2021-09-20T07:42:41Z)
- Simultaneous boundary shape estimation and velocity field de-noising in Magnetic Resonance Velocimetry using Physics-informed Neural Networks [70.7321040534471]
Magnetic resonance velocimetry (MRV) is a non-invasive technique widely used in medicine and engineering to measure the velocity field of a fluid.
Previous studies have required the shape of the boundary (for example, a blood vessel) to be known a priori.
We present a physics-informed neural network that instead uses the noisy MRV data alone to infer the most likely boundary shape and de-noised velocity field.
arXiv Detail & Related papers (2021-07-16T12:56:09Z)
- Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
arXiv Detail & Related papers (2021-04-06T18:29:14Z)
- Extreme Theory of Functional Connections: A Physics-Informed Neural Network Method for Solving Parametric Differential Equations [0.0]
We present a physics-informed method for solving problems involving parametric differential equations (DEs) called X-TFC.
X-TFC differs from PINN and Deep-TFC: whereas those methods use a deep NN, X-TFC uses a single-layer NN, or more precisely, an Extreme Learning Machine (ELM).
arXiv Detail & Related papers (2020-05-15T22:51:04Z)
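The back-propagation-free trick mentioned in the entry "Learning Physics-Informed Neural Networks without Stacked Back-propagation" above can be sketched as follows. For a Gaussian-smoothed function, Stein's identity lets one estimate second derivatives by Monte-Carlo sampling alone, with no nested differentiation through the model. This is an illustrative NumPy sketch of the identity itself, not the paper's implementation; the function name and parameters are assumptions.

```python
import numpy as np

def stein_second_derivative(f, x, sigma=1.0, n_samples=200_000, seed=0):
    """Monte-Carlo estimate of the second derivative of the Gaussian-smoothed
    function f_sigma(x) = E[f(x + sigma * eps)], eps ~ N(0, 1), via Stein's
    identity:  f_sigma''(x) = E[(eps**2 - 1) * f(x + sigma * eps)] / sigma**2.
    Note that no back-propagation through f is required."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n_samples)
    return np.mean((eps**2 - 1.0) * f(x + sigma * eps)) / sigma**2

# For f(x) = x**2 the smoothed second derivative is exactly 2 everywhere,
# so the Monte-Carlo estimate should land close to 2.0.
estimate = stein_second_derivative(lambda z: z**2, x=0.5)
print(estimate)  # close to 2.0
```

In a PINN loss, such estimates would replace the second-order automatic derivatives of the network output that appear in the PDE residual.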
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.