Solving higher-order Lane-Emden-Fowler type equations using
physics-informed neural networks: benchmark tests comparing soft and hard
constraints
- URL: http://arxiv.org/abs/2307.07302v1
- Date: Fri, 14 Jul 2023 12:27:05 GMT
- Authors: Hubert Baty
- Abstract summary: Physics-Informed Neural Networks (PINNs) are presented with the aim of solving higher-order ordinary differential equations (ODEs). This deep-learning technique is successfully applied to different classes of singular ODEs.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, numerical methods using Physics-Informed Neural Networks
(PINNs) are presented with the aim of solving higher-order ordinary differential
equations (ODEs). Indeed, this deep-learning technique is successfully applied
to different classes of singular ODEs, namely the well-known second-order
Lane-Emden equations, third-order Emden-Fowler equations, and fourth-order
Lane-Emden-Fowler equations. Two variants of the PINN technique are considered
and compared. First, a minimization procedure is used to constrain the total
loss function of the neural network, in which the equation residual is included
with some weight to form a physics-based loss, added to a training-data loss
that contains the initial/boundary conditions. Second, a specific choice of
trial solutions is made that enforces these conditions as hard constraints, so
that only the differential equation itself must be satisfied, in contrast to
the first variant, where the conditions enter the training data as soft
constraints. Advantages and drawbacks of the two PINN variants are highlighted.
Related papers
- Discovery of Quasi-Integrable Equations from traveling-wave data using the Physics-Informed Neural Networks [0.0]
PINNs are used to study vortex solutions in 2+1 dimensional nonlinear partial differential equations.
We consider PINNs with conservation laws (referred to as cPINNs), deformations of the initial profiles, and a friction approach to improve the resolution of the identification.
arXiv Detail & Related papers (2024-10-23T08:29:13Z) - Improving PINNs By Algebraic Inclusion of Boundary and Initial Conditions [0.1874930567916036]
"AI for Science" aims to solve fundamental scientific problems using AI techniques.
In this work we explore the possibility of changing the model being trained from being just a neural network to being a non-linear transformation of it.
This reduces the number of terms in the loss function compared to the standard PINN losses.
arXiv Detail & Related papers (2024-07-30T11:19:48Z) - Augmented neural forms with parametric boundary-matching operators for solving ordinary differential equations [0.0]
This paper introduces a formalism for systematically crafting proper neural forms with boundary matches that are amenable to optimization.
It describes a novel technique for converting problems with Neumann or Robin conditions into equivalent problems with parametric Dirichlet conditions.
The proposed augmented neural forms approach was tested on a set of diverse problems, encompassing first- and second-order ordinary differential equations, as well as first-order systems.
arXiv Detail & Related papers (2024-04-30T11:10:34Z) - A Stable and Scalable Method for Solving Initial Value PDEs with Neural
Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver which prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z) - Solving differential equations using physics informed deep learning: a
hands-on tutorial with benchmark tests [0.0]
We revisit the original approach of using deep learning and neural networks to solve differential equations.
We focus on using the least possible amount of data in the training process.
A tutorial on a simple equation model illustrates how to put into practice the method for ordinary differential equations.
arXiv Detail & Related papers (2023-02-23T16:08:39Z) - Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural
Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z) - DEQGAN: Learning the Loss Function for PINNs with Generative Adversarial
Networks [1.0499611180329804]
This work presents Differential Equation GAN (DEQGAN), a novel method for solving differential equations using generative adversarial networks.
We show that DEQGAN achieves multiple orders of magnitude lower mean squared errors than PINNs.
We also show that DEQGAN achieves solution accuracies that are competitive with popular numerical methods.
arXiv Detail & Related papers (2022-09-15T06:39:47Z) - Neural Basis Functions for Accelerating Solutions to High Mach Euler
Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z) - Learning to Solve PDE-constrained Inverse Problems with Graph Networks [51.89325993156204]
In many application domains across science and engineering, we are interested in solving inverse problems with constraints defined by a partial differential equation (PDE).
Here we explore GNNs to solve such PDE-constrained inverse problems.
We demonstrate computational speedups of up to 90x using GNNs compared to principled solvers.
arXiv Detail & Related papers (2022-06-01T18:48:01Z) - Learning Physics-Informed Neural Networks without Stacked
Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
arXiv Detail & Related papers (2022-02-18T18:07:54Z) - Characterizing possible failure modes in physics-informed neural
networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
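The "Learning Physics-Informed Neural Networks without Stacked Back-propagation" entry above rests on a concrete identity: for the Gaussian-smoothed function f_sigma(x) = E[f(x + sigma*eps)] with eps ~ N(0, 1), Stein's identity gives the second derivative as f_sigma''(x) = E[f(x + sigma*eps) * (eps^2 - 1)] / sigma^2, using only forward evaluations of f. A minimal Monte Carlo sketch of that estimator (illustrative only; the sample size, sigma, and test function are arbitrary choices, not the paper's setup):

```python
import random

def stein_second_derivative(f, x, sigma=0.5, n_samples=200_000, seed=0):
    """Estimate the second derivative of the Gaussian-smoothed f at x via
    Stein's identity:
        f_sigma''(x) = E[f(x + sigma*eps) * (eps**2 - 1)] / sigma**2,
    with eps ~ N(0, 1). Only forward evaluations of f are needed, so no
    back-propagation through f is required."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        eps = rng.gauss(0.0, 1.0)
        acc += f(x + sigma * eps) * (eps * eps - 1.0)
    return acc / (n_samples * sigma * sigma)

# For f(x) = x**2 the estimator is unbiased even for finite sigma,
# since the Gaussian smoothing of a quadratic shifts it by a constant
# (f_sigma(x) = x**2 + sigma**2), leaving f'' = 2 unchanged.
est = stein_second_derivative(lambda t: t * t, x=0.3)
```

For general f the estimate targets the smoothed function, so it carries an O(sigma^2) bias alongside the Monte Carlo noise; the paper's contribution is making this trade worthwhile inside PINN training.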
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.