Solving differential equations using physics informed deep learning: a
hands-on tutorial with benchmark tests
- URL: http://arxiv.org/abs/2302.12260v2
- Date: Tue, 4 Apr 2023 16:00:35 GMT
- Title: Solving differential equations using physics informed deep learning: a
hands-on tutorial with benchmark tests
- Authors: Hubert Baty, Leo Baty
- Abstract summary: We revisit the original approach of using deep learning and neural networks to solve differential equations.
We focus on using as little data as possible in the training process.
A tutorial on a simple equation model illustrates how to put into practice the method for ordinary differential equations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We revisit the original approach of using deep learning and neural networks
to solve differential equations by incorporating the knowledge of the equation.
This is done by adding a dedicated term to the loss function during the
optimization procedure in the training process. The so-called physics-informed
neural networks (PINNs) are tested on a variety of academic ordinary
differential equations in order to highlight the benefits and drawbacks of this
approach with respect to standard integration methods. We focus on using as
little data as possible in the training process.
The principles of PINNs for solving differential equations by enforcing
physical laws via penalizing terms are reviewed. A tutorial on a simple
equation model illustrates how to put into practice the method for ordinary
differential equations. Benchmark tests show that a very small amount of
training data is sufficient to predict the solution when the nonlinearity of
the problem is weak. However, this is not the case for strongly nonlinear
problems, where a priori knowledge of training data over part or all of the
time integration interval is necessary.
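As a minimal illustration of the physics-informed loss described above, the sketch below solves the toy ODE y'(t) = -y(t), y(0) = 1, with a one-parameter trial solution standing in for a neural network; the loss contains only the equation residual at collocation points, so no training data at all is used. This is an assumed toy setup for illustration, not one of the paper's benchmarks.

```python
import numpy as np

# Toy PINN-style setup for y'(t) = -y(t), y(0) = 1.
# The "network" is replaced by the one-parameter trial solution
# y_hat(t) = exp(a * t), which satisfies the initial condition exactly,
# so the loss reduces to the physics residual alone.

def physics_loss(a, t):
    y_hat = np.exp(a * t)
    dy_dt = a * np.exp(a * t)       # analytic derivative of the trial solution
    residual = dy_dt + y_hat        # enforce y' + y = 0 at collocation points
    return np.mean(residual ** 2)

t_colloc = np.linspace(0.0, 2.0, 50)   # collocation points in time

# Crude "training": scan the parameter and keep the loss minimizer.
grid = np.linspace(-3.0, 1.0, 401)
losses = [physics_loss(a, t_colloc) for a in grid]
a_best = grid[int(np.argmin(losses))]
print(a_best)   # close to -1, i.e. y(t) = exp(-t)
```

A real PINN replaces the grid scan with gradient-based optimization of network weights, and the analytic derivative with automatic differentiation of the network output with respect to its input.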
Related papers
- Implementation and (Inverse Modified) Error Analysis for
implicitly-templated ODE-nets [0.0]
We focus on learning unknown dynamics from data using ODE-nets templated on implicit numerical initial value problem solvers.
We perform Inverse Modified error analysis of the ODE-nets using unrolled implicit schemes for ease of interpretation.
We formulate an adaptive algorithm which monitors the level of error and adapts the number of (unrolled) implicit solution iterations.
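The implicit templating can be sketched as follows: an implicit Euler step y1 = y0 + h*f(y1) is approximated by a fixed number of unrolled fixed-point iterations, the quantity an adaptive variant would monitor and adjust. This is a hypothetical minimal sketch, not the authors' implementation.

```python
import numpy as np

# "Unrolled" implicit Euler step: the implicit update y1 = y0 + h * f(y1)
# is approximated by n_iter fixed-point iterations.

def unrolled_implicit_euler_step(f, y0, h, n_iter=5):
    y1 = y0                       # initial guess: the explicit value
    for _ in range(n_iter):
        y1 = y0 + h * f(y1)       # unrolled fixed-point iteration
    return y1

# Example: linear decay y' = -4y; the exact implicit step factor is 1/(1 + 4h).
f = lambda y: -4.0 * y
y1 = unrolled_implicit_euler_step(f, y0=1.0, h=0.1, n_iter=20)
print(y1)   # converges toward 1 / 1.4
```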
arXiv Detail & Related papers (2023-03-31T06:47:02Z)
- Locally Regularized Neural Differential Equations: Some Black Boxes Were Meant to Remain Closed! [3.222802562733787]
Implicit layer deep learning techniques, like Neural Differential Equations, have become an important modeling framework.
We develop two sampling strategies to trade off between performance and training time.
Our method reduces the number of function evaluations to 0.556-0.733x and accelerates predictions by 1.3-2x.
arXiv Detail & Related papers (2023-03-03T23:31:15Z)
- Mixed formulation of physics-informed neural networks for thermo-mechanically coupled systems and heterogeneous domains [0.0]
Physics-informed neural networks (PINNs) are a new tool for solving boundary value problems.
Recent investigations have shown that when designing loss functions for many engineering problems, using first-order derivatives and combining equations from both strong and weak forms can lead to much better accuracy.
In this work, we propose applying the mixed formulation to solve multi-physical problems, specifically a stationary thermo-mechanically coupled system of equations.
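The mixed-formulation idea can be sketched for the scalar model problem u''(x) = f(x): introduce the flux q = u' as a second output and penalize two first-order residuals instead of one second-order residual. Below, network outputs are stood in for by sampled arrays and derivatives by finite differences; this is an illustrative toy, not the paper's thermo-mechanical implementation.

```python
import numpy as np

# Mixed (first-order) formulation for u''(x) = f(x):
# r1 = q - du/dx  (the flux definition)
# r2 = dq/dx - f  (the equation itself, now first order in q)

def mixed_loss(u, q, f, x):
    r1 = q - np.gradient(u, x)      # q should equal du/dx
    r2 = np.gradient(q, x) - f      # dq/dx should equal f
    return np.mean(r1**2) + np.mean(r2**2)

x = np.linspace(0, np.pi, 200)
u_exact = np.sin(x)                 # satisfies u'' = -sin(x)
q_exact = np.cos(x)
f = -np.sin(x)

good = mixed_loss(u_exact, q_exact, f, x)
bad = mixed_loss(np.cos(x), -np.sin(x), f, x)   # a wrong candidate pair
print(good < bad)   # True: the exact pair has much lower loss
```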
arXiv Detail & Related papers (2023-02-09T21:56:59Z)
- Physics-guided Data Augmentation for Learning the Solution Operator of Linear Differential Equations [2.1850269949775663]
We propose a physics-guided data augmentation (PGDA) method to improve the accuracy and generalization of neural operator models.
We demonstrate the advantage of PGDA on a variety of linear differential equations, showing that PGDA can improve the sample complexity and is robust to distributional shift.
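The augmentation exploits linearity: if L u1 = f1 and L u2 = f2, then L(a·u1 + b·u2) = a·f1 + b·f2, so new training pairs can be synthesized at no cost. The sketch below uses a finite-difference operator as a hypothetical stand-in for the linear differential operator; the paper applies the idea to neural operator training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthesize new (input, output) pairs as random linear combinations
# of existing pairs; consistency is guaranteed by linearity of L.
def augment(u1, f1, u2, f2, n_new=4):
    pairs = []
    for _ in range(n_new):
        a, b = rng.normal(size=2)
        pairs.append((a * u1 + b * u2, a * f1 + b * f2))
    return pairs

# Toy linear operator: periodic second finite-difference derivative.
def L(u, h):
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
h = x[1] - x[0]
u1, u2 = np.sin(x), np.cos(2 * x)
f1, f2 = L(u1, h), L(u2, h)

for u_new, f_new in augment(u1, f1, u2, f2):
    assert np.allclose(L(u_new, h), f_new)   # augmented pairs stay consistent
```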
arXiv Detail & Related papers (2022-12-08T06:29:15Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
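A minimal sketch of online gradient descent for pairwise learning: a linear scorer is updated one pair at a time under a pairwise squared hinge loss, as in AUC-style objectives. This is an illustrative toy setup, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Online gradient descent on the pairwise loss max(0, 1 - w.(x_pos - x_neg))^2:
# the scorer should rank the positive instance above the negative one.
def ogd_pairwise(pairs, lr=0.1, dim=2):
    w = np.zeros(dim)
    for x_pos, x_neg in pairs:            # pairs arrive in a stream
        margin = w @ (x_pos - x_neg)
        if margin < 1.0:                  # gradient step of max(0, 1 - margin)^2
            w += lr * 2 * (1.0 - margin) * (x_pos - x_neg)
    return w

# Stream of (positive, negative) pairs separable along the first coordinate.
pairs = [(rng.normal([2, 0], 0.1), rng.normal([-2, 0], 0.1)) for _ in range(200)]
w = ogd_pairwise(pairs)
print(w @ np.array([2.0, 0.0]) > w @ np.array([-2.0, 0.0]))   # True
```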
arXiv Detail & Related papers (2021-11-23T18:10:48Z)
- Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
arXiv Detail & Related papers (2021-04-06T18:29:14Z)
- Computational characteristics of feedforward neural networks for solving a stiff differential equation [0.0]
We study the solution of a simple but fundamental stiff ordinary differential equation modelling a damped system.
We show that it is possible to identify preferable choices to be made for parameters and methods.
Overall, we extend the current literature by showing what can be done to obtain reliable and accurate results with the neural network approach.
arXiv Detail & Related papers (2020-12-03T12:22:24Z)
- Fourier Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
We formulate a new neural operator by parameterizing the integral kernel directly in Fourier space.
We perform experiments on Burgers' equation, Darcy flow, and Navier-Stokes equation.
It is up to three orders of magnitude faster compared to traditional PDE solvers.
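The core Fourier-layer idea can be sketched as: transform the input function to Fourier space, multiply a truncated set of low-frequency modes by complex weights (random here, learned in practice), and transform back. This is a numpy stand-in for the learned operator, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# One spectral "convolution": keep only the first n_modes Fourier modes,
# scale them by per-mode complex weights, and return to physical space.
def fourier_layer(u, weights, n_modes):
    u_hat = np.fft.rfft(u)                          # to Fourier space
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]   # mix low modes only
    return np.fft.irfft(out_hat, n=len(u))          # back to physical space

n, n_modes = 128, 12
weights = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(3 * x)

v = fourier_layer(u, weights, n_modes)
print(v.shape)   # same length as the input grid
```

Because the weights act on modes rather than grid points, the same layer can be evaluated on a finer or coarser grid, which underlies the operator's resolution independence.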
arXiv Detail & Related papers (2020-10-18T00:34:21Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions, as well as to state-of-the-art numerical solvers such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
- STEER: Simple Temporal Regularization For Neural ODEs [80.80350769936383]
We propose a new regularization technique: randomly sampling the end time of the ODE during training.
The proposed regularization is simple to implement, has negligible overhead and is effective across a wide variety of tasks.
We show through experiments on normalizing flows, time series models and image recognition that the proposed regularization can significantly decrease training time and even improve performance over baseline models.
arXiv Detail & Related papers (2020-06-18T17:44:50Z)
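The STEER-style regularization reduces to a one-line sampling step: instead of always integrating the neural ODE to a fixed end time T, each training step draws the end time uniformly from an interval around T. The skeleton below shows only this sampling step, with the ODE solve stubbed out; the bound b < T is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Per training step: sample the integration end time from (T - b, T + b)
# instead of always using T; the ODE solver then integrates to this time.
def sample_end_time(T, b):
    assert 0 < b < T
    return rng.uniform(T - b, T + b)

T, b = 1.0, 0.5
end_times = [sample_end_time(T, b) for _ in range(1000)]
print(min(end_times) >= T - b and max(end_times) <= T + b)   # True
```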
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.