An invariance constrained deep learning network for PDE discovery
- URL: http://arxiv.org/abs/2402.03747v1
- Date: Tue, 6 Feb 2024 06:28:17 GMT
- Title: An invariance constrained deep learning network for PDE discovery
- Authors: Chao Chen, Hui Li, Xiaowei Jin
- Abstract summary: In this study, we propose an invariance constrained deep learning network (ICNet) for the discovery of partial differential equations (PDEs).
We embed the fixed and possible terms into the loss function of the neural network, which significantly counters the effect of sparse data with high noise.
We select the 2D Burgers equation, the equations of 2D channel flow over an obstacle, and the equations of flow in a 3D intracranial aneurysm as examples to verify the superiority of the ICNet for fluid mechanics.
- Score: 7.7669872521725525
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The discovery of partial differential equations (PDEs) from datasets has
attracted increased attention. However, the discovery of governing equations
from sparse data with high noise is still very challenging due to the
difficulty of computing derivatives and the disturbance of noise. Moreover,
the selection principles for the candidate library to meet physical laws need
to be further studied. Invariance is one of the fundamental properties of
governing equations. In this study, we propose an invariance constrained deep
learning network (ICNet) for the discovery of PDEs. Considering that temporal
and spatial translation invariance (Galilean invariance) is a fundamental
property of physical laws, we filter the candidates that cannot meet the
requirement of the Galilean transformations. Subsequently, we embed the
fixed and possible terms into the loss function of the neural network,
which significantly counters the effect of sparse data with high noise. Then, by
filtering out redundant terms without fixing learnable parameters during the
training process, the governing equations discovered by the ICNet method can
effectively approximate the real governing equations. We select the 2D Burgers
equation, the equations of 2D channel flow over an obstacle, and the equations of
flow in a 3D intracranial aneurysm as examples to verify the superiority of the ICNet for
fluid mechanics. Furthermore, we extend similar invariance methods to the
discovery of wave equations (Lorentz invariance) and verify it on the single
and coupled Klein-Gordon equations. The results show that the ICNet method with
physical constraints exhibits excellent performance in governing equations
discovery from sparse and noisy data.
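The method as described amounts to: (i) restrict the candidate library to terms compatible with the stated invariance, (ii) attach learnable coefficients to the surviving candidates alongside the fixed term, and (iii) train a solution network with a loss that combines data fit and the PDE residual, pruning near-zero coefficients afterwards. The PyTorch sketch below illustrates this loss structure for a 1D Burgers-type problem; it is a minimal illustration under assumed names (SolutionNet, icnet_loss, the two-term library {u*u_x, u_xx}) and is not the authors' released implementation.

```python
# Minimal sketch of an ICNet-style loss for a 1D Burgers-type problem
# (assumed implementation for illustration, not the authors' released code).
import torch
import torch.nn as nn

class SolutionNet(nn.Module):
    """Fully connected network approximating u(x, t)."""
    def __init__(self, hidden=64, depth=4):
        super().__init__()
        layers, width = [], 2  # inputs: (x, t)
        for _ in range(depth):
            layers += [nn.Linear(width, hidden), nn.Tanh()]
            width = hidden
        layers.append(nn.Linear(width, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def grad(y, x):
    """Derivative of y w.r.t. x via automatic differentiation."""
    return torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]

net = SolutionNet()
# Learnable coefficients for the Galilean-invariant candidates {u*u_x, u_xx};
# non-invariant candidates (e.g. a bare u term) would be filtered out
# beforehand, and the fixed term u_t keeps a unit coefficient.
xi = nn.Parameter(torch.zeros(2))
optimizer = torch.optim.Adam(list(net.parameters()) + [xi], lr=1e-3)

def icnet_loss(x_d, t_d, u_d, x_c, t_c):
    # Data-fit term on sparse, noisy measurements (x_d, t_d, u_d).
    data_loss = ((net(x_d, t_d) - u_d) ** 2).mean()
    # PDE-residual term on collocation points: u_t + xi0*u*u_x + xi1*u_xx = 0.
    x_c, t_c = x_c.requires_grad_(True), t_c.requires_grad_(True)
    u = net(x_c, t_c)
    u_t, u_x = grad(u, t_c), grad(u, x_c)
    u_xx = grad(u_x, x_c)
    residual = u_t + xi[0] * u * u_x + xi[1] * u_xx
    return data_loss + (residual ** 2).mean()
```

After training, candidates whose coefficients remain near zero would be dropped, leaving the discovered equation; the Lorentz-invariant (Klein-Gordon) case would follow the same recipe with a different invariance filter and candidate library.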
Related papers
- Characteristic Performance Study on Solving Oscillator ODEs via Soft-constrained Physics-informed Neural Network with Small Data [6.3295494018089435]
This paper compares physics-informed neural networks (PINNs), conventional neural networks (NNs), and traditional numerical discretization methods for solving differential equations (DEs).
We focus on the soft-constrained PINN approach and formalize its mathematical framework and computational flow for solving ordinary and partial DEs.
We demonstrate that the DeepXDE-based implementation of PINN is not only lightweight and efficient to train, but also flexible across CPU/GPU platforms (a minimal sketch of such a soft-constrained loss appears after this list).
arXiv Detail & Related papers (2024-08-19T13:02:06Z) - Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
arXiv Detail & Related papers (2024-06-05T17:59:22Z) - Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
arXiv Detail & Related papers (2023-10-09T03:55:09Z) - Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z) - Mixed formulation of physics-informed neural networks for
thermo-mechanically coupled systems and heterogeneous domains [0.0]
Physics-informed neural networks (PINNs) are a new tool for solving boundary value problems.
Recent investigations have shown that when designing loss functions for many engineering problems, using first-order derivatives and combining equations from both strong and weak forms can lead to much better accuracy.
In this work, we propose applying the mixed formulation to solve multi-physical problems, specifically a stationary thermo-mechanically coupled system of equations.
arXiv Detail & Related papers (2023-02-09T21:56:59Z) - Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural
Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z) - Learning to Solve PDE-constrained Inverse Problems with Graph Networks [51.89325993156204]
In many application domains across science and engineering, we are interested in solving inverse problems with constraints defined by a partial differential equation (PDE).
Here we explore GNNs to solve such PDE-constrained inverse problems.
We demonstrate computational speedups of up to 90x using GNNs compared to principled solvers.
arXiv Detail & Related papers (2022-06-01T18:48:01Z) - Discovering Governing Equations by Machine Learning implemented with
Invariance [9.014669470289965]
This paper proposes GSNN (Galilean symbolic neural network) and LSNN (Lorentz symbolic neural network) as learning methods for discovering governing equations.
The mandatory embedding of physical constraints is fundamentally different from PINNs, which impose constraints through the form of the loss function.
The results show that the method presented in this study has better accuracy, parsimony, and interpretability.
arXiv Detail & Related papers (2022-03-29T14:01:03Z) - Simultaneous boundary shape estimation and velocity field de-noising in
Magnetic Resonance Velocimetry using Physics-informed Neural Networks [70.7321040534471]
Magnetic resonance velocimetry (MRV) is a non-invasive technique widely used in medicine and engineering to measure the velocity field of a fluid.
Previous studies have required the shape of the boundary (for example, a blood vessel) to be known a priori.
We present a physics-informed neural network that instead uses the noisy MRV data alone to infer the most likely boundary shape and de-noised velocity field.
arXiv Detail & Related papers (2021-07-16T12:56:09Z) - Unsupervised Learning of Solutions to Differential Equations with
Generative Adversarial Networks [1.1470070927586016]
We develop a novel method for solving differential equations with unsupervised neural networks.
We show that our method, which we call Differential Equation GAN (DEQGAN), can obtain mean squared errors that are multiple orders of magnitude lower.
arXiv Detail & Related papers (2020-07-21T23:36:36Z) - Solving inverse-PDE problems with physics-aware neural networks [0.0]
We propose a novel framework to find unknown fields in the context of inverse problems for partial differential equations.
We blend the high expressibility of deep neural networks as universal function estimators with the accuracy and reliability of existing numerical algorithms.
arXiv Detail & Related papers (2020-01-10T18:46:50Z)
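As referenced in the soft-constrained PINN entry above, the sketch below shows one common reading of "soft constraints" for a simple oscillator ODE u'' + u = 0 with u(0) = 1, u'(0) = 0: the initial conditions enter the loss as penalty terms instead of being enforced exactly. The network size, weights, and training loop are illustrative assumptions, not taken from that paper or from DeepXDE.

```python
# Hedged sketch of a soft-constrained PINN for u'' + u = 0, u(0)=1, u'(0)=0
# (illustrative only; not the DeepXDE implementation referenced above).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                      nn.Linear(32, 32), nn.Tanh(),
                      nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def d(y, t):
    """Derivative of y w.r.t. t via automatic differentiation."""
    return torch.autograd.grad(y, t, torch.ones_like(y), create_graph=True)[0]

t = torch.linspace(0.0, 2 * torch.pi, 200).reshape(-1, 1).requires_grad_(True)
t0 = torch.zeros(1, 1, requires_grad=True)

for step in range(5000):
    optimizer.zero_grad()
    u = model(t)
    residual = d(d(u, t), t) + u                  # ODE residual u'' + u
    u0 = model(t0)
    ic_loss = (u0 - 1.0) ** 2 + d(u0, t0) ** 2    # soft initial conditions
    loss = (residual ** 2).mean() + ic_loss.sum()
    loss.backward()
    optimizer.step()
```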