MultiAdam: Parameter-wise Scale-invariant Optimizer for Multiscale
Training of Physics-informed Neural Networks
- URL: http://arxiv.org/abs/2306.02816v1
- Date: Mon, 5 Jun 2023 12:12:59 GMT
- Authors: Jiachen Yao, Chang Su, Zhongkai Hao, Songming Liu, Hang Su, Jun Zhu
- Abstract summary: Physics-informed Neural Networks (PINNs) have recently achieved remarkable progress in solving Partial Differential Equations (PDEs).
There are several critical challenges in the training of PINNs, including the lack of theoretical frameworks and the imbalance between PDE loss and boundary loss.
We present an analysis of second-order non-homogeneous PDEs, which are classified into three categories and are applicable to various common problems.
- Score: 29.598874158082804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics-informed Neural Networks (PINNs) have recently achieved remarkable
progress in solving Partial Differential Equations (PDEs) in various fields by
minimizing a weighted sum of PDE loss and boundary loss. However, there are
several critical challenges in the training of PINNs, including the lack of
theoretical frameworks and the imbalance between PDE loss and boundary loss. In
this paper, we present an analysis of second-order non-homogeneous PDEs, which
are classified into three categories and are applicable to various common problems.
We also characterize the connections between the training loss and actual
error, guaranteeing convergence under mild conditions. The theoretical analysis
inspires us to further propose MultiAdam, a scale-invariant optimizer that
leverages gradient momentum to balance the loss terms parameter-wise.
Extensive experimental results on multiple problems from different physical
domains demonstrate that our MultiAdam solver can improve the predictive
accuracy by 1-2 orders of magnitude compared with strong baselines.
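The abstract stops short of the update rule, so the following is only a rough illustration of the mechanism it describes: each loss term keeps its own Adam moments, so each term's momentum-normalized direction has roughly unit scale per parameter, and rescaling any single loss leaves its contribution unchanged. The equal average across terms and all constants are assumptions of this sketch, not the paper's algorithm.

```python
import numpy as np

def multiadam_step(theta, grads, state, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
    """One update combining several loss terms' gradients.

    `grads` holds one gradient array per loss term. Each term gets its
    own Adam moments, so its normalized direction has roughly unit scale
    per parameter -- the parameter-wise scale invariance the abstract
    describes. Averaging the terms equally is an assumption of this sketch.
    """
    b1, b2 = betas
    state["t"] += 1
    t = state["t"]
    update = np.zeros_like(theta)
    for i, g in enumerate(grads):
        m, v = state["m"][i], state["v"][i]
        m[:] = b1 * m + (1 - b1) * g        # per-term first moment (momentum)
        v[:] = b2 * v + (1 - b2) * g * g    # per-term second moment (scale)
        m_hat = m / (1 - b1 ** t)           # standard Adam bias correction
        v_hat = v / (1 - b2 ** t)
        update += m_hat / (np.sqrt(v_hat) + eps)
    return theta - lr * update / len(grads)

# toy usage: two loss terms whose gradients differ in scale by 10^6,
# yet neither term dominates the combined update
theta = np.array([1.0, -2.0])
state = {"t": 0,
         "m": [np.zeros_like(theta) for _ in range(2)],
         "v": [np.zeros_like(theta) for _ in range(2)]}
for _ in range(1000):
    g_pde = 1e4 * theta      # gradient of a large-scale "PDE loss"
    g_bc = 1e-2 * theta      # gradient of a small-scale "boundary loss"
    theta = multiadam_step(theta, [g_pde, g_bc], state)
```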
Related papers
- Dual Cone Gradient Descent for Training Physics-Informed Neural Networks [0.0]
Physics-informed neural networks (PINNs) have emerged as a prominent approach for solving partial differential equations.
We propose a novel framework, Dual Cone Gradient Descent (DCGD), which adjusts the direction of the updated gradient to ensure it falls within a dual cone region.
arXiv Detail & Related papers (2024-09-27T03:27:46Z)
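The exact DCGD update is defined in the paper; the sketch below only illustrates the geometric idea. It combines the PDE and boundary gradients so that the result lies in their dual cone, i.e. has non-negative inner product with both, using a PCGrad-style conflict projection as a stand-in for the authors' rule.

```python
import numpy as np

def dual_cone_update(g_pde, g_bc):
    """Combine two loss gradients so the result lies in their dual cone
    (non-negative inner product with both, so neither loss increases to
    first order). When the gradients conflict, each is projected onto the
    other's normal plane before summing -- a PCGrad-style rule standing in
    for the paper's DCGD update.
    """
    dot = g_pde @ g_bc
    if dot >= 0:
        return g_pde + g_bc  # no conflict: the plain sum is already in the cone
    g1 = g_pde - dot / (g_bc @ g_bc) * g_bc
    g2 = g_bc - dot / (g_pde @ g_pde) * g_pde
    return g1 + g2

# conflicting toy gradients: the combined direction still decreases both losses
g_pde = np.array([1.0, 0.5])
g_bc = np.array([-0.8, 0.6])
g = dual_cone_update(g_pde, g_bc)
assert g @ g_pde >= 0 and g @ g_bc >= 0
```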
- General-Kindred Physics-Informed Neural Network to the Solutions of Singularly Perturbed Differential Equations [11.121415128908566]
We propose the General-Kindred Physics-Informed Neural Network (GKPINN) for solving Singular Perturbation Differential Equations (SPDEs).
This approach utilizes prior knowledge of the boundary layer from the equation and establishes a novel network to assist PINNs in approximating the boundary layer.
The research findings underscore the exceptional performance of GKPINN, which reduces the $L_2$ error by two to four orders of magnitude compared to the established PINN methodology.
arXiv Detail & Related papers (2024-08-27T02:03:22Z)
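The entry above says GKPINN builds prior knowledge of the boundary layer into the network. One common way to encode such a prior, used here purely as a hypothetical illustration, is to multiply a separate sub-network by the asymptotic factor exp(-x/eps) for a layer of width eps at x = 0; the two-network split and the exponential ansatz are assumptions of this sketch, not necessarily GKPINN's architecture.

```python
import torch
import torch.nn as nn

class BoundaryLayerPINN(nn.Module):
    """Ansatz u(x) = smooth(x) + exp(-x / eps) * layer(x) for a singularly
    perturbed problem with a boundary layer of width eps at x = 0.
    The exponential factor encodes the asymptotic shape of the layer;
    the architecture GKPINN actually uses may differ from this sketch.
    """
    def __init__(self, eps, width=32):
        super().__init__()
        self.eps = eps
        def make():
            return nn.Sequential(
                nn.Linear(1, width), nn.Tanh(),
                nn.Linear(width, width), nn.Tanh(),
                nn.Linear(width, 1))
        self.smooth = make()  # slowly varying outer solution
        self.layer = make()   # amplitude of the boundary-layer correction

    def forward(self, x):
        return self.smooth(x) + torch.exp(-x / self.eps) * self.layer(x)

model = BoundaryLayerPINN(eps=1e-3)
u = model(torch.linspace(0.0, 1.0, 101).unsqueeze(-1))
```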
- Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs [85.40198664108624]
We propose the Codomain Attention Neural Operator (CoDA-NO) to solve multiphysics problems with PDEs.
CoDA-NO tokenizes functions along the codomain or channel space, enabling self-supervised learning or pretraining on multiple PDE systems.
We find CoDA-NO to outperform existing methods by over 36% on complex downstream tasks with limited data.
arXiv Detail & Related papers (2024-03-19T08:56:20Z)
- Lie Point Symmetry and Physics Informed Networks [59.56218517113066]
We propose a loss function that informs the network about Lie point symmetries in the same way that PINN models try to enforce the underlying PDE through a loss function.
Our symmetry loss ensures that the infinitesimal generators of the Lie group conserve the PDE solutions.
Empirical evaluations indicate that the inductive bias introduced by the Lie point symmetries of the PDEs greatly boosts the sample efficiency of PINNs.
arXiv Detail & Related papers (2023-11-07T19:07:16Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
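An implicit gradient step evaluates the gradient at the new iterate, solving theta_next = theta - lr * grad(theta_next), which is equivalent to a proximal step and stays stable at step sizes where the explicit update blows up. The sketch below solves the implicit equation with an inner gradient loop on the proximal objective; the inner solver and all constants are choices of this sketch, not details from the paper.

```python
import numpy as np

def implicit_sgd_step(theta, grad_fn, lr=0.5, inner_lr=0.05, inner_iters=200):
    """One implicit step theta_next = theta - lr * grad_fn(theta_next),
    computed as the proximal point
        theta_next = argmin_x  L(x) + ||x - theta||^2 / (2 * lr)
    via plain inner gradient descent (the inner solver and all constants
    are choices of this sketch, not details from the paper).
    """
    x = theta.copy()
    for _ in range(inner_iters):
        x = x - inner_lr * (grad_fn(x) + (x - theta) / lr)
    return x

# stiff toy problem: L(x) = 5 x^2 has curvature 10, so explicit GD with
# lr = 0.5 diverges (it needs lr < 0.2), while the implicit step contracts
grad = lambda x: 10.0 * x
theta = np.array([1.0])
for _ in range(10):
    theta = implicit_sgd_step(theta, grad)
print(theta)  # approaches 0 (each implicit step multiplies theta by 1/6)
```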
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- Mitigating Learning Complexity in Physics and Equality Constrained Artificial Neural Networks [0.9137554315375919]
Physics-informed neural networks (PINNs) have been proposed to learn the solutions of partial differential equations (PDEs).
In PINNs, the residual form of the PDE of interest and its boundary conditions are lumped into a composite objective function as soft penalties.
Here, we show that this specific way of formulating the objective function is the source of severe limitations in the PINN approach when applied to different kinds of PDEs.
arXiv Detail & Related papers (2022-06-19T04:12:01Z)
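Concretely, the soft-penalty formulation the entry above criticizes lumps the mean squared PDE residual and the mean squared boundary mismatch into one weighted objective. A minimal sketch for the 1-D Poisson problem u''(x) = f(x) with u(0) = u(1) = 0 (the problem, network size, and weights are illustrative only):

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def composite_loss(w_pde=1.0, w_bc=1.0):
    """Soft-penalty PINN objective for u''(x) = f(x), u(0) = u(1) = 0:
    both constraint types are lumped into a single weighted sum."""
    x = torch.rand(128, 1, requires_grad=True)            # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = -torch.pi ** 2 * torch.sin(torch.pi * x)          # exact solution: sin(pi x)
    pde = ((d2u - f) ** 2).mean()                         # PDE residual penalty
    bc = (net(torch.tensor([[0.0], [1.0]])) ** 2).mean()  # boundary penalty
    return w_pde * pde + w_bc * bc

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(1000):
    opt.zero_grad()
    composite_loss().backward()
    opt.step()
```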
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
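The Stein's Identity trick above can be stated concretely: for the Gaussian-smoothed function f_sigma(x) = E[f(x + sigma * eps)] with eps ~ N(0, 1), the identity f_sigma''(x) = E[(eps^2 - 1) * f(x + sigma * eps)] / sigma^2 yields second derivatives from function values alone. Below is a minimal 1-D Monte Carlo sketch, not the paper's training pipeline; subtracting f(x) is a standard variance-reduction control variate.

```python
import numpy as np

def smoothed_second_derivative(f, x, sigma=0.1, n_samples=200_000, seed=0):
    """Estimate f_sigma''(x), where f_sigma(x) = E[f(x + sigma*eps)], via
    Stein's identity: f_sigma''(x) = E[(eps^2 - 1) f(x + sigma*eps)] / sigma^2.
    No differentiation of f (hence no back-propagation) is needed."""
    eps = np.random.default_rng(seed).standard_normal(n_samples)
    return np.mean((eps ** 2 - 1.0) * (f(x + sigma * eps) - f(x))) / sigma ** 2

# check against the closed form: smoothing sin gives exp(-sigma^2/2) * sin(x),
# whose second derivative is -exp(-sigma^2/2) * sin(x)
x, sigma = 1.0, 0.1
print(smoothed_second_derivative(np.sin, x, sigma))   # Monte Carlo estimate
print(-np.exp(-sigma ** 2 / 2) * np.sin(x))           # exact value, ~ -0.837
```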
- Parallel Physics-Informed Neural Networks with Bidirectional Balance [0.0]
Physics-informed neural networks (PINNs) have been widely used to solve various partial differential equations (PDEs) in engineering.
Here we take the heat transfer problem in multilayer fabrics as a typical example.
We propose parallel physics-informed neural networks with bidirectional balance.
Our approach makes an otherwise unsolvable problem for PINNs solvable, and achieves excellent solving accuracy.
arXiv Detail & Related papers (2021-11-10T11:13:33Z)
- Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
- Multi-Objective Loss Balancing for Physics-Informed Deep Learning [0.0]
We examine the role of correctly weighting the combination of multiple competing loss functions for training PINNs effectively.
We propose a novel self-adaptive loss balancing of PINNs called ReLoBRaLo.
Our simulation studies show that ReLoBRaLo training is much faster and achieves higher accuracy than training PINNs with other balancing methods.
arXiv Detail & Related papers (2021-10-19T09:00:12Z)
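As we read the ReLoBRaLo paper, its rule softmaxes each loss's progress ratio at a temperature, then blends that balancing weight with an exponential moving average and a random "lookback" toward the initial losses. The sketch below follows that reading; the exact form and all constants are assumptions of this illustration, not an authoritative implementation.

```python
import numpy as np

def relobralo_weights(lam_prev, losses, losses_prev, losses_init,
                      tau=1.0, alpha=0.999, rho_p=0.999, rng=None):
    """One ReLoBRaLo-style update of the weights for n loss terms: terms
    that improved the least since the reference step get the largest
    weights, smoothed by an EMA with a random lookback to step 0."""
    rng = rng or np.random.default_rng()
    n = len(losses)

    def balance(ref):
        z = np.asarray(losses) / (tau * np.asarray(ref) + 1e-12)
        e = np.exp(z - z.max())            # numerically stable softmax
        return n * e / e.sum()

    rho = float(rng.random() < rho_p)      # Bernoulli lookback draw
    hist = rho * lam_prev + (1.0 - rho) * balance(losses_init)
    return alpha * hist + (1.0 - alpha) * balance(losses_prev)

# usage inside a training loop, e.g. with a PDE term and a boundary term
lam = np.ones(2)
losses_init = losses_prev = np.array([1.0, 1.0])
losses = np.array([0.8, 0.95])             # current loss values
lam = relobralo_weights(lam, losses, losses_prev, losses_init)
total_loss = (lam * losses).sum()          # weighted objective to minimize
```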
This list is automatically generated from the titles and abstracts of the papers in this site.