Multi-Objective Loss Balancing for Physics-Informed Deep Learning
- URL: http://arxiv.org/abs/2110.09813v1
- Date: Tue, 19 Oct 2021 09:00:12 GMT
- Title: Multi-Objective Loss Balancing for Physics-Informed Deep Learning
- Authors: Rafael Bischof, Michael Kraus
- Abstract summary: We observe the role of correctly weighting the combination of multiple competitive loss functions for training PINNs effectively.
We propose a novel self-adaptive loss balancing of PINNs called ReLoBRaLo.
Our simulation studies show that ReLoBRaLo training is much faster and achieves higher accuracy than training PINNs with other balancing methods.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Physics Informed Neural Networks (PINN) are algorithms from deep learning
leveraging physical laws by including partial differential equations (PDE)
together with a respective set of boundary and initial conditions (BC / IC) as
penalty terms into their loss function. As the PDE, BC and IC loss function
parts can differ significantly in magnitude, due to their underlying physical
units or the stochasticity of initialisation, training of PINNs may suffer from
severe convergence and efficiency problems, causing PINNs to fall short of
desirable approximation quality. In this work, we observe the significant role
of correctly weighting the combination of multiple competitive loss functions
for training PINNs effectively. To that end, we implement and evaluate
different methods aiming at balancing the contributions of multiple terms of
the PINN loss function and their gradients. After reviewing three existing
loss scaling approaches (Learning Rate Annealing, GradNorm as well as
SoftAdapt), we propose a novel self-adaptive loss balancing of PINNs called
ReLoBRaLo (Relative Loss Balancing with Random Lookback). Finally, the
performance of ReLoBRaLo is compared and verified against these approaches by
solving both forward as well as inverse problems on three benchmark PDEs for
PINNs: Burgers' equation, Kirchhoff's plate bending equation and Helmholtz's
equation. Our simulation studies show that training with ReLoBRaLo is
considerably faster and achieves higher accuracy than training PINNs with other
balancing methods, making it highly effective and improving the sustainability
of PINN algorithms. The adaptability of ReLoBRaLo demonstrates robustness
across different PDE problem settings. Beyond the studied PINN examples, the
proposed method can also be applied to the wider class of penalised
optimisation problems, including PDE-constrained and Sobolev training.
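The abstract's core mechanism can be sketched in code. The following is a minimal illustration in the spirit of ReLoBRaLo, assuming NumPy; the function name, hyperparameter names and default values (`tau`, `alpha`, `rho_prob`) are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def relobralo_weights(losses_t, losses_prev, losses_init, weights_prev,
                      tau=1.0, alpha=0.999, rho_prob=0.99, rng=None):
    """One update of self-adaptive loss weights in the spirit of ReLoBRaLo.

    Each loss term is weighted via a softmax over the ratios of its current
    value to a reference value (the previous step, or -- via a random
    Bernoulli "lookback" -- the initial step), blended with the previous
    weights through an exponential moving average.
    """
    rng = rng or np.random.default_rng(0)
    m = len(losses_t)

    def balance(reference):
        # Softmax over relative loss progress, scaled to sum to m so the
        # total loss keeps roughly the magnitude of an unweighted sum.
        z = np.asarray(losses_t) / (tau * np.asarray(reference) + 1e-12)
        e = np.exp(z - z.max())
        return m * e / e.sum()

    rho = float(rng.random() < rho_prob)  # random lookback decision
    historic = rho * np.asarray(weights_prev) + (1 - rho) * balance(losses_init)
    return alpha * historic + (1 - alpha) * balance(losses_prev)
```

The weighted training loss is then the sum of each weight times its loss term; a term whose loss shrinks faster than the others receives a smaller weight, rebalancing the competing PDE, BC and IC objectives.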
Related papers
- Dual Cone Gradient Descent for Training Physics-Informed Neural Networks
Physics-informed neural networks (PINNs) have emerged as a prominent approach for solving partial differential equations.
We propose a novel framework, Dual Cone Gradient Descent (DCGD), which adjusts the direction of the updated gradient to ensure it falls within a cone region.
arXiv Detail & Related papers (2024-09-27T03:27:46Z)
- MultiAdam: Parameter-wise Scale-invariant Optimizer for Multiscale Training of Physics-informed Neural Networks
Physics-informed Neural Networks (PINNs) have recently achieved remarkable progress in solving Partial Differential Equations (PDEs).
There are several critical challenges in the training of PINNs, including the lack of theoretical frameworks and the imbalance between PDE loss and boundary loss.
We present an analysis of second-order non-homogeneous PDEs, which are classified into three categories and are applicable to various common problems.
arXiv Detail & Related papers (2023-06-05T12:12:59Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- Revisiting PINNs: Generative Adversarial Physics-informed Neural Networks and Point-weighting Method
Physics-informed neural networks (PINNs) provide a deep learning framework for numerically solving partial differential equations (PDEs).
We propose the generative adversarial neural network (GA-PINN), which integrates the generative adversarial (GA) mechanism with the structure of PINNs.
Inspired by the weighting strategy of the AdaBoost method, we then introduce a point-weighting (PW) method to improve the training efficiency of PINNs.
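The AdaBoost-inspired point-weighting idea can be sketched as follows; the exponential update rule and the `eta` step size below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def update_point_weights(weights, point_losses, eta=0.5):
    """AdaBoost-inspired point weighting (sketch): collocation points with
    a larger pointwise loss receive exponentially larger weight, and the
    weights are renormalised to form a distribution over points."""
    rel = point_losses / (point_losses.mean() + 1e-12)  # relative difficulty
    w = weights * np.exp(eta * rel)
    return w / w.sum()
```

Training then emphasises the hardest collocation points, much as AdaBoost emphasises misclassified samples.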
arXiv Detail & Related papers (2022-05-18T06:50:44Z)
- Enhanced Physics-Informed Neural Networks with Augmented Lagrangian Relaxation Method (AL-PINNs)
Physics-Informed Neural Networks (PINNs) are powerful approximators of solutions to nonlinear partial differential equations (PDEs).
We propose an Augmented Lagrangian relaxation method for PINNs (AL-PINNs).
We demonstrate through various numerical experiments that AL-PINNs yield a much smaller relative error compared with that of state-of-the-art adaptive loss-balancing algorithms.
arXiv Detail & Related papers (2022-04-29T08:33:11Z)
- Learning Physics-Informed Neural Networks without Stacked Back-propagation
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
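The Stein's-identity trick described above can be illustrated for a scalar function; `sigma` and the Monte-Carlo sample count below are illustrative choices, and the estimate targets the Gaussian-smoothed surrogate of `f` rather than `f` itself.

```python
import numpy as np

def smoothed_second_derivative(f, x, sigma=0.5, n_samples=200_000, rng=None):
    """Estimate the second derivative of the Gaussian-smoothed surrogate
    g(x) = E[f(x + sigma*eps)], eps ~ N(0, 1), via Stein's identity:

        g''(x) = E[f(x + sigma*eps) * (eps**2 - 1)] / sigma**2

    Only forward evaluations of f are needed -- no back-propagation."""
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal(n_samples)
    return float(np.mean(f(x + sigma * eps) * (eps**2 - 1)) / sigma**2)
```

For `f(x) = x**3` the smoothed surrogate is `g(x) = x**3 + 3*sigma**2*x`, so `g''(1) = 6` exactly, and the Monte-Carlo estimate converges to it.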
arXiv Detail & Related papers (2022-02-18T18:07:54Z) - Physics-Informed Neural Operator for Learning Partial Differential
Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z) - CAN-PINN: A Fast Physics-Informed Neural Network Based on
Coupled-Automatic-Numerical Differentiation Method [17.04611875126544]
Novel physics-informed neural network (PINN) methods for coupling neighboring support points and automatic differentiation (AD) through Taylor series expansion are proposed.
The proposed coupled-automatic-numerical differentiation framework, labelled can-PINN, unifies the advantages of AD and numerical differentiation (ND), providing more robust and efficient training than AD-based PINNs.
arXiv Detail & Related papers (2021-10-29T14:52:46Z) - Efficient training of physics-informed neural networks via importance
sampling [2.9005223064604078]
Physics-Informed Neural Networks (PINNs) are a class of deep neural networks that are trained to solve systems governed by partial differential equations (PDEs).
We show that an importance sampling approach will improve the convergence behavior of PINNs training.
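A minimal sketch of residual-based importance sampling of collocation points follows; the dense candidate pool, precomputed residuals, and returned importance weights are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

def sample_collocation(points, residuals, batch_size, rng=None):
    """Draw a training batch of collocation points with probability
    proportional to the current PDE residual magnitude, concentrating
    training effort where the PINN currently violates the PDE most.
    Also returns 1/(N*p) importance weights for an unbiased loss."""
    rng = rng or np.random.default_rng(0)
    p = np.abs(residuals) + 1e-12   # keep every point reachable
    p = p / p.sum()
    idx = rng.choice(len(points), size=batch_size, replace=True, p=p)
    return points[idx], 1.0 / (len(points) * p[idx])
```

Averaging the pointwise PDE loss times the returned weights keeps the minibatch loss an unbiased estimate of the full-set loss despite the non-uniform sampling.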
arXiv Detail & Related papers (2021-04-26T02:45:10Z)
- dNNsolve: an efficient NN-based PDE solver
We introduce dNNsolve, which makes use of dual neural networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.