Dual-Balancing for Physics-Informed Neural Networks
- URL: http://arxiv.org/abs/2505.11117v3
- Date: Tue, 20 May 2025 01:56:21 GMT
- Title: Dual-Balancing for Physics-Informed Neural Networks
- Authors: Chenhong Zhou, Jie Chen, Zaifeng Yang, Ching Eng Png
- Abstract summary: Physics-informed neural networks (PINNs) have emerged as a new learning paradigm for solving partial differential equations (PDEs). PINNs still suffer from poor accuracy and slow convergence due to the intractable multi-objective optimization issue. We propose a novel Dual-Balanced PINN (DB-PINN), which dynamically adjusts loss weights by integrating inter-balancing and intra-balancing.
- Score: 5.8096456298528745
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics-informed neural networks (PINNs) have emerged as a new learning paradigm for solving partial differential equations (PDEs) by enforcing the constraints of physical equations, boundary conditions (BCs), and initial conditions (ICs) into the loss function. Despite their successes, vanilla PINNs still suffer from poor accuracy and slow convergence due to the intractable multi-objective optimization issue. In this paper, we propose a novel Dual-Balanced PINN (DB-PINN), which dynamically adjusts loss weights by integrating inter-balancing and intra-balancing to alleviate two imbalance issues in PINNs. Inter-balancing aims to mitigate the gradient imbalance between the PDE residual loss and the condition-fitting losses by determining an aggregated weight that offsets their gradient distribution discrepancies. Intra-balancing acts on the condition-fitting losses to tackle the imbalance in fitting difficulty across diverse conditions. By evaluating fitting difficulty from the loss records, intra-balancing can allocate the aggregated weight proportionally to each condition loss according to its fitting difficulty level. We further introduce a robust weight update strategy that prevents abrupt spikes and arithmetic overflow in instantaneous weight values caused by large loss variances, enabling smooth weight updating and stable training. Extensive experiments demonstrate that DB-PINN significantly outperforms popular gradient-based weighting methods in both convergence speed and prediction accuracy. Our code and supplementary material are available at https://github.com/chenhong-zhou/DualBalanced-PINNs.
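The abstract describes the weighting scheme only at a high level. Below is a minimal PyTorch sketch of one plausible reading of it, assuming gradient-magnitude statistics drive inter-balancing, loss-record ratios proxy fitting difficulty for intra-balancing, and an exponential moving average with clipping implements the robust update; every name, formula, and hyperparameter here is an illustrative assumption rather than the authors' implementation (the linked repository contains the real code).

```python
import torch

def dual_balanced_weights(loss_pde, cond_losses, params, prev_losses, w_prev,
                          alpha=0.9, eps=1e-8, w_max=1e4):
    # Inter-balancing: one aggregated weight that offsets the gradient-scale
    # gap between the PDE residual loss and the condition-fitting losses.
    g_pde = torch.autograd.grad(loss_pde, params, retain_graph=True,
                                allow_unused=True)
    g_pde_max = max(g.abs().max() for g in g_pde if g is not None)
    cond_means = []
    for lc in cond_losses:
        g = torch.autograd.grad(lc, params, retain_graph=True, allow_unused=True)
        flat = torch.cat([t.abs().flatten() for t in g if t is not None])
        cond_means.append(flat.mean())
    w_agg = g_pde_max / (torch.stack(cond_means).mean() + eps)

    # Intra-balancing: split the aggregated weight across condition losses in
    # proportion to fitting difficulty, estimated here (an assumption) as how
    # slowly each loss decays relative to its previously recorded value.
    difficulty = torch.stack([lc.detach() / (lp + eps)
                              for lc, lp in zip(cond_losses, prev_losses)])
    shares = difficulty / difficulty.sum()
    w_inst = len(cond_losses) * w_agg * shares

    # Robust update: EMA smoothing plus clipping so a large loss variance
    # cannot produce weight spikes or arithmetic overflow.
    w_new = alpha * w_prev + (1.0 - alpha) * w_inst
    return torch.clamp(w_new, max=w_max)
```

A training step would then minimize `loss_pde + (weights * torch.stack(cond_losses)).sum()`, carrying `prev_losses` (e.g., running records of each condition loss) and `w_prev` across iterations.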
Related papers
- Convolution-weighting method for the physics-informed neural network: A Primal-Dual Optimization Perspective [14.65008276932511]
Physics-informed neural networks (PINNs) are extensively employed to solve partial differential equations (PDEs). PINNs are typically optimized using a finite set of points, which poses significant challenges in guaranteeing their convergence and accuracy. We propose a new weighting scheme that adaptively shifts the weights on the loss functions from isolated points to their continuous neighborhood regions.
arXiv Detail & Related papers (2025-06-24T17:13:51Z)
- Accuracy and Robustness of Weight-Balancing Methods for Training PINNs [0.06906005491572399]
We introduce clear definitions of accuracy and robustness in the context of PINNs. We propose a novel training algorithm based on the Primal-Dual (PD) optimization framework. Our approach enhances the robustness of PINNs while maintaining comparable performance to existing weight-balancing methods.
arXiv Detail & Related papers (2025-01-30T18:54:22Z)
- Self-adaptive weights based on balanced residual decay rate for physics-informed neural networks and deep operator networks [1.0562108865927007]
Physics-informed deep learning has emerged as a promising alternative for solving partial differential equations.
For complex problems, training these networks can still be challenging, often resulting in unsatisfactory accuracy and efficiency.
We propose a point-wise adaptive weighting method that balances the residual decay rate across different training points.
arXiv Detail & Related papers (2024-06-28T00:53:48Z)
- Fast Adversarial Training with Smooth Convergence [51.996943482875366]
We analyze the training process of prior fast adversarial training (FAT) work and observe that catastrophic overfitting is accompanied by the appearance of loss convergence outliers.
To obtain a smooth loss convergence process, we propose a novel oscillatory constraint (dubbed ConvergeSmooth) to limit the loss difference between adjacent epochs.
Our proposed methods are attack-agnostic and thus can improve the training stability of various FAT techniques.
arXiv Detail & Related papers (2023-08-24T15:28:52Z)
- Enhancing Convergence Speed with Feature-Enforcing Physics-Informed Neural Networks: Utilizing Boundary Conditions as Prior Knowledge for Faster Convergence [0.0]
This study introduces an accelerated training method for vanilla physics-informed neural networks (PINNs).
It addresses three factors that imbalance the loss function: the initial weight state of the neural network, the ratio of domain to boundary points, and the loss weighting factor.
It is found that incorporating weights generated in the first training phase into the structure of a neural network neutralizes the effects of imbalance factors.
arXiv Detail & Related papers (2023-08-17T09:10:07Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Physics-Informed Neural Network Method for Parabolic Differential Equations with Sharply Perturbed Initial Conditions [68.8204255655161]
We develop a physics-informed neural network (PINN) model for parabolic problems with a sharply perturbed initial condition.
Localized large gradients in the advection-dispersion equation (ADE) solution make the Latin hypercube sampling of the equation's residual (common in PINNs) highly inefficient.
We propose criteria for weights in the loss function that produce a more accurate PINN solution than those obtained with the weights selected via other methods.
arXiv Detail & Related papers (2022-08-18T05:00:24Z)
- Enhanced Physics-Informed Neural Networks with Augmented Lagrangian Relaxation Method (AL-PINNs) [1.7403133838762446]
Physics-Informed Neural Networks (PINNs) are powerful approximators of solutions to nonlinear partial differential equations (PDEs).
We propose an Augmented Lagrangian relaxation method for PINNs (AL-PINNs).
We demonstrate through various numerical experiments that AL-PINNs yield a much smaller relative error than state-of-the-art adaptive loss-balancing algorithms.
arXiv Detail & Related papers (2022-04-29T08:33:11Z)
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by a Gaussian smoothed model and show that, via Stein's identity, the second-order derivatives can be calculated efficiently without back-propagation (a minimal sketch of this estimator appears after this list).
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
- Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
The Physics-Informed Neural Operator (PINO) is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the solution operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
- Multi-Objective Loss Balancing for Physics-Informed Deep Learning [0.0]
We examine the role of correctly weighting the combination of multiple competing loss functions for training PINNs effectively.
We propose a novel self-adaptive loss-balancing scheme for PINNs called ReLoBRaLo (Relative Loss Balancing with Random Lookback); a sketch of this update rule appears after this list.
Our simulation studies show that ReLoBRaLo training is much faster and achieves higher accuracy than training PINNs with other balancing methods.
arXiv Detail & Related papers (2021-10-19T09:00:12Z)
- Revisiting Initialization of Neural Networks [72.24615341588846]
We propose a rigorous estimation of the global curvature of weights across layers by approximating and controlling the norm of their Hessian matrix.
Our experiments on Word2Vec and the MNIST/CIFAR image classification tasks confirm that tracking the Hessian norm is a useful diagnostic tool.
arXiv Detail & Related papers (2020-04-20T18:12:56Z)
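Two entries above describe mechanisms concretely enough to illustrate. First, the backprop-free derivative trick from "Learning Physics-Informed Neural Networks without Stacked Back-propagation": for the Gaussian-smoothed surrogate f_sigma(x) = E[f(x + eps)] with eps ~ N(0, sigma^2 I), Stein's identity gives Hess f_sigma(x) = E[f(x + eps) (eps eps^T - sigma^2 I)] / sigma^4, so a Monte-Carlo average over sampled perturbations yields second-order derivatives from forward evaluations only. The sketch below is a minimal illustration under those assumptions; the function names and sampling choices are ours, not the paper's.

```python
import torch

def smoothed_hessian(f, x, sigma=0.1, n_samples=4096):
    """Stein-identity Monte-Carlo estimate of the Hessian of the Gaussian-
    smoothed surrogate f_sigma(x) = E[f(x + eps)], eps ~ N(0, sigma^2 I):
        Hess f_sigma(x) = E[f(x + eps) * (eps eps^T - sigma^2 I)] / sigma^4
    Assumes `f` maps a batch (n, d) to values (n,) and `x` has shape (d,)."""
    d = x.shape[-1]
    eps = sigma * torch.randn(n_samples, d)
    fvals = f(x.unsqueeze(0) + eps)                  # (n_samples,)
    outer = eps.unsqueeze(2) * eps.unsqueeze(1)      # (n_samples, d, d)
    stein = outer - (sigma ** 2) * torch.eye(d)
    return (fvals.view(-1, 1, 1) * stein).mean(dim=0) / sigma ** 4
```

Subtracting f(x) from `fvals` before averaging is a standard control-variate refinement that reduces the estimator's variance; it is omitted here for brevity.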
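Second, the ReLoBRaLo rule from "Multi-Objective Loss Balancing for Physics-Informed Deep Learning": each loss term is weighted by a softmax over how little it has decayed relative to a recorded reference, with a random lookback that occasionally resets the reference to the initial losses, all smoothed by an exponential moving average. The following is a hedged sketch of one reading of that scheme; names and default hyperparameters are illustrative, and the paper should be consulted for the exact formulation.

```python
import torch

def relobralo_weights(losses_now, losses_prev, losses_init, w_prev,
                      tau=1.0, alpha=0.999, rho_prob=0.999):
    """One reading of ReLoBRaLo: inputs are detached scalar loss tensors;
    `w_prev` is the weight vector from the previous step."""
    def balance(now, ref):
        # Softmax over relative decay: terms that decayed least get more weight.
        ratios = torch.stack([n / (tau * r + 1e-12) for n, r in zip(now, ref)])
        return len(now) * torch.softmax(ratios, dim=0)

    w_step = balance(losses_now, losses_prev)      # lookback to previous step
    w_init = balance(losses_now, losses_init)      # lookback to start of training
    rho = torch.bernoulli(torch.tensor(rho_prob))  # random lookback switch
    return alpha * (rho * w_prev + (1.0 - rho) * w_init) + (1.0 - alpha) * w_step
```

The weighted training objective would then be the sum of w_i * L_i over all loss terms, with the weights recomputed each step from the running loss records.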
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.