B-PL-PINN: Stabilizing PINN Training with Bayesian Pseudo Labeling
- URL: http://arxiv.org/abs/2507.01714v1
- Date: Wed, 02 Jul 2025 13:44:31 GMT
- Title: B-PL-PINN: Stabilizing PINN Training with Bayesian Pseudo Labeling
- Authors: Kevin Innerebner, Franz M. Rohrhofer, Bernhard C. Geiger
- Abstract summary: Training physics-informed neural networks (PINNs) for forward problems often suffers from severe convergence issues. We suggest replacing the ensemble by a Bayesian PINN, and consensus by an evaluation of the PINN's posterior variance. Our experiments show that this mathematically principled approach outperforms the ensemble on a set of benchmark problems.
- Score: 9.503773054285558
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training physics-informed neural networks (PINNs) for forward problems often suffers from severe convergence issues, hindering the propagation of information from regions where the desired solution is well-defined. Haitsiukevich and Ilin (2023) proposed an ensemble approach that extends the active training domain of each PINN based on i) ensemble consensus and ii) vicinity to (pseudo-)labeled points, thus ensuring that the information from the initial condition successfully propagates to the interior of the computational domain. In this work, we suggest replacing the ensemble by a Bayesian PINN, and consensus by an evaluation of the PINN's posterior variance. Our experiments show that this mathematically principled approach outperforms the ensemble on a set of benchmark problems and is competitive with PINN ensembles trained with combinations of Adam and LBFGS.
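As a rough illustration of the proposed criterion, the sketch below selects new pseudo-labeled collocation points where the Bayesian PINN's posterior variance (approximated here by Monte-Carlo samples) is small and an already-labeled point is nearby. All names, shapes, and thresholds are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def select_pseudo_labels(mc_preds, cand_pts, labeled_pts, var_tol, radius):
    """Pick new pseudo-labeled points: low posterior variance AND close
    to an already (pseudo-)labeled point (both thresholds illustrative).
    mc_preds: (S, N) Monte-Carlo posterior samples of the PINN solution
    at the N candidate points; cand_pts: (N, d); labeled_pts: (M, d)."""
    mean = mc_preds.mean(axis=0)   # posterior mean -> pseudo-label value
    var = mc_preds.var(axis=0)     # posterior variance replaces consensus
    # distance from each candidate to its nearest labeled point
    d2 = ((cand_pts[:, None, :] - labeled_pts[None, :, :]) ** 2).sum(-1)
    keep = (np.sqrt(d2.min(axis=1)) <= radius) & (var <= var_tol)
    return cand_pts[keep], mean[keep]
```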
Related papers
- Plane-Wave Decomposition and Randomised Training; a Novel Path to Generalised PINNs for SHM [0.0]
We introduce a formulation of Physics-Informed Neural Networks (PINNs) based on learning the form of the Fourier decomposition, together with a training methodology that uses a spread of randomly chosen boundary conditions.
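A minimal sketch of what a learned truncated Fourier (plane-wave) ansatz could look like; the class name, mode count, and domain length `L` are assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn as nn

class PlaneWaveNet(nn.Module):
    """Learned truncated Fourier series on [0, L] (illustrative ansatz)."""
    def __init__(self, n_modes=8, L=1.0):
        super().__init__()
        self.register_buffer("k", torch.arange(1, n_modes + 1) * torch.pi / L)
        self.a = nn.Parameter(torch.zeros(n_modes))  # sine coefficients
        self.b = nn.Parameter(torch.zeros(n_modes))  # cosine coefficients

    def forward(self, x):                 # x: (N, 1) spatial coordinates
        phase = x * self.k                # (N, n_modes)
        return torch.sin(phase) @ self.a + torch.cos(phase) @ self.b
```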
arXiv Detail & Related papers (2025-03-31T21:37:18Z)
- SetPINNs: Set-based Physics-informed Neural Networks [31.193471532024407]
We introduce SetPINNs, a framework that captures local dependencies by partitioning the domain into sets, while simultaneously enforcing physical laws within each set. Experiments on synthetic and real-world tasks show improved accuracy, efficiency, and robustness.
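The set-based idea can be sketched as follows: collocation points grouped into a set attend to each other, so each prediction sees its local neighbourhood. This is an illustrative stand-in, not the SetPINNs architecture itself.

```python
import torch
import torch.nn as nn

class TinySetPINN(nn.Module):
    """Illustrative set-based block: points within a set are mixed by
    self-attention before per-point solution values are decoded."""
    def __init__(self, dim=32, heads=4):
        super().__init__()
        self.embed = nn.Linear(1, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 1)

    def forward(self, sets):          # sets: (num_sets, set_size, 1)
        h = self.embed(sets)
        h, _ = self.attn(h, h, h)     # mix points within each set
        return self.head(h)           # per-point solution values
```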
arXiv Detail & Related papers (2024-09-30T11:41:58Z)
- Rethinking Clustered Federated Learning in NOMA Enhanced Wireless Networks [60.09912912343705]
This study explores the benefits of integrating the novel clustered federated learning (CFL) approach with non-independent and identically distributed (non-IID) datasets.
A detailed theoretical analysis of the generalization gap that measures the degree of non-IID in the data distribution is presented.
Solutions to address the challenges posed by non-IID conditions are proposed, together with an analysis of their properties.
arXiv Detail & Related papers (2024-03-05T17:49:09Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Improving Neural Additive Models with Bayesian Principles [54.29602161803093]
Neural additive models (NAMs) enhance the transparency of deep neural networks by handling calibrated input features in separate additive sub-networks.
We develop Laplace-approximated NAMs (LA-NAMs) which show improved empirical performance on datasets and challenging real-world medical tasks.
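For reference, a plain neural additive model looks like the sketch below; the Laplace approximation that gives LA-NAMs their Bayesian treatment would be applied on top of each sub-network and is not shown.

```python
import torch
import torch.nn as nn

class TinyNAM(nn.Module):
    """Minimal neural additive model: one small MLP per input feature;
    the prediction is the sum of the per-feature contributions."""
    def __init__(self, n_features, hidden=16):
        super().__init__()
        self.nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features))

    def forward(self, x):                              # x: (B, n_features)
        parts = [net(x[:, i:i + 1]) for i, net in enumerate(self.nets)]
        return torch.stack(parts, dim=-1).sum(-1)      # (B, 1)
```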
arXiv Detail & Related papers (2023-05-26T13:19:15Z)
- Federated Compositional Deep AUC Maximization [58.25078060952361]
We develop a novel federated learning method for imbalanced data by directly optimizing the area under curve (AUC) score.
To the best of our knowledge, this is the first work to achieve such favorable theoretical results.
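The underlying objective can be illustrated with a basic pairwise AUC surrogate; the paper's compositional, federated optimization machinery is beyond this sketch.

```python
import torch

def pairwise_auc_loss(pos_scores, neg_scores, margin=1.0):
    """Squared-hinge surrogate for the AUC (illustrative): penalize every
    negative example scored within `margin` of a positive example."""
    diff = pos_scores[:, None] - neg_scores[None, :]  # all pos/neg pairs
    return torch.clamp(margin - diff, min=0).pow(2).mean()
```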
arXiv Detail & Related papers (2023-04-20T05:49:41Z)
- Error convergence and engineering-guided hyperparameter search of PINNs: towards optimized I-FENN performance [0.0]
We enhance the rigour and performance of I-FENN by focusing on two crucial aspects of its PINN component.
We introduce a systematic numerical approach based on a novel set of holistic performance metrics.
The proposed analysis can be directly extended to other applications in science and engineering.
arXiv Detail & Related papers (2023-03-03T17:39:06Z)
- A unified scalable framework for causal sweeping strategies for Physics-Informed Neural Networks (PINNs) and their temporal decompositions [22.514769448363754]
Training challenges in PINNs and XPINNs for time-dependent PDEs are discussed.
We propose a new stacked-decomposition method that bridges the gap between PINNs and XPINNs.
We also formulate a new time-sweeping collocation point algorithm inspired by previous work on causality in PINN training.
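A causality-inspired time sweep can be caricatured as progressively unlocking the time domain; `train_step` is a hypothetical user-supplied routine that optimizes the PINN on the given collocation times.

```python
import numpy as np

def train_with_time_sweep(train_step, t_max, stages, pts_per_stage):
    """Progressively unlock the time domain so information propagates
    forward from the initial condition (illustrative sketch only)."""
    for s in range(1, stages + 1):
        front = s * t_max / stages                    # current time front
        t_pts = np.random.uniform(0.0, front, size=s * pts_per_stage)
        train_step(t_pts)                             # fit on unlocked region
```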
arXiv Detail & Related papers (2023-02-28T01:19:21Z)
- On the Generalization of PINNs outside the training domain and the Hyperparameters influencing it [1.3927943269211593]
PINNs are Neural Network architectures trained to emulate solutions of differential equations without the necessity of solution data.
We perform an empirical analysis of the behavior of PINN predictions outside their training domain.
We assess whether the algorithmic setup of PINNs can influence their potential for generalization and show the effect it has on the predictions.
arXiv Detail & Related papers (2023-02-15T09:51:56Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly-complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
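Such benchmarks hinge on the standard PINN residual for ODE systems, sketched here for a generic system u'(t) = f(t, u).

```python
import torch

def ode_residual_loss(model, t, f):
    """Standard PINN residual for a coupled ODE system u'(t) = f(t, u).
    model: maps (N, 1) times to (N, d) states; f: the right-hand side."""
    t = t.clone().requires_grad_(True)
    u = model(t)
    # time derivative of each state component via autograd
    du = torch.stack(
        [torch.autograd.grad(u[:, i].sum(), t, create_graph=True)[0][:, 0]
         for i in range(u.shape[1])], dim=1)
    return ((du - f(t, u)) ** 2).mean()
```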
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- Improved Training of Physics-Informed Neural Networks with Model Ensembles [81.38804205212425]
We propose to expand the solution interval gradually to make the PINN converge to the correct solution.
All ensemble members converge to the same solution in the vicinity of observed data.
We show experimentally that the proposed method can improve the accuracy of the found solution.
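The consensus criterion of this ensemble approach, which the B-PL-PINN paper above replaces with a posterior-variance test, can be sketched as follows (threshold and shapes are illustrative).

```python
import numpy as np

def ensemble_consensus(preds, tol):
    """Pseudo-labeling by ensemble agreement (illustrative): a point is
    trusted once all ensemble members agree to within `tol`.
    preds: (E, N) predictions of E independently trained PINNs at N points."""
    spread = preds.max(axis=0) - preds.min(axis=0)
    agree = spread <= tol
    return preds.mean(axis=0)[agree], agree  # pseudo-label values, mask
```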
arXiv Detail & Related papers (2022-04-11T14:05:34Z)
- Belief Propagation Neural Networks [103.97004780313105]
We introduce belief propagation neural networks (BPNNs).
BPNNs operate on factor graphs and generalize belief propagation (BP).
We show that BPNNs converge 1.7x faster on Ising models while providing tighter bounds.
On challenging model counting problems, BPNNs compute estimates hundreds of times faster than state-of-the-art handcrafted methods.
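For context, classical sum-product BP on a pairwise (Ising-like) factor graph performs message updates like the sketch below; BPNNs learn to modify such updates, which is not shown here.

```python
import numpy as np

def bp_step(messages, unary, pairwise, edges):
    """One synchronous sum-product update on a pairwise factor graph.
    messages[(i, j)]: current message from node i to node j (length 2).
    unary[i]: unary potential of node i; pairwise: shared 2x2 potential.
    edges must contain both directions of every undirected edge."""
    new = {}
    for (i, j) in edges:
        # product of node i's unary potential and all incoming
        # messages except the one coming back from j
        belief = unary[i].copy()
        for (k, l) in edges:
            if l == i and k != j:
                belief = belief * messages[(k, i)]
        m = pairwise.T @ belief  # marginalize out node i's state
        new[(i, j)] = m / m.sum()
    return new
```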
arXiv Detail & Related papers (2020-07-01T07:39:51Z)