Prediction error certification for PINNs: Theory, computation, and application to Stokes flow
- URL: http://arxiv.org/abs/2508.07994v1
- Date: Mon, 11 Aug 2025 13:57:02 GMT
- Title: Prediction error certification for PINNs: Theory, computation, and application to Stokes flow
- Authors: Birgit Hillebrecht, Benjamin Unger
- Abstract summary: Rigorous error estimation is a fundamental topic in numerical analysis. With the increasing use of physics-informed neural networks (PINNs) for solving partial differential equations, several approaches have been developed to quantify the associated prediction error. We build upon a semigroup-based framework previously introduced by the authors for estimating the PINN error.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Rigorous error estimation is a fundamental topic in numerical analysis. With the increasing use of physics-informed neural networks (PINNs) for solving partial differential equations, several approaches have been developed to quantify the associated prediction error. In this work, we build upon a semigroup-based framework previously introduced by the authors for estimating the PINN error. While this estimator has so far been limited to academic examples - due to the need to compute quantities related to input-to-state stability - we extend its applicability to a significantly broader class of problems. This is accomplished by modifying the error bound and proposing numerical strategies to approximate the required stability parameters. The extended framework enables the certification of PINN predictions in more realistic scenarios, as demonstrated by a numerical study of Stokes flow around a cylinder.
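The semigroup-based certification idea can be sketched in generic terms (notation here is illustrative, not the paper's exact statement): for an evolution equation whose right-hand side generates a semigroup, an input-to-state-stability-type estimate bounds the prediction error by the initial error plus an accumulated PINN residual.

```latex
% Generic sketch (not the paper's exact bound): for $\dot{u} = A u + g(u)$,
% with $A$ generating a semigroup and PINN prediction $\hat{u}$ having
% residual $r(t) := \dot{\hat{u}} - A\hat{u} - g(\hat{u})$, the error
% $e := \hat{u} - u$ satisfies an estimate of the form
\[
  \|e(t)\| \;\le\; M e^{\omega t}\,\|e(0)\|
  \;+\; \int_0^t M e^{\omega (t-s)}\,\|r(s)\|\,\mathrm{d}s ,
\]
% where $M \ge 1$ and $\omega$ are growth constants of the semigroup --
% the stability parameters that, per the abstract, must be approximated
% numerically for the bound to be computable in realistic scenarios.
```

Evaluating the residual norm at collocation points and approximating the constants $M$ and $\omega$ is what makes such a bound computable beyond academic examples.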
Related papers
- Statistical Learning Analysis of Physics-Informed Neural Networks [0.0]
We study the training and performance of physics-informed learning for initial and boundary value problems with physics-informed neural networks (PINNs). We use the so-called Local Learning Coefficient to analyze the estimates of PINN parameters obtained via optimization for a heat equation.
arXiv Detail & Related papers (2026-02-11T18:09:29Z)
- Ising on the donut: Regimes of topological quantum error correction from statistical mechanics [0.0]
Utility-scale quantum computers require quantum error correcting codes with large numbers of physical qubits to achieve sufficiently low logical error rates. Here we exploit an exact mapping, from a toric code under bit-flip noise that is post-selected on being syndrome free to the exactly-solvable two-dimensional Ising model on a torus, to derive an analytic solution for the logical failure rate.
arXiv Detail & Related papers (2025-12-11T08:06:23Z)
- From Distributional to Quantile Neural Basis Models: the case of Electricity Price Forecasting [42.062078728472734]
We introduce the Quantile Neural Basis Model, which incorporates the interpretability principles of Quantile Generalized Additive Models. We validate our approach on day-ahead electricity price forecasting, achieving predictive performance comparable to distributional and quantile regression neural networks.
arXiv Detail & Related papers (2025-09-17T15:55:59Z)
- MultiPDENet: PDE-embedded Learning with Multi-time-stepping for Accelerated Flow Simulation [48.41289705783405]
We propose a PDE-embedded network with multiscale time stepping (MultiPDENet). In particular, we design a convolutional filter based on the structure of finite difference with a small number of parameters to optimize. A Physics Block with a 4th-order Runge-Kutta integrator at the fine time scale is established that embeds the structure of PDEs to guide the prediction.
arXiv Detail & Related papers (2025-01-27T12:15:51Z)
- Semiparametric conformal prediction [79.6147286161434]
We construct a conformal prediction set accounting for the joint correlation structure of the vector-valued non-conformity scores. We flexibly estimate the joint cumulative distribution function (CDF) of the scores. Our method yields desired coverage and competitive efficiency on a range of real-world regression problems.
arXiv Detail & Related papers (2024-11-04T14:29:02Z) - Scalable Subsampling Inference for Deep Neural Networks [0.0]
A non-asymptotic error bound has been developed to measure the performance of the fully connected DNN estimator.
A non-random subsampling technique--scalable subsampling--is applied to construct a "subagged" DNN estimator.
The proposed confidence/prediction intervals appear to work well in finite samples.
arXiv Detail & Related papers (2024-05-14T02:11:38Z)
- Using Uncertainty Quantification to Characterize and Improve Out-of-Domain Learning for PDEs [44.890946409769924]
Neural Operators (NOs) have emerged as a particularly promising approach.
We show that ensembling several NOs can identify high-error regions and provide good uncertainty estimates.
We then introduce Operator-ProbConserv, a method that uses these well-calibrated UQ estimates within the ProbConserv framework to update the model.
arXiv Detail & Related papers (2024-03-15T19:21:27Z)
- Preconditioning for Physics-Informed Neural Networks [25.697465351286564]
We propose to use condition number as a metric to diagnose and mitigate the pathologies in PINNs.
We prove theorems to reveal how condition number is related to both the error control and convergence of PINNs.
We present an algorithm that leverages preconditioning to improve the condition number.
arXiv Detail & Related papers (2024-02-01T11:58:28Z)
- Error estimation for physics-informed neural networks with implicit Runge-Kutta methods [0.0]
In this work, we propose to use the NN's predictions in a high-order implicit Runge-Kutta (IRK) method.
The residuals in the implicit system of equations can be related to the NN's prediction error, hence, we can provide an error estimate at several points along a trajectory.
We find that this error estimate highly correlates with the NN's prediction error and that increasing the order of the IRK method improves this estimate.
arXiv Detail & Related papers (2024-01-10T15:18:56Z)
- Uncertainty Quantification in Multivariable Regression for Material Property Prediction with Bayesian Neural Networks [37.69303106863453]
We introduce an approach for uncertainty quantification (UQ) within physics-informed BNNs.
We present case studies for predicting the creep rupture life of steel alloys.
The most promising framework for creep life prediction is BNNs based on Markov Chain Monte Carlo approximation of the posterior distribution of network parameters.
arXiv Detail & Related papers (2023-11-04T19:40:16Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important in forecasting nonstationary processes or with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the resulting tessellation and approximate the multiple-hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- Analyzing Prospects for Quantum Advantage in Topological Data Analysis [35.423446067065576]
We analyze and optimize an improved quantum algorithm for topological data analysis.
We show that super-quadratic quantum speedups are only possible when targeting a multiplicative error approximation.
We argue that quantum circuits with tens of billions of Toffoli gates can solve seemingly classically intractable instances.
arXiv Detail & Related papers (2022-09-27T17:56:15Z)
- Neural Estimation of Statistical Divergences [24.78742908726579]
A modern method for estimating statistical divergences relies on parametrizing an empirical variational form by a neural network (NN).
In particular, there is a fundamental tradeoff between the two sources of error involved: approximation and empirical estimation.
We show that neural estimators with a slightly different NN growth-rate are near minimax rate-optimal, achieving the parametric convergence rate up to logarithmic factors.
arXiv Detail & Related papers (2021-10-07T17:42:44Z)
- Non-Asymptotic Performance Guarantees for Neural Estimation of $\mathsf{f}$-Divergences [22.496696555768846]
Statistical distances quantify the dissimilarity between probability distributions.
A modern method for estimating such distances from data relies on parametrizing a variational form by a neural network (NN) and optimizing it.
This paper explores the tradeoff between approximation and empirical estimation errors by means of non-asymptotic error bounds, focusing on three popular choices of SDs.
arXiv Detail & Related papers (2021-03-11T19:47:30Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.