Calibrated Physics-Informed Uncertainty Quantification
- URL: http://arxiv.org/abs/2502.04406v2
- Date: Tue, 10 Jun 2025 16:38:08 GMT
- Title: Calibrated Physics-Informed Uncertainty Quantification
- Authors: Vignesh Gopakumar, Ander Gray, Lorenzo Zanisi, Timothy Nunn, Daniel Giles, Matt J. Kusner, Stanislas Pamela, Marc Peter Deisenroth
- Abstract summary: We introduce a model-agnostic, physics-informed conformal prediction framework. This framework provides guaranteed uncertainty estimates without requiring labelled data. We further validate our method on neural PDE models for plasma modelling and shot design in fusion reactors.
- Score: 16.985414812517252
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simulating complex physical systems is crucial for understanding and predicting phenomena across diverse fields, such as fluid dynamics and heat transfer, as well as plasma physics and structural mechanics. Traditional approaches rely on solving partial differential equations (PDEs) using numerical methods, which are computationally expensive and often prohibitively slow for real-time applications or large-scale simulations. Neural PDEs have emerged as efficient alternatives to these costly numerical solvers, offering significant computational speed-ups. However, their lack of robust uncertainty quantification (UQ) limits deployment in critical applications. We introduce a model-agnostic, physics-informed conformal prediction (CP) framework that provides guaranteed uncertainty estimates without requiring labelled data. By utilising a physics-based approach, we can quantify and calibrate the model's inconsistencies with the physics rather than the uncertainty arising from the data. Our approach utilises convolutional layers as finite-difference stencils and leverages physics residual errors as nonconformity scores, enabling data-free UQ with marginal and joint coverage guarantees across prediction domains for a range of complex PDEs. We further validate the efficacy of our method on neural PDE models for plasma modelling and shot design in fusion reactors.
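To make the core recipe concrete, here is a minimal sketch, assuming a 1D heat-equation surrogate: a fixed finite-difference stencil applied as a convolution yields physics residuals, those residuals serve as nonconformity scores on unlabelled calibration inputs, and their conformal quantile sets the band width. The constants, stencil, and the final band construction are illustrative simplifications, not the authors' implementation.

```python
# Hypothetical sketch: physics residuals as conformal nonconformity scores
# for a neural surrogate of the 1D heat equation u_t = nu * u_xx.
import torch
import torch.nn.functional as F

nu, dx, dt, alpha = 0.1, 0.01, 0.001, 0.1       # illustrative constants

def physics_residual(u_prev, u_next):
    """|(u_next - u_prev)/dt - nu * u_xx|, with u_xx from a finite-difference conv stencil."""
    stencil = torch.tensor([[[1.0, -2.0, 1.0]]]) / dx**2   # central second-difference kernel
    u_xx = F.conv1d(u_next.unsqueeze(1), stencil, padding=1).squeeze(1)
    return torch.abs((u_next - u_prev) / dt - nu * u_xx)

def conformal_band(surrogate, calib_u, test_u):
    """calib_u, test_u: (n_samples, n_grid) states; surrogate maps u_t to u_{t+dt}."""
    preds = surrogate(calib_u)                                          # no labels needed
    scores = physics_residual(calib_u, preds).flatten(1).amax(dim=1)    # one score per sample
    n = scores.numel()
    q = torch.quantile(scores, min(1.0, (n + 1) * (1 - alpha) / n))     # finite-sample quantile
    pred = surrogate(test_u)
    return pred - q, pred + q      # simplified residual-width band around the prediction
```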
Related papers
- Hybrid Generative Modeling for Incomplete Physics: Deep Grey-Box Meets Optimal Transport [48.06072022424773]
Many real-world systems are described only approximately, with missing or unknown terms in the equations. This makes the distribution of the physics model differ from the true data-generating process (DGP). We present a novel hybrid generative model approach combining deep grey-box modelling with Optimal Transport (OT) methods to enhance incomplete physics models.
arXiv Detail & Related papers (2025-06-27T13:23:27Z) - Flow Matching Meets PDEs: A Unified Framework for Physics-Constrained Generation [21.321570407292263]
We propose Physics-Based Flow Matching, a generative framework that embeds physical constraints, both PDE residuals and algebraic relations, into the flow matching objective. We show that our approach yields physical residuals up to $8\times$ more accurate than FM, while clearly outperforming existing algorithms in terms of distributional accuracy.
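As a rough illustration of how a PDE residual can enter a flow matching objective, the sketch below adds a residual penalty on an endpoint estimate to the standard conditional FM loss. The linear interpolation path, the endpoint estimate, and the weighting `lam` are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch: flow matching loss augmented with a PDE residual penalty.
import torch

def pbfm_loss(v_theta, x0, x1, pde_residual, lam=1.0):
    """x0: noise samples, x1: data samples, both (batch, dim);
    pde_residual: callable returning the physics residual of a sample (assumed)."""
    t = torch.rand(x0.shape[0], 1)                 # random time in [0, 1]
    x_t = (1 - t) * x0 + t * x1                    # straight-line interpolation path
    v = v_theta(x_t, t)
    fm = ((v - (x1 - x0)) ** 2).mean()             # conditional flow matching term
    x1_hat = x_t + (1 - t) * v                     # crude one-step endpoint estimate
    phys = (pde_residual(x1_hat) ** 2).mean()      # PDE residual as a soft constraint
    return fm + lam * phys
```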
arXiv Detail & Related papers (2025-06-10T09:13:37Z) - EquiNO: A Physics-Informed Neural Operator for Multiscale Simulations [0.8345452787121658]
We propose EquiNO as a complementary physics-informed PDE surrogate for predicting microscale physics. Our framework, applicable to the so-called multiscale FE$^2$ computations, introduces the FE-OL approach by integrating the finite element (FE) method with operator learning (OL).
arXiv Detail & Related papers (2025-03-27T08:42:13Z) - Paving the way for scientific foundation models: enhancing generalization and robustness in PDEs with constraint-aware pre-training [49.8035317670223]
A scientific foundation model (SciFM) is emerging as a promising tool for learning transferable representations across diverse domains.
We propose incorporating PDE residuals into pre-training either as the sole learning signal or in combination with data loss to compensate for limited or infeasible training data.
Our results show that pre-training with PDE constraints significantly enhances generalization, outperforming models trained solely on solution data.
arXiv Detail & Related papers (2025-03-24T19:12:39Z) - MultiPDENet: PDE-embedded Learning with Multi-time-stepping for Accelerated Flow Simulation [48.41289705783405]
We propose a PDE-embedded network with multiscale time stepping (MultiPDENet). In particular, we design a convolutional filter based on the structure of finite differences with a small number of parameters to optimize. A Physics Block with a fourth-order Runge-Kutta integrator at the fine time scale embeds the structure of the PDEs to guide the prediction.
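A minimal sketch of such a physics block follows, assuming a 1D state and a diffusion-like right-hand side: a learnable three-point stencil plays the role of the finite-difference filter, and a classical fourth-order Runge-Kutta step advances the fine time scale. The kernel initialisation and step size are placeholders, not MultiPDENet's actual design.

```python
# Hypothetical sketch: RK4 physics block driven by a learnable finite-difference filter.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PhysicsBlock(nn.Module):
    def __init__(self, dx=0.01):
        super().__init__()
        # learnable 3-point stencil, initialised to a central second difference
        self.stencil = nn.Parameter(torch.tensor([[[1.0, -2.0, 1.0]]]) / dx**2)

    def rhs(self, u):
        # right-hand side du/dt approximated by the convolutional stencil
        return F.conv1d(u.unsqueeze(1), self.stencil, padding=1).squeeze(1)

    def forward(self, u, dt=1e-3):
        # classical fourth-order Runge-Kutta step at the fine time scale
        k1 = self.rhs(u)
        k2 = self.rhs(u + 0.5 * dt * k1)
        k3 = self.rhs(u + 0.5 * dt * k2)
        k4 = self.rhs(u + dt * k3)
        return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```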
arXiv Detail & Related papers (2025-01-27T12:15:51Z) - Gradient-Free Generation for Hard-Constrained Systems [41.558608119074755]
Existing constrained generative models rely heavily on gradient information, which is often sparse or computationally expensive in some fields. We introduce a novel framework for adapting pre-trained, unconstrained flow-matching models to satisfy constraints exactly in a zero-shot manner.
arXiv Detail & Related papers (2024-12-02T18:36:26Z) - Adaptation of uncertainty-penalized Bayesian information criterion for parametric partial differential equation discovery [1.1049608786515839]
We introduce an extension of the uncertainty-penalized Bayesian information criterion (UBIC) to solve parametric PDE discovery problems efficiently.
UBIC uses quantified PDE uncertainty over different temporal or spatial points to prevent overfitting in model selection.
We show that our extended UBIC can identify the true number of terms and their varying coefficients accurately, even in the presence of noise.
arXiv Detail & Related papers (2024-08-15T12:10:50Z) - Physics-Aware Neural Implicit Solvers for multiscale, parametric PDEs with applications in heterogeneous media [1.8416014644193066]
We propose a novel, data-driven framework for learning surrogates for parametrized Partial Differential Equations (PDEs).
It consists of a probabilistic learning objective in which weighted residuals are used to probe the PDE and provide a source of virtual data, i.e., the actual PDE never needs to be solved.
This is combined with a physics-aware implicit solver that consists of a much coarser, discretized version of the original PDE.
arXiv Detail & Related papers (2024-05-29T12:01:49Z) - Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers [55.0876373185983]
We present the Universal PDE solver (Unisolver), capable of solving a wide range of PDEs.
Our key finding is that a PDE solution is fundamentally under the control of a series of PDE components.
Unisolver achieves consistent state-of-the-art results on three challenging large-scale benchmarks.
arXiv Detail & Related papers (2024-05-27T15:34:35Z) - Physics-constrained polynomial chaos expansion for scientific machine learning and uncertainty quantification [6.739642016124097]
We present a novel physics-constrained polynomial chaos expansion as a surrogate modeling method capable of performing both scientific machine learning (SciML) and uncertainty quantification (UQ) tasks.
The proposed method seamlessly integrates SciML into UQ and vice versa, which allows it to quantify the uncertainties in SciML tasks effectively and leverage SciML for improved uncertainty assessment during UQ-related tasks.
arXiv Detail & Related papers (2024-02-13T11:22:59Z) - Uncertainty Quantification for Forward and Inverse Problems of PDEs via Latent Global Evolution [110.99891169486366]
We propose a method that integrates efficient and precise uncertainty quantification into a deep learning-based surrogate model.
Our method endows deep learning-based surrogate models with robust and efficient uncertainty quantification capabilities for both forward and inverse problems.
Our method excels at propagating uncertainty over extended auto-regressive rollouts, making it suitable for scenarios involving long-term predictions.
arXiv Detail & Related papers (2024-02-13T11:22:59Z) - Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z) - A Posteriori Evaluation of a Physics-Constrained Neural Ordinary Differential Equations Approach Coupled with CFD Solver for Modeling Stiff Chemical Kinetics [4.125745341349071]
We extend the NeuralODE framework for stiff chemical kinetics by incorporating mass conservation constraints directly into the loss function during training.
This ensures that the total mass and the elemental mass are conserved, a critical requirement for reliable downstream integration with CFD solvers.
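A hedged sketch of one way such a conservation constraint can enter the training loss is given below; the species-to-element matrix, the sum-to-one convention for mass fractions, and the weightings are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch: penalising violations of total and elemental mass conservation.
import torch

def conservation_loss(y_pred, y_true, elem_matrix, w_data=1.0, w_mass=1.0):
    """y_pred, y_true: (batch, n_species) mass fractions;
    elem_matrix: (n_elements, n_species) map from species to elemental mass (assumed layout)."""
    data = ((y_pred - y_true) ** 2).mean()
    total = (y_pred.sum(dim=-1) - 1.0).pow(2).mean()           # mass fractions should sum to 1
    elem = ((y_pred - y_true) @ elem_matrix.T).pow(2).mean()   # drift in elemental mass
    return w_data * data + w_mass * (total + elem)
```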
arXiv Detail & Related papers (2023-11-22T22:40:49Z) - Physics-Informed Polynomial Chaos Expansions [7.5746822137722685]
This paper presents a novel methodology for the construction of physics-informed polynomial chaos expansions (PCE).
A computationally efficient means for physically constrained PCE is proposed and compared to standard sparse PCE.
We show that the constrained PCEs can be easily applied for uncertainty quantification through analytical post-processing.
arXiv Detail & Related papers (2023-09-04T16:16:34Z) - Adaptive Uncertainty-Guided Model Selection for Data-Driven PDE Discovery [3.065513003860786]
We propose a new parameter-adaptive uncertainty-penalized Bayesian information criterion (UBIC) to prioritize the parsimonious partial differential equation (PDE).
We numerically affirm the successful application of the UBIC in identifying the true governing PDE.
We reveal an interesting effect of denoising the observed data on improving the trade-off between the BIC score and model complexity.
arXiv Detail & Related papers (2023-08-20T14:36:45Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
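As a loose illustration of the particle-ensemble idea, the sketch below uses the Feynman-Kac representation of the 1D heat equation to build Monte Carlo regression targets for an unsupervised neural solver; the specific equation, sampling scheme, and loss are assumptions for illustration and not the paper's construction.

```python
# Hypothetical sketch: Monte Carlo (Feynman-Kac) targets for the heat equation
# u_t = nu * u_xx, used to fit a neural solver without labelled solution data.
import torch

def mc_target(u0, x, t, nu=0.1, n_particles=256):
    """Particle estimate of u(x, t) = E[u0(x + sqrt(2*nu*t) * Z)], Z ~ N(0, 1)."""
    z = torch.randn(n_particles, *x.shape)
    return u0(x + (2.0 * nu * t) ** 0.5 * z).mean(dim=0)

def mc_loss(u_theta, u0, x, t):
    # regress the network onto the particle-ensemble estimate of the solution
    return ((u_theta(x, t) - mc_target(u0, x, t)) ** 2).mean()
```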
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z) - Bayesian neural networks for weak solution of PDEs with uncertainty quantification [3.4773470589069473]
A new physics-constrained neural network (NN) approach is proposed to solve PDEs without labels.
We write the loss function of NNs based on the discretized residual of PDEs through an efficient, convolutional operator-based, and vectorized implementation.
We demonstrate the capability and performance of the proposed framework by applying it to steady-state diffusion, linear elasticity, and nonlinear elasticity.
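A small sketch of a convolutional, label-free residual loss of this kind is given below, assuming a 2D steady-state diffusion problem and a five-point Laplacian stencil; boundary handling and weighting are omitted, and none of it is the paper's exact implementation.

```python
# Illustrative sketch: label-free training loss from the discretized residual of
# steady-state diffusion, -laplacian(u) = f, via a convolutional 5-point stencil.
import torch
import torch.nn.functional as F

def residual_loss(u_pred, f, dx=0.01):
    """u_pred, f: (batch, 1, H, W). Interior residual of -u_xx - u_yy = f."""
    stencil = torch.tensor([[[[0.0, 1.0, 0.0],
                              [1.0, -4.0, 1.0],
                              [0.0, 1.0, 0.0]]]]) / dx**2
    lap = F.conv2d(u_pred, stencil)           # valid convolution: interior points only
    res = -lap - f[..., 1:-1, 1:-1]           # align the source term with the interior
    return (res ** 2).mean()
```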
arXiv Detail & Related papers (2021-01-13T04:57:51Z) - APIK: Active Physics-Informed Kriging Model with Partial Differential Equations [6.918364447822299]
We present a PDE Informed Kriging model (PIK), which introduces PDE information via a set of PDE points and conducts posterior prediction similar to the standard kriging method.
To further improve learning performance, we propose an Active PIK framework (APIK) that designs PDE points to leverage the PDE information based on the PIK model and measurement data.
arXiv Detail & Related papers (2020-12-22T02:31:26Z) - Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions, as well as state-of-the-art numerical solvers such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)