Uncertainty Quantification for Machine Learning-Based Prediction: A Polynomial Chaos Expansion Approach for Joint Model and Input Uncertainty Propagation
- URL: http://arxiv.org/abs/2507.14782v1
- Date: Sun, 20 Jul 2025 01:47:50 GMT
- Title: Uncertainty Quantification for Machine Learning-Based Prediction: A Polynomial Chaos Expansion Approach for Joint Model and Input Uncertainty Propagation
- Authors: Xiaoping Du
- Abstract summary: This paper presents a robust framework based on Polynomial Chaos Expansion (PCE) to handle joint input and model uncertainty propagation. By transforming all random inputs into a unified standard space, a PCE surrogate model is constructed, allowing efficient and accurate calculation of the mean and standard deviation of the output.
- Score: 1.223779595809275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) surrogate models are increasingly used in engineering analysis and design to replace computationally expensive simulation models, significantly reducing computational cost and accelerating decision-making processes. However, ML predictions contain inherent errors, often estimated as model uncertainty, which is coupled with variability in model inputs. Accurately quantifying and propagating these combined uncertainties is essential for generating reliable engineering predictions. This paper presents a robust framework based on Polynomial Chaos Expansion (PCE) to handle joint input and model uncertainty propagation. While the approach applies broadly to general ML surrogates, we focus on Gaussian Process regression models, which provide explicit predictive distributions for model uncertainty. By transforming all random inputs into a unified standard space, a PCE surrogate model is constructed, allowing efficient and accurate calculation of the mean and standard deviation of the output. The proposed methodology also offers a mechanism for global sensitivity analysis, enabling the accurate quantification of the individual contributions of input variables and ML model uncertainty to the overall output variability. This approach provides a computationally efficient and interpretable framework for comprehensive uncertainty quantification, supporting trustworthy ML predictions in downstream engineering applications.
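To make the construction concrete, here is a minimal, hedged sketch of the kind of pipeline the abstract describes. The input distribution, the surrogate's predictive mean and standard deviation functions (`mu`, `sigma`), and the PCE order are illustrative placeholders, not the paper's settings. Model uncertainty enters as an extra standard-normal variable alongside the standardized input, and the output mean and standard deviation then follow in closed form from the PCE coefficients.

```python
# Sketch (assumed details): propagate input uncertainty X ~ N(2.0, 0.3^2)
# and GP-style model uncertainty jointly through a PCE built in a unified
# standard-normal space (U1, U2).
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite He_k
from itertools import product
from math import factorial

rng = np.random.default_rng(0)

# Hypothetical surrogate: predictive mean and std as functions of x,
# standing in for a trained Gaussian Process regressor.
mu = lambda x: np.sin(x) + 0.1 * x**2
sigma = lambda x: 0.05 + 0.02 * np.abs(x)

mx, sx, p = 2.0, 0.3, 4           # input distribution and PCE order

def he(k, u):                     # evaluate He_k(u)
    c = np.zeros(k + 1); c[k] = 1.0
    return hermeval(u, c)

# Training design in standard space; U1 maps to X, U2 carries model error.
n = 2000
U = rng.standard_normal((n, 2))
X = mx + sx * U[:, 0]
Y = mu(X) + sigma(X) * U[:, 1]    # joint sample: prediction + model error

# Least-squares PCE fit on a total-degree tensor Hermite basis.
alphas = [a for a in product(range(p + 1), repeat=2) if sum(a) <= p]
Phi = np.column_stack([he(a[0], U[:, 0]) * he(a[1], U[:, 1]) for a in alphas])
c, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

# Output moments follow in closed form from the coefficients:
# mean is the constant coefficient, variance is a norm-weighted sum.
norms = np.array([factorial(a[0]) * factorial(a[1]) for a in alphas])
mean = c[0]
var = np.sum(c[1:] ** 2 * norms[1:])
print(f"PCE mean = {mean:.4f}, std = {np.sqrt(var):.4f}")

# Cross-check against plain Monte Carlo.
Um = rng.standard_normal((200_000, 2))
Xm = mx + sx * Um[:, 0]
Ym = mu(Xm) + sigma(Xm) * Um[:, 1]
print(f"MC  mean = {Ym.mean():.4f}, std = {Ym.std():.4f}")
```

On this toy problem the PCE moments match plain Monte Carlo closely at a small fraction of the sample count, which is the efficiency argument the abstract makes.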
Related papers
- Internal Causal Mechanisms Robustly Predict Language Model Out-of-Distribution Behaviors [61.92704516732144]
We show that the most robust features for correctness prediction are those that play a distinctive causal role in the model's behavior. We propose two methods that leverage causal mechanisms to predict the correctness of model outputs.
arXiv Detail & Related papers (2025-05-17T00:31:39Z) - Scaling and renormalization in high-dimensional regression [72.59731158970894]
We present a unifying perspective on recent results on ridge regression, using the basic tools of random matrix theory and free probability, aimed at readers with backgrounds in physics and deep learning. Our results extend and unify earlier models of scaling laws.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - Variational Bayesian surrogate modelling with application to robust design optimisation [0.9626666671366836]
Surrogate models provide a quick-to-evaluate approximation to complex computational models.
We consider Bayesian inference for constructing statistical surrogates with input uncertainties and dimensionality reduction.
We demonstrate intrinsic and robust structural optimisation problems where cost functions depend on a weighted sum of the mean and standard deviation of model outputs.
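As a concrete illustration of such a cost function (the model, noise level, and weight below are invented for this sketch, not taken from the paper), a mean-plus-weighted-standard-deviation objective can be estimated by sampling the uncertain input at each candidate design:

```python
# Tiny sketch of a robust-design objective: a weighted sum of the output
# mean and standard deviation under input uncertainty.
import numpy as np

rng = np.random.default_rng(6)
output = lambda d, x: (d - 1.0) ** 2 + d * x   # design d, random input x

def robust_cost(d, kappa=2.0, n=50_000):
    y = output(d, 0.1 * rng.standard_normal(n))
    return y.mean() + kappa * y.std()          # mean + kappa * std

designs = np.linspace(0.0, 2.0, 9)
best = min(designs, key=robust_cost)
print(f"robust optimum near d = {best:.2f}")
```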
arXiv Detail & Related papers (2024-04-23T09:22:35Z) - Conformal Approach To Gaussian Process Surrogate Evaluation With Coverage Guarantees [47.22930583160043]
We propose a method for building adaptive cross-conformal prediction intervals.
The resulting conformal prediction intervals exhibit a level of adaptivity akin to Bayesian credibility sets.
The potential applicability of the method is demonstrated in the context of surrogate modeling of an expensive-to-evaluate simulator of the clogging phenomenon in steam generators of nuclear reactors.
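A heavily simplified sketch of the idea follows, using a plain split-conformal calibration step rather than the paper's cross-conformal construction; normalizing residuals by the GP's own predictive standard deviation is what makes the intervals adaptive. The simulator stand-in and data sizes are illustrative.

```python
# Split-conformal intervals around a GP surrogate (simplified sketch).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
f = lambda x: np.sin(3 * x) + 0.1 * x          # expensive simulator stand-in

X = rng.uniform(0, 3, 80)[:, None]
y = f(X.ravel()) + 0.05 * rng.standard_normal(80)
Xtr, ytr, Xcal, ycal = X[:50], y[:50], X[50:], y[50:]

gp = GaussianProcessRegressor(RBF(0.5) + WhiteKernel(1e-3)).fit(Xtr, ytr)

# Nonconformity score: absolute residual normalized by the GP's own
# predictive std, giving locally scaled (adaptive) intervals.
m, s = gp.predict(Xcal, return_std=True)
scores = np.abs(ycal - m) / s
alpha = 0.1
q = np.quantile(scores, np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores))

Xnew = np.linspace(0, 3, 5)[:, None]
mn, sn = gp.predict(Xnew, return_std=True)
for x, lo, hi in zip(Xnew.ravel(), mn - q * sn, mn + q * sn):
    print(f"x={x:.2f}: 90% interval [{lo:+.3f}, {hi:+.3f}]")
```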
arXiv Detail & Related papers (2024-01-15T14:45:18Z) - Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
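The decomposition itself can be illustrated with toy numbers (the predictive distributions below are made up, not LLM outputs): the entropy of the ensembled prediction splits into a disagreement term across clarifications, attributable to input ambiguity, and the average per-clarification entropy, attributable to the model.

```python
# Sketch of an entropy decomposition over clarification ensembles.
import numpy as np

# Hypothetical predictive distributions over 3 answers, one row per
# clarification of an ambiguous input (values are invented).
P = np.array([[0.8, 0.1, 0.1],    # clarification 1
              [0.1, 0.8, 0.1],    # clarification 2
              [0.7, 0.2, 0.1]])   # clarification 3

H = lambda p: -(p * np.log(np.clip(p, 1e-12, 1))).sum(-1)

total = H(P.mean(axis=0))         # entropy of the ensembled prediction
model = H(P).mean()               # mean per-clarification entropy
print(f"total {total:.3f} = input-ambiguity {total - model:.3f} "
      f"+ model {model:.3f}")
```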
arXiv Detail & Related papers (2023-11-15T05:58:35Z) - Polynomial Chaos Surrogate Construction for Random Fields with Parametric Uncertainty [0.0]
Surrogate models provide a means of circumventing the high computational expense of complex models.
We develop a PCE surrogate on a joint space of intrinsic and parametric uncertainty, enabled by the Rosenblatt transformation.
We then take advantage of closed-form solutions for computing PCE Sobol indices to perform a global sensitivity analysis of the model.
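Those closed-form solutions are simple to state: each squared PCE coefficient, weighted by its basis norm, is a variance contribution, and Sobol indices are ratios of sums of these contributions. Below is a self-contained sketch on a toy function with a known ANOVA structure (the function and its coefficients are chosen purely for illustration):

```python
# First-order and total Sobol indices read off a fitted PCE's coefficients.
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from itertools import product
from math import factorial

rng = np.random.default_rng(1)

def he(k, u):                     # evaluate He_k(u)
    c = np.zeros(k + 1); c[k] = 1.0
    return hermeval(u, c)

# Toy model in standard-normal variables with known variance terms.
f = lambda u: 1.0 + 2.0 * u[:, 0] + 0.5 * u[:, 1] + 0.8 * u[:, 0] * u[:, 1]

p, n = 3, 4000
U = rng.standard_normal((n, 2))
alphas = [a for a in product(range(p + 1), repeat=2) if sum(a) <= p]
Phi = np.column_stack([he(a[0], U[:, 0]) * he(a[1], U[:, 1]) for a in alphas])
c, *_ = np.linalg.lstsq(Phi, f(U), rcond=None)

norms = np.array([factorial(a) * factorial(b) for a, b in alphas])
var_terms = c**2 * norms
total_var = var_terms[1:].sum()           # exclude the constant term

for i in range(2):
    first = sum(v for a, v in zip(alphas, var_terms)
                if a[i] > 0 and sum(a) == a[i])   # only u_i appears
    tot = sum(v for a, v in zip(alphas, var_terms) if a[i] > 0)
    print(f"S_{i+1} = {first/total_var:.3f}, S_T{i+1} = {tot/total_var:.3f}")
# Analytic check: Var = 4 + 0.25 + 0.64 = 4.89, so S_1 = 4/4.89 ~ 0.818.
```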
arXiv Detail & Related papers (2023-11-01T14:41:54Z) - Efficient Propagation of Uncertainty via Reordering Monte Carlo Samples [0.7087237546722617]
Uncertainty propagation is a technique to determine model output uncertainties based on the uncertainty in its input variables.
In this work, we investigate the hypothesis that while all samples are useful on average, some samples must be more useful than others.
We introduce a methodology to adaptively reorder MC samples and show how it results in reduction of computational expense of UP processes.
arXiv Detail & Related papers (2023-02-09T21:28:15Z) - Robust Output Analysis with Monte-Carlo Methodology [0.0]
In predictive modeling with simulation or machine learning, it is critical to accurately assess the quality of estimated values.
We propose a unified output analysis framework for simulation and machine learning outputs through the lens of Monte Carlo sampling.
arXiv Detail & Related papers (2022-07-27T16:21:59Z) - Robustness of Machine Learning Models Beyond Adversarial Attacks [0.0]
We show that the widely used concept of adversarial robustness and closely related metrics are not necessarily valid metrics for determining the robustness of ML models.
We propose a flexible approach that models possible perturbations in input data individually for each application.
This is then combined with a probabilistic approach that computes the likelihood that a real-world perturbation will change a prediction.
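A minimal sketch of that probabilistic step follows, with an invented linear classifier and an invented two-part perturbation model standing in for an application-specific one:

```python
# Estimate the probability that a realistic perturbation flips a prediction.
import numpy as np

rng = np.random.default_rng(3)

# Stand-in classifier: linear decision rule on 2-D inputs.
w, b = np.array([1.5, -0.7]), 0.2
predict = lambda X: (X @ w + b > 0).astype(int)

# Application-specific perturbation model, here sensor noise plus a small
# calibration drift on the first feature (both assumptions).
def perturb(x, n):
    noise = 0.1 * rng.standard_normal((n, 2))
    drift = np.column_stack([0.05 * rng.standard_normal(n), np.zeros(n)])
    return x + noise + drift

x0 = np.array([0.3, 0.9])
base = predict(x0[None, :])[0]
flips = predict(perturb(x0, 100_000)) != base
print(f"P(prediction changes) ~ {flips.mean():.4f}")
```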
arXiv Detail & Related papers (2022-04-21T12:09:49Z) - CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds.
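In rough outline, such a certificate pairs a Monte Carlo estimate with a high-probability upper bound. The sketch below is deliberately simplified: it uses a Hoeffding bound on the empirical failure rate rather than the paper's Chernoff-Cramer machinery, and the model and transformation are toy stand-ins.

```python
# Toy probabilistic certificate against a random semantic transformation.
import numpy as np

rng = np.random.default_rng(4)
predict = lambda X: (X @ np.array([1.0, 1.0]) > 1.0).astype(int)

def transform(x, theta):          # semantically meaningful rescaling
    return x * (1 + theta)[:, None]

x0 = np.array([0.7, 0.45])
n, delta = 50_000, 1e-3
theta = rng.uniform(-0.2, 0.2, n)             # transformation parameter
fail = predict(transform(x0[None, :], theta)) != predict(x0[None, :])[0]
p_hat = fail.mean()
# One-sided Hoeffding bound: true rate <= p_hat + eps with prob >= 1-delta.
bound = p_hat + np.sqrt(np.log(1 / delta) / (2 * n))
print(f"empirical failure rate {p_hat:.4f}; "
      f"certified <= {bound:.4f} with prob >= {1 - delta}")
```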
arXiv Detail & Related papers (2021-09-22T12:46:04Z) - CovarianceNet: Conditional Generative Model for Correct Covariance Prediction in Human Motion Prediction [71.31516599226606]
We present a new method to correctly predict the uncertainty associated with the predicted distribution of future trajectories.
Our approach, CovarianceNet, is based on a Conditional Generative Model with Gaussian latent variables.
arXiv Detail & Related papers (2021-09-07T09:38:24Z) - Hessian-based toolbox for reliable and interpretable machine learning in physics [58.720142291102135]
We present a toolbox for interpretability and reliability that is agnostic of the model architecture.
It provides a notion of the influence of the training data on the prediction at a given test point, an estimation of the uncertainty of the model predictions, and an agnostic score for how far the model extrapolates beyond its training data.
Our work opens the road to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
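As an illustrative sketch of one such component (a Laplace-approximation uncertainty estimate on a toy ridge-regression model, not the authors' exact toolbox), the loss Hessian at the trained weights yields a predictive variance at any test point:

```python
# Hessian-based (Laplace) epistemic uncertainty for a toy linear model.
import numpy as np

rng = np.random.default_rng(5)
n, d, lam, noise = 40, 3, 1e-2, 0.1
X = rng.standard_normal((n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + noise * rng.standard_normal(n)

# MAP weights and the Hessian of
# L(w) = ||Xw - y||^2 / (2 noise^2) + lam ||w||^2 / 2.
H = X.T @ X / noise**2 + lam * np.eye(d)
w_map = np.linalg.solve(H, X.T @ y / noise**2)

# Predictive variance at a test point: g^T H^{-1} g, where g = x is the
# gradient of the prediction with respect to the weights.
x_test = np.array([0.2, 1.0, -0.5])
var = x_test @ np.linalg.solve(H, x_test)
print(f"prediction {x_test @ w_map:.3f} +/- {np.sqrt(var):.3f} (epistemic)")
```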
arXiv Detail & Related papers (2021-08-04T16:32:59Z)