Conformal prediction for full and sparse polynomial chaos expansions
- URL: http://arxiv.org/abs/2601.16636v1
- Date: Fri, 23 Jan 2026 10:46:46 GMT
- Title: Conformal prediction for full and sparse polynomial chaos expansions
- Authors: A. Hatstatt, X. Zhu, B. Sudret
- Abstract summary: Polynomial Chaos Expansions (PCEs) are widely recognized for their efficient computational performance in surrogate modeling. We present the integration of two conformal prediction methods, namely the full conformal and the Jackknife+ approaches, into both full and sparse PCEs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Polynomial Chaos Expansions (PCEs) are widely recognized for their efficient computational performance in surrogate modeling. Yet, a robust framework to quantify local model errors is still lacking. While the local uncertainty of PCE prediction can be captured using bootstrap resampling, other methods offering more rigorous statistical guarantees are needed, especially in the context of small training datasets. Recently, conformal prediction has demonstrated strong potential in machine learning, providing statistically robust and model-agnostic prediction intervals. Due to its generality and versatility, conformal prediction is especially valuable, as it can be adapted to suit a variety of problems, making it a compelling choice for surrogate modeling. In this contribution, we explore its application to PCE-based surrogate models. More precisely, we present the integration of two conformal prediction methods, namely the full conformal and the Jackknife+ approaches, into both full and sparse PCEs. For full PCEs, we introduce computational shortcuts inspired by the inherent structure of regression methods to optimize the implementation of both conformal methods. For sparse PCEs, we incorporate the two approaches with appropriate modifications to the inference strategy, thereby circumventing the non-symmetrical nature of the regression algorithm and ensuring valid prediction intervals. Our developments yield better-calibrated prediction intervals for both full and sparse PCEs, achieving superior coverage over existing approaches, such as the bootstrap, while maintaining a moderate computational cost.
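The Jackknife+ construction mentioned in the abstract is generic and can be sketched independently of the paper's PCE-specific shortcuts. Below is a minimal sketch in plain NumPy, using a simple 1-D polynomial least-squares surrogate as a stand-in for an actual PCE; the function names, the toy data, and the polynomial degree are illustrative assumptions, and none of the paper's full-conformal shortcuts or sparse-PCE modifications are reproduced here.

```python
import numpy as np

def fit_poly(X, y, degree=3):
    """Least-squares fit of a 1-D polynomial surrogate (stand-in for a PCE)."""
    coeffs = np.polyfit(X, y, degree)
    return lambda x: np.polyval(coeffs, x)

def jackknife_plus_interval(X, y, x_new, alpha=0.1, degree=3):
    """Jackknife+ prediction interval at x_new.

    For each training point i, refit the surrogate without it, record the
    leave-one-out residual R_i, and combine the leave-one-out prediction at
    x_new with +/- R_i before taking order statistics.
    """
    n = len(X)
    lo_vals, hi_vals = np.empty(n), np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        f_loo = fit_poly(X[mask], y[mask], degree)
        r_i = abs(y[i] - f_loo(X[i]))   # leave-one-out residual
        pred = f_loo(x_new)             # leave-one-out prediction at x_new
        lo_vals[i] = pred - r_i
        hi_vals[i] = pred + r_i
    # lower bound: floor(alpha*(n+1))-th smallest of the lo values,
    # upper bound: ceil((1-alpha)*(n+1))-th smallest of the hi values
    k_lo = int(np.floor(alpha * (n + 1))) - 1
    k_hi = int(np.ceil((1 - alpha) * (n + 1))) - 1
    return np.sort(lo_vals)[max(k_lo, 0)], np.sort(hi_vals)[min(k_hi, n - 1)]

# Toy usage: noisy cubic response, interval at a single new input.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 40)
y = X**3 - X + 0.05 * rng.normal(size=40)
lo, hi = jackknife_plus_interval(X, y, x_new=0.5)
```

The n leave-one-out refits are what the paper's computational shortcuts for full PCEs aim to avoid recomputing from scratch; this sketch simply pays the full cost.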
Related papers
- Fast Conformal Prediction using Conditional Interquantile Intervals
We introduce Conformal Interquantile Regression (CIR), a conformal regression method that constructs near-minimal prediction intervals with guaranteed coverage. We also propose CIR+, which enhances CIR by incorporating a width-based selection rule for interquantile intervals.
arXiv Detail & Related papers (2026-01-06T07:01:08Z)
- Model-Based Policy Adaptation for Closed-Loop End-to-End Autonomous Driving
We propose Model-based Policy Adaptation (MPA), a general framework that enhances the robustness and safety of pretrained E2E driving agents during deployment. MPA first generates diverse counterfactual trajectories using a geometry-consistent simulation engine. MPA trains a diffusion-based policy adapter to refine the base policy's predictions and a multi-step Q value model to evaluate long-term outcomes.
arXiv Detail & Related papers (2025-11-26T17:01:41Z)
- Bridging the Gap Between Bayesian Deep Learning and Ensemble Weather Forecasts
Weather forecasting is fundamentally challenged by the chaotic nature of the atmosphere. Recent advances in Bayesian Deep Learning (BDL) offer a promising but often disconnected alternative. We bridge these paradigms through a unified hybrid BDL framework for ensemble weather forecasting.
arXiv Detail & Related papers (2025-11-18T07:49:52Z)
- End-to-End Probabilistic Framework for Learning with Hard Constraints
We present ProbHardE2E, a probabilistic forecasting framework that incorporates hard operational/physical constraints. Our methodology uses a novel differentiable probabilistic projection layer (DPPL) that can be combined with a wide range of neural network architectures.
arXiv Detail & Related papers (2025-06-08T05:29:50Z)
- Self-Boost via Optimal Retraining: An Analysis via Approximate Message Passing
Retraining a model using its own predictions together with the original, potentially noisy labels is a well-known strategy for improving model performance. This paper addresses the question of how to optimally combine the model's predictions and the provided labels. Our main contribution is the derivation of the Bayes optimal aggregator function to combine the current model's predictions and the given labels.
arXiv Detail & Related papers (2025-05-21T07:16:44Z)
- Unveil Sources of Uncertainty: Feature Contribution to Conformal Prediction Intervals
We propose a novel, model-agnostic uncertainty attribution (UA) method grounded in conformal prediction (CP). By defining cooperative games where CP interval properties, such as width and bounds, serve as value functions, we attribute predictive uncertainty to input features. Our experiments on synthetic benchmarks and real-world datasets demonstrate the practical utility and interpretative depth of our approach.
arXiv Detail & Related papers (2025-05-19T13:49:05Z)
- Minimum Volume Conformal Sets for Multivariate Regression
Conformal prediction provides a principled framework for constructing predictive sets with finite-sample validity. We propose an optimization-driven framework based on a novel loss function that directly learns minimum-conformity covering sets. Our approach optimizes over prediction sets defined by arbitrary norm balls, including single- and multi-norm formulations.
arXiv Detail & Related papers (2025-03-24T18:54:22Z)
- Optimal Transport-based Conformal Prediction
Conformal Prediction (CP) is a principled framework for uncertainty quantification in black-box learning models. We introduce a novel CP procedure that handles prediction score functions through an optimal transport lens. We then adapt our method to multi-output regression and multiclass classification.
arXiv Detail & Related papers (2025-01-31T09:48:28Z)
- Achieving $\widetilde{\mathcal{O}}(\sqrt{T})$ Regret in Average-Reward POMDPs with Known Observation Models
We tackle average-reward infinite-horizon POMDPs with an unknown transition model. We present a novel and simple estimator that overcomes this barrier.
arXiv Detail & Related papers (2025-01-30T22:29:41Z)
- Probabilistic Conformal Prediction with Approximate Conditional Validity
We develop a new method for generating prediction sets that combines the flexibility of conformal methods with an estimate of the conditional distribution.
Our method consistently outperforms existing approaches in terms of conditional coverage.
arXiv Detail & Related papers (2024-07-01T20:44:48Z)
- End-to-End Reinforcement Learning of Koopman Models for Economic Nonlinear Model Predictive Control
We present a method for reinforcement learning of Koopman surrogate models for optimal performance as part of (e)NMPC.
We show that the end-to-end trained models outperform those trained using system identification in (e)NMPC.
arXiv Detail & Related papers (2023-08-03T10:21:53Z)
- UTOPIA: Universally Trainable Optimal Prediction Intervals Aggregation
We introduce a novel strategy called Universally Trainable Optimal Predictive Intervals Aggregation (UTOPIA).
This technique excels in efficiently aggregating multiple prediction intervals while maintaining a small average width of the prediction band and ensuring coverage.
It is validated through its application to synthetic data and two real-world datasets in finance and macroeconomics.
arXiv Detail & Related papers (2023-06-28T20:38:37Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Machine learning for causal inference: on the use of cross-fit estimators
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE)
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences of its use.