Split Conformal Prediction in the Function Space with Neural Operators
- URL: http://arxiv.org/abs/2509.04623v1
- Date: Thu, 04 Sep 2025 19:12:04 GMT
- Title: Split Conformal Prediction in the Function Space with Neural Operators
- Authors: David Millard, Lars Lindemann, Ali Baheri
- Abstract summary: Conformal prediction offers finite-sample guarantees in finite-dimensional spaces. It does not directly extend to function-valued outputs. This work extends split conformal prediction to function spaces following a two-step method.
- Score: 7.619100818009453
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Uncertainty quantification for neural operators remains an open problem in the infinite-dimensional setting due to the lack of finite-sample coverage guarantees over functional outputs. While conformal prediction offers finite-sample guarantees in finite-dimensional spaces, it does not directly extend to function-valued outputs. Existing approaches (Gaussian processes, Bayesian neural networks, and quantile-based operators) require strong distributional assumptions or yield conservative coverage. This work extends split conformal prediction to function spaces following a two-step method. We first establish finite-sample coverage guarantees in a finite-dimensional space using a discretization map in the output function space. Then these guarantees are lifted to the function space by considering the asymptotic convergence as the discretization is refined. To characterize the effect of resolution, we decompose the conformal radius into discretization, calibration, and misspecification components. This decomposition motivates a regression-based correction to transfer calibration across resolutions. Additionally, we propose two diagnostic metrics (conformal ensemble score and internal agreement) to quantify forecast degradation in autoregressive settings. Empirical results show that our method maintains calibrated coverage with less variation under resolution shifts and achieves better coverage in super-resolution tasks.
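The calibration step of the two-step method lends itself to a short sketch. Below is a minimal illustration under assumed choices, not the authors' released code: it uses sup-norm nonconformity scores on a fixed discretization grid, and the function name `split_conformal_radius` is hypothetical.

```python
import numpy as np

def split_conformal_radius(preds, targets, alpha=0.1):
    """Split conformal calibration for function outputs on a shared grid.

    preds, targets: (n_cal, n_grid) arrays of operator predictions and true
    functions evaluated at the grid points (a stand-in for the paper's
    discretization map). Returns a radius r so that the band
    [prediction - r, prediction + r] covers a fresh function's grid values
    with probability at least 1 - alpha.
    """
    n = len(preds)
    # Sup-norm nonconformity score for each calibration pair.
    scores = np.max(np.abs(preds - targets), axis=1)
    # Finite-sample-valid quantile level for split conformal prediction.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

# Toy usage: calibrate on synthetic prediction/target pairs.
rng = np.random.default_rng(0)
targets = rng.normal(size=(200, 64))
preds = targets + 0.1 * rng.normal(size=(200, 64))
print(split_conformal_radius(preds, targets, alpha=0.1))
```

Inflating this radius by the discretization and misspecification components, as in the decomposition above, is what lifts the finite-dimensional guarantee toward the function-space statement.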
Related papers
- Coverage Guarantees for Pseudo-Calibrated Conformal Prediction under Distribution Shift [1.5861469511290378]
Conformal prediction offers marginal coverage guarantees, but these can fail if the data distribution shifts. We analyze the use of pseudo-calibration as a tool to counter this performance loss. We propose a source-tuned pseudo-calibration algorithm that interpolates between hard pseudo-labels and randomized labels.
arXiv Detail & Related papers (2026-02-16T16:48:39Z)
- A Convex Loss Function for Set Prediction with Optimal Trade-offs Between Size and Conditional Coverage [0.554780083433538]
We consider supervised learning problems in which set predictions provide explicit uncertainty estimates. We propose a convex loss function for nondecreasing subset-valued functions obtained as level sets of a real-valued function. We show how to naturally obtain sets with optimized conditional coverage.
arXiv Detail & Related papers (2025-12-22T08:41:31Z)
- Beyond Uncertainty Sets: Leveraging Optimal Transport to Extend Conformal Predictive Distribution to Multivariate Settings [0.14504054468850666]
Conformal prediction (CP) constructs uncertainty sets for model outputs with finite-sample coverage guarantees. We show that optimal assignment is piecewise-constant across a fixed polyhedral partition of the score space. This allows us to characterize the entire prediction set tractably, and provides the machinery to address a deeper limitation of prediction sets.
arXiv Detail & Related papers (2025-11-19T05:59:01Z)
- Revisiting Zeroth-Order Optimization: Minimum-Variance Two-Point Estimators and Directionally Aligned Perturbations [57.179679246370114]
We identify the distribution of random perturbations that minimizes the estimator's variance as the perturbation stepsize tends to zero. Our findings reveal that such desired perturbations can align directionally with the true gradient, instead of maintaining a fixed length. A minimal sketch of the underlying two-point estimator is given below.
arXiv Detail & Related papers (2025-10-22T19:06:39Z)
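The sketch referenced above shows the classic two-point estimator the paper analyzes. Drawing the perturbation uniformly from the unit sphere is a standard baseline, not the variance-minimizing choice the paper identifies; all names are illustrative.

```python
import numpy as np

def two_point_estimate(f, x, mu=1e-5, rng=None):
    """One-sample two-point zeroth-order gradient estimate at x.

    g = d * (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u with u uniform on the
    unit sphere; as mu -> 0 this is unbiased for the gradient, since the
    factor d corrects for E[u u^T] = I/d.
    """
    rng = rng or np.random.default_rng()
    d = x.size
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)
    return d * (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

# Averaging many estimates recovers the gradient 2*x of f(x) = |x|^2.
f = lambda z: np.dot(z, z)
x = np.array([1.0, -2.0, 0.5])
rng = np.random.default_rng(1)
print(np.mean([two_point_estimate(f, x, rng=rng) for _ in range(20000)], axis=0))
```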
- Guided Diffusion Sampling on Function Spaces with Applications to PDEs [112.09025802445329]
We propose a general framework for conditional sampling in PDE-based inverse problems. This is accomplished by a function-space diffusion model and plug-and-play guidance for conditioning. Our method achieves an average 32% accuracy improvement over state-of-the-art fixed-resolution diffusion baselines.
arXiv Detail & Related papers (2025-05-22T17:58:12Z)
- Multivariate Latent Recalibration for Conditional Normalizing Flows [2.3020018305241337]
Latent recalibration (LR) learns a transformation of the latent space with finite-sample bounds on latent calibration. LR consistently reduces the latent calibration error and the negative log-likelihood of the recalibrated models.
arXiv Detail & Related papers (2025-05-22T13:08:20Z)
- Learning Operators by Regularized Stochastic Gradient Descent with Operator-valued Kernels [5.663076715852465]
We investigate regularized stochastic gradient descent (SGD) algorithms for estimating nonlinear operators from a Polish space to a separable Hilbert space. We present a new technique for deriving bounds with high probability for general SGD schemes. A scalar-kernel sketch of the basic iteration is given below.
arXiv Detail & Related papers (2025-04-25T08:57:38Z)
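The scalar-kernel sketch referenced above: the operator-valued setting replaces scalar kernel evaluations with operator-valued ones, but the iteration has the same shape. The step-size schedule and names are illustrative assumptions.

```python
import numpy as np

def kernel_sgd(X, y, kernel, lam=1e-3, eta0=0.5):
    """Regularized kernel SGD: the iterate f_t = sum_i a_i k(x_i, .) is
    updated after each sample (x_t, y_t) via
    f <- (1 - eta*lam) * f - eta * (f(x_t) - y_t) * k(x_t, .)."""
    coef = np.zeros(len(y))
    for t, (x_t, y_t) in enumerate(zip(X, y)):
        eta = eta0 / np.sqrt(t + 1)                      # decaying step size
        pred = sum(coef[i] * kernel(X[i], x_t) for i in range(t))
        coef[:t] *= 1.0 - eta * lam                      # shrinkage from the ridge term
        coef[t] = -eta * (pred - y_t)                    # new expansion coefficient
    return coef

# Toy usage with a Gaussian kernel on 1-D data.
rng = np.random.default_rng(2)
rbf = lambda a, b: np.exp(-0.5 * (a - b) ** 2)
X = rng.uniform(-2, 2, 300)
y = np.sin(X) + 0.05 * rng.normal(size=300)
coef = kernel_sgd(X, y, rbf)
print(sum(c * rbf(xi, 0.5) for c, xi in zip(coef, X)), np.sin(0.5))
```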
- Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models [57.52124921268249]
We propose a Trust-Region Sequential Quadratic Programming method to find both first- and second-order stationary points.
To converge to first-order stationary points, our method computes a gradient step in each iteration, defined by minimizing a quadratic approximation of the objective subject to a trust-region constraint.
To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix. A rough sketch of such an eigen step is given below.
arXiv Detail & Related papers (2024-09-24T04:39:47Z)
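The eigen-step sketch referenced above; the trust-region scaling is simplified and all names are illustrative, so this is an instance of the idea rather than the paper's exact rule.

```python
import numpy as np
from scipy.linalg import eigh, null_space

def eigen_step(H, J, radius=1.0):
    """Negative-curvature step: take the most negative eigenpair of the
    reduced Hessian Z^T H Z, where Z spans null(J) (the linearized feasible
    directions), and move along it to the trust-region boundary."""
    Z = null_space(J)                 # orthonormal basis of the constraint null space
    lam, U = eigh(Z.T @ H @ Z)        # eigenvalues in ascending order
    if lam[0] >= 0:                   # no negative curvature to exploit
        return np.zeros(H.shape[0])
    d = Z @ U[:, 0]                   # lift the eigenvector back to full space
    return radius * d / np.linalg.norm(d)

# Toy usage: indefinite Hessian, one linear constraint x0 + x1 + x2 = 0.
H = np.diag([1.0, -2.0, 3.0])
J = np.array([[1.0, 1.0, 1.0]])
s = eigen_step(H, J)
print(s, s @ H @ s)                   # direction with negative curvature s^T H s < 0
```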
- Sampling from Gaussian Process Posteriors using Stochastic Gradient Descent [43.097493761380186]
Stochastic gradient algorithms are an efficient method of approximately solving linear systems.
We show that stochastic gradient descent produces accurate predictions, even in cases where it does not converge quickly to the optimum.
Experimentally, stochastic gradient descent achieves state-of-the-art performance on sufficiently large-scale or ill-conditioned regression tasks. A toy version of the underlying linear solve is sketched below.
arXiv Detail & Related papers (2023-06-20T15:07:37Z)
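A deterministic toy version of the linear solve referenced above: the posterior-mean weights satisfy (K + sigma^2 I) v = y, which plain gradient descent can approximate. The paper's method uses stochastic gradients and also draws posterior samples, both omitted here; names are illustrative.

```python
import numpy as np

def gp_mean_weights_gd(K, y, noise=0.1, steps=2000):
    """Approximate v = (K + noise*I)^{-1} y by gradient descent on the
    quadratic 0.5 * v^T A v - y^T v with A = K + noise*I."""
    A = K + noise * np.eye(len(y))
    lr = 1.0 / np.linalg.eigvalsh(A)[-1]     # step size from the largest eigenvalue
    v = np.zeros_like(y)
    for _ in range(steps):
        v -= lr * (A @ v - y)                # gradient of the quadratic
    return v

# Toy usage: RBF kernel on 1-D inputs, posterior mean at a test point.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 100)
y = np.sin(X) + 0.1 * rng.normal(size=100)
rbf = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)
v = gp_mean_weights_gd(rbf(X, X), y)
print(rbf(np.array([0.7]), X) @ v, np.sin(0.7))   # GP mean vs. ground truth
```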
- Fully Stochastic Trust-Region Sequential Quadratic Programming for Equality-Constrained Optimization Problems [62.83783246648714]
We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints.
The algorithm adaptively selects the trust-region radius and, compared to the existing line-search StoSQP schemes, allows us to utilize indefinite Hessian matrices.
arXiv Detail & Related papers (2022-11-29T05:52:17Z)
- Statistical Optimality of Divide and Conquer Kernel-based Functional Linear Regression [1.7227952883644062]
This paper studies the convergence performance of divide-and-conquer estimators in the scenario where the target function does not reside in the underlying kernel space.
As a decomposition-based scalable approach, the divide-and-conquer estimators of functional linear regression can substantially reduce the algorithmic complexities in time and memory. A minimal sketch of the divide-and-conquer scheme is given below.
arXiv Detail & Related papers (2022-11-20T12:29:06Z)
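The divide-and-conquer scheme referenced above, in its simplest kernel-ridge form with scalar inputs rather than the paper's functional covariates; the block count and regularization are illustrative.

```python
import numpy as np

def dac_krr_predict(X, y, x_test, kernel, lam=1e-2, n_blocks=4):
    """Divide-and-conquer kernel ridge regression: fit an independent KRR
    estimator on each block of the data and average their predictions, so no
    single n-by-n kernel matrix is ever formed."""
    preds = []
    for Xb, yb in zip(np.array_split(X, n_blocks), np.array_split(y, n_blocks)):
        nb = len(yb)
        alpha = np.linalg.solve(kernel(Xb, Xb) + lam * nb * np.eye(nb), yb)
        preds.append(kernel(x_test, Xb) @ alpha)   # block-local prediction
    return np.mean(preds, axis=0)                  # average over blocks

# Toy usage on 1-D data.
rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, 400)
y = np.sin(X) + 0.1 * rng.normal(size=400)
rbf = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)
print(dac_krr_predict(X, y, np.array([0.7]), rbf), np.sin(0.7))
```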
- Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high-dimensional robust statistics problems.
Our work extends these to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
- Sparse Representations of Positive Functions via First and Second-Order Pseudo-Mirror Descent [15.340540198612823]
We consider expected risk minimization problems when the range of the estimator is required to be nonnegative.
We develop first- and second-order variants of stochastic approximation mirror descent employing pseudo-gradients.
Experiments demonstrate favorable performance on inhomogeneous Poisson Process intensity estimation in practice.
arXiv Detail & Related papers (2020-11-13T21:54:28Z)
- Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks. A toy version of the sample-then-interpolate idea is sketched below.
arXiv Detail & Related papers (2020-03-19T15:36:31Z)
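The toy version of the sample-then-interpolate idea referenced above, applied to a single feature map. The actual method learns where to sample inside a CNN; here the mask is uniform at random and all names are illustrative.

```python
import numpy as np
from scipy.interpolate import NearestNDInterpolator

def sparse_feature_map(feature_fn, height, width, keep=0.25, rng=None):
    """Evaluate an expensive per-pixel feature at a random subset of
    locations, then densely reconstruct the map by nearest-neighbor
    interpolation over the remaining pixels."""
    rng = rng or np.random.default_rng(0)
    coords = np.stack(np.mgrid[0:height, 0:width], axis=-1).reshape(-1, 2)
    idx = rng.choice(len(coords), size=int(keep * len(coords)), replace=False)
    vals = np.array([feature_fn(r, c) for r, c in coords[idx]])
    interp = NearestNDInterpolator(coords[idx], vals)
    return interp(coords).reshape(height, width)

# Toy usage: a smooth "feature" computed at only 25% of the pixels.
fmap = sparse_feature_map(lambda r, c: np.sin(0.1 * r) * np.cos(0.1 * c), 64, 64)
print(fmap.shape, fmap[32, 32])
```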