Calibrated Uncertainty Quantification for Operator Learning via
Conformal Prediction
- URL: http://arxiv.org/abs/2402.01960v2
- Date: Tue, 6 Feb 2024 04:34:47 GMT
- Title: Calibrated Uncertainty Quantification for Operator Learning via
Conformal Prediction
- Authors: Ziqi Ma, Kamyar Azizzadenesheli, Anima Anandkumar
- Abstract summary: We propose a risk-controlling quantile neural operator, a distribution-free, finite-sample functional calibration conformal prediction method.
We provide a theoretical calibration guarantee on the coverage rate, defined as the expected percentage of points on the function domain whose true value lies within the predicted uncertainty ball.
Empirical results on a 2D Darcy flow and a 3D car surface pressure prediction task validate our theoretical results.
- Score: 95.75771195913046
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Operator learning has been increasingly adopted in scientific and engineering
applications, many of which require calibrated uncertainty quantification.
Since the output of operator learning is a continuous function, quantifying
uncertainty simultaneously at all points in the domain is challenging. Current
methods either consider calibration at a single point or over a single scalar
function, or make strong assumptions such as Gaussianity. We propose a risk-controlling
quantile neural operator, a distribution-free, finite-sample functional
calibration conformal prediction method. We provide a theoretical calibration
guarantee on the coverage rate, defined as the expected percentage of points on
the function domain whose true value lies within the predicted uncertainty
ball. Empirical results on a 2D Darcy flow and a 3D car surface pressure
prediction task validate our theoretical results, demonstrating calibrated
coverage and efficient uncertainty bands outperforming baseline methods. In
particular, on the 3D problem, our method is the only one that meets the target
calibration percentage (percentage of test samples for which the uncertainty
estimates are calibrated) of 98%.
Related papers
- Optimizing Calibration by Gaining Aware of Prediction Correctness [30.619608580138802]
Cross-Entropy (CE) loss is widely used for calibrator training, which pushes the model to increase its confidence in the ground-truth class.
We propose a new post-hoc calibration objective derived from the aim of calibration.
arXiv Detail & Related papers (2024-04-19T17:25:43Z)
- Consistent and Asymptotically Unbiased Estimation of Proper Calibration Errors [23.819464242327257]
We propose a method that allows consistent estimation of all proper calibration errors and refinement terms.
We prove the relation between refinement and f-divergences, which implies information monotonicity in neural networks.
Our experiments validate the claimed properties of the proposed estimator and suggest that the selection of a post-hoc calibration method should be determined by the particular calibration error of interest.
arXiv Detail & Related papers (2023-12-14T01:20:08Z)
- Calibration by Distribution Matching: Trainable Kernel Calibration Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, and enforce accurate loss estimation and no-regret decisions.
arXiv Detail & Related papers (2023-10-31T06:19:40Z)
- Sharp Calibrated Gaussian Processes [58.94710279601622]
State-of-the-art approaches for designing calibrated models rely on inflating the Gaussian process posterior variance.
We present a calibration approach that generates predictive quantiles using a computation inspired by the vanilla Gaussian process posterior variance.
Our approach is shown to yield a calibrated model under reasonable assumptions.
arXiv Detail & Related papers (2023-02-23T12:17:36Z)
- DBCal: Density Based Calibration of classifier predictions for uncertainty quantification [0.0]
We present a technique that quantifies the uncertainty of predictions from a machine learning method.
We prove that our method provides an accurate estimate of the probability that the outputs of two neural networks are correct.
arXiv Detail & Related papers (2022-04-01T01:03:41Z)
- T-Cal: An optimal test for the calibration of predictive models [49.11538724574202]
We consider detecting mis-calibration of predictive models using a finite validation dataset as a hypothesis testing problem.
Detecting mis-calibration is only possible when the conditional probabilities of the classes are sufficiently smooth functions of the predictions.
We propose T-Cal, a minimax test for calibration based on a de-biased plug-in estimator of the $\ell_2$-Expected Calibration Error (ECE); a minimal binned-ECE sketch appears after this list.
arXiv Detail & Related papers (2022-03-03T16:58:54Z)
- Parameterized Temperature Scaling for Boosting the Expressive Power in Post-Hoc Uncertainty Calibration [57.568461777747515]
We introduce a novel calibration method, Parameterized Temperature Scaling (PTS); a sketch of the general idea appears after this list.
We demonstrate that the performance of accuracy-preserving state-of-the-art post-hoc calibrators is limited by their intrinsic expressive power.
We show with extensive experiments that our novel accuracy-preserving approach consistently outperforms existing algorithms across a large number of model architectures, datasets and metrics.
arXiv Detail & Related papers (2021-02-24T10:18:30Z)
- Improving model calibration with accuracy versus uncertainty optimization [17.056768055368384]
A well-calibrated model should be accurate when it is certain about its prediction and indicate high uncertainty when it is likely to be inaccurate.
We propose an optimization method that leverages the relationship between accuracy and uncertainty as an anchor for uncertainty calibration.
We demonstrate our approach with mean-field variational inference and compare with state-of-the-art methods.
arXiv Detail & Related papers (2020-12-14T20:19:21Z)
- Unsupervised Calibration under Covariate Shift [92.02278658443166]
We introduce the problem of calibration under domain shift and propose an importance sampling based approach to address it.
We evaluate and discuss the efficacy of our method on both real-world datasets and synthetic datasets.
arXiv Detail & Related papers (2020-06-29T21:50:07Z)
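For reference, the ECE named in the T-Cal entry is most often estimated with the naive binned plug-in formula below. This is a generic sketch of that standard estimator, the quantity T-Cal de-biases and builds a test around, not the T-Cal test itself.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Binned plug-in estimate of the Expected Calibration Error (ECE):
    the frequency-weighted average gap between mean confidence and
    empirical accuracy within equal-width confidence bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += (mask.sum() / n) * gap
    return ece
```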
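The Parameterized Temperature Scaling entry generalizes single-scalar temperature scaling by making the temperature input-dependent. The sketch below shows that general shape under the assumption that a small MLP maps the logits to a positive per-sample temperature; the architecture is illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class ParameterizedTemperature(nn.Module):
    """Sketch: predict a per-sample temperature from the logits and rescale
    them, generalizing single-scalar temperature scaling.  The two-layer MLP
    is an assumption for illustration, not the paper's architecture."""
    def __init__(self, num_classes, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Softplus(),  # keep the temperature positive
        )

    def forward(self, logits):
        t = self.net(logits) + 1e-3  # per-sample temperature, shape (B, 1)
        return logits / t            # rescaled logits for calibrated softmax
```

Since dividing logits by a positive per-sample scalar never changes the argmax, a calibrator of this form is accuracy-preserving, consistent with the claim in the entry above.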