Adjusting Regression Models for Conditional Uncertainty Calibration
- URL: http://arxiv.org/abs/2409.17466v1
- Date: Thu, 26 Sep 2024 01:55:45 GMT
- Title: Adjusting Regression Models for Conditional Uncertainty Calibration
- Authors: Ruijiang Gao, Mingzhang Yin, James McInerney, Nathan Kallus
- Abstract summary: We propose a novel algorithm to train a regression function to improve the conditional coverage after applying the split conformal prediction procedure.
We establish an upper bound for the miscoverage gap between the conditional coverage and the nominal coverage rate and propose an end-to-end algorithm to control this upper bound.
- Score: 46.69079637538012
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Conformal Prediction methods have finite-sample distribution-free marginal
coverage guarantees. However, they generally do not offer conditional coverage
guarantees, which can be important for high-stakes decisions. In this paper, we
propose a novel algorithm to train a regression function to improve the
conditional coverage after applying the split conformal prediction procedure.
We establish an upper bound for the miscoverage gap between the conditional
coverage and the nominal coverage rate and propose an end-to-end algorithm to
control this upper bound. We demonstrate the efficacy of our method empirically
on synthetic and real-world datasets.
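The method builds on the split conformal prediction procedure named in the abstract. As context, here is a minimal sketch of plain split conformal regression with absolute-residual scores (assuming a fitted regressor `model` with a `predict` method; the paper's contribution, training the regressor itself to control the conditional miscoverage bound, is not reproduced here):

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, x_test, alpha=0.1):
    """Plain split conformal prediction with absolute-residual scores.

    Returns an interval [mu(x) - q, mu(x) + q] whose marginal coverage
    is at least 1 - alpha; nothing is guaranteed conditionally on x.
    `model` is a hypothetical fitted regressor, not the paper's method.
    """
    # Nonconformity scores on the held-out calibration split.
    scores = np.abs(y_cal - model.predict(X_cal))
    n = len(scores)
    # Finite-sample-corrected quantile level ceil((n+1)(1-alpha))/n.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    mu = model.predict(np.atleast_2d(x_test))[0]
    return mu - q, mu + q
```

The calibrated half-width q is a single global constant, which is exactly why conditional coverage can fail in regions where residuals are atypically large; the paper's algorithm targets that gap.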
Related papers
- Calibrated Probabilistic Forecasts for Arbitrary Sequences [58.54729945445505]
Real-world data streams can change unpredictably due to distribution shifts, feedback loops and adversarial actors.
We present a forecasting framework ensuring valid uncertainty estimates regardless of how data evolves.
arXiv Detail & Related papers (2024-09-27T21:46:42Z) - Probabilistic Conformal Prediction with Approximate Conditional Validity [81.30551968980143]
- Probabilistic Conformal Prediction with Approximate Conditional Validity [81.30551968980143]
We develop a new method for generating prediction sets that combines the flexibility of conformal methods with an estimate of the conditional distribution.
Our method consistently outperforms existing approaches in terms of conditional coverage.
arXiv Detail & Related papers (2024-07-01T20:44:48Z) - Conformal Prediction with Learned Features [22.733758606168873]
- Conformal Prediction with Learned Features [22.733758606168873]
We propose Partition Learning Conformal Prediction (PLCP) to improve the conditional validity of prediction sets.
We implement PLCP efficiently with alternating gradient descent, utilizing off-the-shelf machine learning models.
Our experimental results over four real-world and synthetic datasets show the superior performance of PLCP.
arXiv Detail & Related papers (2024-04-26T15:43:06Z) - Split Localized Conformal Prediction [20.44976410408424]
- Split Localized Conformal Prediction [20.44976410408424]
We propose a modified non-conformity score by leveraging a local approximation of the conditional distribution.
The modified score inherits the spirit of split conformal methods, which are simple and efficient compared with full conformal methods.
arXiv Detail & Related papers (2022-06-27T07:53:38Z) - Conformal Off-Policy Prediction in Contextual Bandits [54.67508891852636]
- Conformal Off-Policy Prediction in Contextual Bandits [54.67508891852636]
Conformal off-policy prediction can output reliable predictive intervals for the outcome under a new target policy.
We provide theoretical finite-sample guarantees without making any additional assumptions beyond the standard contextual bandit setup.
arXiv Detail & Related papers (2022-06-09T10:39:33Z) - Approximate Conditional Coverage via Neural Model Approximations [0.030458514384586396]
- Approximate Conditional Coverage via Neural Model Approximations [0.030458514384586396]
We analyze a data-driven procedure for obtaining empirically reliable approximate conditional coverage.
We demonstrate the potential for substantial (and otherwise unknowable) under-coverage of split-conformal alternatives that offer only marginal coverage guarantees.
arXiv Detail & Related papers (2022-05-28T02:59:05Z) - Risk Minimization from Adaptively Collected Data: Guarantees for
Supervised and Policy Learning [57.88785630755165]
Empirical risk minimization (ERM) is the workhorse of machine learning, but its model-agnostic guarantees can fail when we use adaptively collected data.
We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class.
For policy learning, we provide rate-optimal regret guarantees that close an open gap in the existing literature whenever exploration decays to zero.
arXiv Detail & Related papers (2021-06-03T09:50:13Z) - Privacy Preserving Recalibration under Domain Shift [119.21243107946555]
- Privacy Preserving Recalibration under Domain Shift [119.21243107946555]
We introduce a framework that abstracts out the properties of recalibration problems under differential privacy constraints.
We also design a novel recalibration algorithm, accuracy temperature scaling, that outperforms prior work on private datasets.
arXiv Detail & Related papers (2020-08-21T18:43:37Z)