Conformal Loss-Controlling Prediction
- URL: http://arxiv.org/abs/2301.02424v2
- Date: Tue, 23 Jan 2024 00:48:54 GMT
- Title: Conformal Loss-Controlling Prediction
- Authors: Di Wang, Ping Wang, Zhong Ji, Xiaojun Yang, Hongyue Li
- Abstract summary: Conformal prediction is a learning framework that controls the coverage of prediction sets.
This work proposes a learning framework named conformal loss-controlling prediction, which extends conformal prediction to situations where the value of a loss function needs to be controlled.
- Score: 23.218535051437588
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conformal prediction is a learning framework that controls the
coverage of prediction sets and can be built on top of any learning algorithm
for point prediction. This work proposes a learning framework named conformal
loss-controlling prediction, which extends conformal prediction to situations
where the value of a loss function needs to be controlled. Unlike existing work
on risk-controlling prediction sets and conformal risk control, which aims to
control the expected value of a loss function, the approach proposed here
controls the loss incurred on any individual test object, extending conformal
prediction from miscoverage loss to general loss functions. The controlling
guarantee is proved in the finite-sample case under the assumption of
exchangeable data, and the framework is tested empirically on classification
with a class-varying loss and on statistical postprocessing of numerical
weather forecasts, which are cast as point-wise classification and point-wise
regression problems. Both the theoretical analysis and the experimental results
confirm the effectiveness of the proposed loss-controlling approach.
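To make the calibration step concrete, the following is a minimal sketch of how such a loss-controlling parameter could be tuned, assuming a family of predictors indexed by a scalar lambda whose loss is nonincreasing in lambda (e.g., larger prediction sets for larger lambda); the function name, array layout, and grid search are illustrative assumptions, not the paper's code.

```python
import numpy as np

def calibrate_lambda(cal_losses, lambdas, alpha, delta):
    """Tune lambda on a calibration set so that, under exchangeability,
    a fresh test point's loss stays <= alpha with probability >= 1 - delta.

    cal_losses: (n, m) array; cal_losses[i, j] is the loss of the candidate
    predictor indexed by lambdas[j] on calibration example i, assumed
    nonincreasing in j (lambdas sorted ascending).
    """
    n = cal_losses.shape[0]
    # Per example, the first (smallest) lambda whose loss is already <= alpha;
    # if none qualifies, fall back conservatively to the largest grid value.
    ok = cal_losses <= alpha
    first_ok = np.where(ok.any(axis=1), ok.argmax(axis=1), len(lambdas) - 1)
    lambda_i = np.asarray(lambdas)[first_ok]
    # Finite-sample conformal quantile of the per-example lambdas.
    k = min(int(np.ceil((n + 1) * (1 - delta))), n)
    return np.sort(lambda_i)[k - 1]
```

A larger returned lambda means a more conservative predictor; the (n + 1) correction is what turns the empirical quantile into a finite-sample guarantee.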
Related papers
- Calibrated Probabilistic Forecasts for Arbitrary Sequences [58.54729945445505]
Real-world data streams can change unpredictably due to distribution shifts, feedback loops and adversarial actors.
We present a forecasting framework ensuring valid uncertainty estimates regardless of how data evolves.
arXiv Detail & Related papers (2024-09-27T21:46:42Z)
- Conformal Prediction with Missing Values [19.18178194789968]
We first show that the marginal coverage guarantee of conformal prediction holds on imputed data for any missingness distribution.
We then show that a universally consistent quantile regression algorithm trained on the imputed data is Bayes optimal for the pinball risk.
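As a rough illustration of the impute-then-conformalize recipe these results justify, the sketch below runs split conformal regression on imputed features; `impute` and `predict` are stand-ins for any fitted imputation and point-prediction callables, not the paper's API.

```python
import numpy as np

def conformal_interval_after_imputation(impute, predict, X_cal, y_cal,
                                        x_test, alpha=0.1):
    """Split conformal on imputed data: the coverage argument is that the
    marginal guarantee survives any fixed imputation rule, so NaNs can be
    filled before scoring. `impute` and `predict` are illustrative callables.
    """
    scores = np.abs(y_cal - predict(impute(X_cal)))   # absolute residuals
    n = len(scores)
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    q = np.sort(scores)[k - 1]                        # conformal quantile
    y_hat = predict(impute(np.atleast_2d(x_test)))[0]
    return y_hat - q, y_hat + q
```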
arXiv Detail & Related papers (2023-06-05T09:28:03Z)
- Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
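One simple way to picture the idea is a normalized nonconformity score in which the auxiliary model's pretext-task error acts as a per-example difficulty estimate; the paper instead feeds that error in as an additional feature of a learned score estimator, so the plain division below is only a hedged stand-in.

```python
import numpy as np

def pretext_normalized_scores(y, y_hat, pretext_error, eps=1e-6):
    """Residuals scaled by the self-supervised (pretext-task) error of an
    auxiliary model: where the pretext model struggles, scores are damped
    less and the resulting conformal intervals widen. Illustrative only.
    """
    return np.abs(y - y_hat) / (pretext_error + eps)
```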
arXiv Detail & Related papers (2023-02-23T18:57:14Z)
- Loss-Controlling Calibration for Predictive Models [5.51361762392299]
We propose a learning framework for calibrating predictive models so that they make loss-controlling predictions for exchangeable data.
In contrast with conformal loss-controlling prediction, the predictors built by the proposed calibration approach are not limited to set predictors.
Our proposed method is applied to selective regression and high-impact weather forecasting problems.
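Since selective regression is one of the reported applications, here is a hedged sketch of calibrating an abstention threshold in this spirit; the monotone search mirrors the conformal calibration sketched above, and every name is an assumption rather than the paper's algorithm.

```python
import numpy as np

def calibrate_abstention_threshold(confidence, errors, alpha, delta):
    """Predict only when confidence >= tau, so the realized loss (the error
    when predicting, zero when abstaining) is nonincreasing in tau. Each
    calibration point contributes the smallest tau keeping its own loss
    <= alpha; the conformal quantile of these taus controls the test loss.
    """
    n = len(errors)
    # Accurate points never force abstention; inaccurate ones require
    # tau to rise just above their confidence.
    tau_i = np.where(errors <= alpha, -np.inf,
                     np.nextafter(confidence, np.inf))
    k = min(int(np.ceil((n + 1) * (1 - delta))), n)
    return np.sort(tau_i)[k - 1]
```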
arXiv Detail & Related papers (2023-01-11T09:44:55Z)
- Quantile Risk Control: A Flexible Framework for Bounding the Probability of High-Loss Predictions [11.842061466957686]
We propose a flexible framework to produce a family of bounds on quantiles of the loss distribution incurred by a predictor.
We show that a quantile is an informative way of quantifying predictive performance, and that our framework applies to a variety of quantile-based metrics.
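For a concrete baseline instance of such a bound (the paper's framework produces a whole family of more flexible ones), the classical order-statistic construction below certifies an upper confidence bound on a single loss quantile.

```python
import numpy as np
from scipy.stats import binom

def quantile_upper_bound(losses, beta, delta):
    """Distribution-free (1 - delta) upper confidence bound on the
    beta-quantile of the loss from n i.i.d. samples, via the smallest
    order statistic k with P(Binomial(n, beta) <= k - 1) >= 1 - delta.
    """
    losses = np.sort(np.asarray(losses))
    n = len(losses)
    for k in range(1, n + 1):
        if binom.cdf(k - 1, n, beta) >= 1 - delta:
            return losses[k - 1]
    raise ValueError("sample too small for the requested beta and delta")
```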
arXiv Detail & Related papers (2022-12-27T22:08:29Z)
- Conformal Risk Control [20.65019607005074]
We extend conformal prediction to control the expected value of any monotone loss function.
We also introduce extensions of the idea to distribution shift, quantile risk control, multiple and adversarial risk control, and expectations of U-statistics.
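The core recipe is compact enough to sketch: with a loss bounded by B and nonincreasing in lambda, pick the smallest lambda whose inflated empirical risk clears the target level. The grid search below stands in for the infimum, and the variable names are mine.

```python
import numpy as np

def conformal_risk_control(cal_losses, lambdas, alpha, B=1.0):
    """Smallest lambda with (n / (n + 1)) * empirical_risk + B / (n + 1)
    <= alpha, which bounds the expected test loss by alpha under
    exchangeability. cal_losses: (n, m) array, nonincreasing along the
    lambda axis; lambdas sorted ascending.
    """
    n = cal_losses.shape[0]
    adjusted = (n / (n + 1)) * cal_losses.mean(axis=0) + B / (n + 1)
    feasible = np.nonzero(adjusted <= alpha)[0]
    if feasible.size == 0:
        raise ValueError("alpha is not achievable on this calibration set")
    return np.asarray(lambdas)[feasible[0]]
```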
arXiv Detail & Related papers (2022-08-04T17:59:44Z)
- Unifying Lower Bounds on Prediction Dimension of Consistent Convex Surrogates [12.751555473216683]
Given a prediction task, understanding when one can and cannot design a consistent convex surrogate loss is an important area of machine learning research.
We unify these settings using tools from property elicitation, and give a general lower bound on prediction dimension.
Our lower bound tightens existing results in the case of discrete predictions, showing that previous calibration-based bounds can largely be recovered via property elicitation.
For continuous estimation, our lower bound resolves an open problem on estimating measures of risk and uncertainty.
arXiv Detail & Related papers (2021-02-16T15:29:05Z)
- Distribution-Free, Risk-Controlling Prediction Sets [112.9186453405701]
We show how to generate set-valued predictions from a black-box predictor that control the expected loss on future test points at a user-specified level.
Our approach provides explicit finite-sample guarantees for any dataset by using a holdout set to calibrate the size of the prediction sets.
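A minimal sketch of that holdout calibration, with Hoeffding's inequality as one concrete concentration bound (the paper also derives tighter ones); it assumes losses in [0, B] that are nonincreasing in lambda.

```python
import numpy as np

def rcps_lambda(holdout_losses, lambdas, alpha, delta, B=1.0):
    """Smallest lambda whose (1 - delta) Hoeffding upper confidence bound
    on the risk is <= alpha. holdout_losses: (n, m) array of losses on the
    holdout set, nonincreasing along the lambda axis; lambdas ascending.
    """
    n = holdout_losses.shape[0]
    ucb = holdout_losses.mean(axis=0) + B * np.sqrt(np.log(1 / delta) / (2 * n))
    feasible = np.nonzero(ucb <= alpha)[0]
    if feasible.size == 0:
        raise ValueError("no lambda can be certified at this alpha and delta")
    return np.asarray(lambdas)[feasible[0]]
```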
arXiv Detail & Related papers (2021-01-07T18:59:33Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- The Aleatoric Uncertainty Estimation Using a Separate Formulation with Virtual Residuals [51.71066839337174]
Existing methods can quantify the error in the target estimation, but they tend to underestimate it.
We propose a new separable formulation for the estimation of a signal and of its uncertainty, avoiding the effect of overfitting.
We demonstrate that the proposed method outperforms a state-of-the-art technique for signal and uncertainty estimation.
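The separable formulation can be illustrated generically: fit the uncertainty model on residuals the signal model never trained on, so overfitting cannot deflate the error estimate. This two-stage sketch is an interpretation of the summary, not the paper's virtual-residual construction, and both callables are assumptions.

```python
import numpy as np

def fit_signal_then_uncertainty(fit_signal, fit_variance,
                                X_train, y_train, X_hold, y_hold):
    """Stage 1 fits the point predictor; stage 2 fits a variance model on
    held-out squared residuals, which the signal model could not overfit.
    `fit_signal` / `fit_variance` are illustrative training callables that
    return objects exposing a `.predict` method.
    """
    signal = fit_signal(X_train, y_train)
    residuals_sq = (y_hold - signal.predict(X_hold)) ** 2
    variance = fit_variance(X_hold, residuals_sq)
    return signal, variance
```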
arXiv Detail & Related papers (2020-11-03T12:11:27Z)
- Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data but some cannot be used by the model at prediction time.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
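For flavour, a generic doubly-robust (AIPW-style) pseudo-outcome is sketched below; the paper's runtime-confounding procedure is more specific, and all names here are assumptions. Regressing these pseudo-outcomes on the features available at runtime yields a counterfactual predictor that is consistent if either nuisance model is correct.

```python
import numpy as np

def dr_pseudo_outcomes(y, a, mu1_hat, pi_hat, clip=1e-3):
    """AIPW pseudo-outcomes for the counterfactual outcome under treatment
    a = 1. mu1_hat: outcome-model predictions E[Y | X, A = 1]; pi_hat:
    propensity scores P(A = 1 | X). The inverse-propensity term corrects
    the outcome model using only the treated examples.
    """
    return mu1_hat + (a / np.clip(pi_hat, clip, None)) * (y - mu1_hat)
```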
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.