Cost-Sensitive Conformal Training with Provably Controllable Learning Bounds
- URL: http://arxiv.org/abs/2511.17861v1
- Date: Sat, 22 Nov 2025 01:11:44 GMT
- Title: Cost-Sensitive Conformal Training with Provably Controllable Learning Bounds
- Authors: Xuesong Jia, Yuanjie Shi, Ziquan Liu, Yi Xu, Yan Yan
- Abstract summary: Conformal prediction is a framework to quantify the predictive uncertainty of machine learning models. To align the uncertainty measured by CP, conformal training methods minimize the size of the prediction sets. We propose a simple cost-sensitive conformal training algorithm that does not rely on the indicator approximation mechanism.
- Score: 21.86960662161151
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conformal prediction (CP) is a general framework for quantifying the predictive uncertainty of machine learning models by producing set-valued predictions that include the true label with a valid probability. To align the uncertainty measured by CP with model training, conformal training methods minimize the size of the prediction sets. A typical approach replaces the indicator function with a smooth surrogate, usually a sigmoid or Gaussian error function. However, these surrogates do not admit a uniform error bound with respect to the indicator function, leading to uncontrollable learning bounds. In this paper, we propose a simple cost-sensitive conformal training algorithm that does not rely on the indicator approximation mechanism. Specifically, we theoretically show that the expected size of the prediction sets is upper bounded by the expected rank of the true labels. To this end, we develop a rank weighting strategy that assigns each data sample a weight based on the rank of its true label. Our analysis provably demonstrates the tightness between the proposed weighted objective and the expected size of conformal prediction sets. Extensive experiments verify the validity of our theoretical insights and show superior empirical performance over other conformal training methods in terms of predictive efficiency, with a 21.38% reduction in average prediction set size.
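The rank-weighting idea described in the abstract can be illustrated with a short sketch. Below is a minimal PyTorch-style example, assuming a standard multi-class classifier with softmax scores; the function name `rank_weighted_loss` and the choice of a rank-weighted cross-entropy are illustrative assumptions, not the paper's exact objective or conformity score.

```python
import torch
import torch.nn.functional as F

def rank_weighted_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Cost-sensitive training loss that weights each sample by the rank of its
    true label among the model's class scores. Samples whose true label ranks
    poorly (and would therefore inflate the conformal prediction set) receive a
    larger weight. This is a sketch of the rank-weighting idea, not the paper's
    exact objective."""
    probs = F.softmax(logits, dim=1)                      # (batch, num_classes)
    true_scores = probs.gather(1, labels.unsqueeze(1))    # score of the true label
    # Rank of the true label: 1 + number of classes scored strictly higher.
    ranks = 1.0 + (probs > true_scores).sum(dim=1).float()
    # Weight the per-sample cross-entropy by the (detached) rank.
    ce = F.cross_entropy(logits, labels, reduction="none")
    return (ranks.detach() * ce).mean()

# Usage with any classifier producing logits:
# loss = rank_weighted_loss(model(x), y); loss.backward()
```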
Related papers
- Another Fit Bites the Dust: Conformal Prediction as a Calibration Standard for Machine Learning in High-Energy Physics [0.0]
Conformal prediction provides a distribution-free framework for calibrating arbitrary predictive models. We show that a single conformal formalism can be applied across regression, binary and multi-class classification, anomaly detection, and generative modelling. We argue that conformal calibration should be adopted as a standard component of machine-learning pipelines in collider physics.
arXiv Detail & Related papers (2025-12-18T20:31:25Z) - Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning [53.42244686183879]
Conformal prediction provides model-agnostic and distribution-free uncertainty quantification. Yet, conformal prediction is not reliable under poisoning attacks where adversaries manipulate both training and calibration data. We propose reliable prediction sets (RPS): the first efficient method for constructing conformal prediction sets with provable reliability guarantees under poisoning.
arXiv Detail & Related papers (2024-10-13T15:37:11Z) - Beyond Conformal Predictors: Adaptive Conformal Inference with Confidence Predictors [1.3812010983144802]
This study shows that the desirable properties of Adaptive Conformal Inference (ACI) do not require the use of Conformal Predictors (CP). We empirically investigate the performance of Non-Conformal Confidence Predictors (NCCP) against CP when used with ACI on non-exchangeable data.
arXiv Detail & Related papers (2024-09-23T21:02:33Z) - On the Expected Size of Conformal Prediction Sets [24.161372736642157]
We theoretically quantify the expected size of the prediction sets under the split conformal prediction framework.
As this precise formulation usually cannot be calculated directly, we derive point estimates and high-probability interval bounds.
We corroborate the efficacy of our results with experiments on real-world datasets for both regression and classification problems (a minimal sketch of the split conformal procedure appears after this list).
arXiv Detail & Related papers (2023-06-12T17:22:57Z) - Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
arXiv Detail & Related papers (2023-02-23T18:57:14Z) - Predictive Inference with Feature Conformal Prediction [80.77443423828315]
We propose feature conformal prediction, which extends the scope of conformal prediction to semantic feature spaces.
From a theoretical perspective, we demonstrate that feature conformal prediction provably outperforms regular conformal prediction under mild assumptions.
Our approach could be combined with not only vanilla conformal prediction, but also other adaptive conformal prediction methods.
arXiv Detail & Related papers (2022-10-01T02:57:37Z) - Efficient and Differentiable Conformal Prediction with General Function Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximately valid population coverage and near-optimal efficiency within the class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
arXiv Detail & Related papers (2022-02-22T18:37:23Z) - Optimized conformal classification using gradient descent approximation [0.2538209532048866]
Conformal predictors allow predictions to be made with a user-defined confidence level.
We consider an approach to train the conformal predictor directly with maximum predictive efficiency.
We test the method on several real world data sets and find that the method is promising.
arXiv Detail & Related papers (2021-05-24T13:14:41Z) - Distribution-Free, Risk-Controlling Prediction Sets [112.9186453405701]
We show how to generate set-valued predictions from a black-box predictor that control the expected loss on future test points at a user-specified level.
Our approach provides explicit finite-sample guarantees for any dataset by using a holdout set to calibrate the size of the prediction sets.
arXiv Detail & Related papers (2021-01-07T18:59:33Z) - AutoCP: Automated Pipelines for Accurate Prediction Intervals [84.16181066107984]
This paper proposes an AutoML framework called Automatic Machine Learning for Conformal Prediction (AutoCP)
Unlike the familiar AutoML frameworks that attempt to select the best prediction model, AutoCP constructs prediction intervals that achieve the user-specified target coverage rate.
We tested AutoCP on a variety of datasets and found that it significantly outperforms benchmark algorithms.
arXiv Detail & Related papers (2020-06-24T23:13:11Z) - Training conformal predictors [0.0]
Efficiency criteria for conformal prediction, such as observed fuzziness, are commonly used to evaluate the performance of given conformal predictors.
Here, we investigate whether it is possible to exploit such criteria to learn classifiers.
arXiv Detail & Related papers (2020-05-14T14:47:30Z)
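Several entries above, including the main paper and "On the Expected Size of Conformal Prediction Sets", measure predictive efficiency as the average size of the prediction sets produced by split conformal prediction. The following is a minimal NumPy sketch of that standard procedure, assuming softmax probabilities and the common 1 - p_y nonconformity score; the specific scores and corrections used in each paper may differ.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction with the 1 - p_y nonconformity score.
    cal_probs:  (n_cal, C) softmax probabilities on the calibration split.
    cal_labels: (n_cal,)   true labels on the calibration split.
    test_probs: (n_test, C) softmax probabilities on the test split.
    Returns a boolean membership matrix of shape (n_test, C)."""
    n = len(cal_labels)
    # Nonconformity score: one minus the probability of the true label.
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(cal_scores, min(q_level, 1.0), method="higher")
    # A class enters the prediction set if its score does not exceed the threshold.
    return (1.0 - test_probs) <= qhat

# Average prediction set size (the efficiency metric discussed above):
# avg_size = split_conformal_sets(cal_p, cal_y, test_p).sum(axis=1).mean()
```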