C-Adapter: Adapting Deep Classifiers for Efficient Conformal Prediction Sets
- URL: http://arxiv.org/abs/2410.09408v1
- Date: Sat, 12 Oct 2024 07:28:54 GMT
- Title: C-Adapter: Adapting Deep Classifiers for Efficient Conformal Prediction Sets
- Authors: Kangdao Liu, Hao Zeng, Jianguo Huang, Huiping Zhuang, Chi-Man Vong, Hongxin Wei
- Abstract summary: We introduce Conformal Adapter (C-Adapter) to enhance the efficiency of conformal predictors without sacrificing accuracy.
In particular, we implement the adapter as a class of intra order-preserving functions and tune it with our proposed loss.
Using C-Adapter, the model tends to produce extremely high non-conformity scores for incorrect labels.
- Score: 19.318945675529456
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conformal prediction, as an emerging uncertainty quantification technique, typically functions as post-hoc processing for the outputs of trained classifiers. To optimize the classifier for maximum predictive efficiency, Conformal Training rectifies the training objective with a regularization that minimizes the average prediction set size at a specific error rate. However, the regularization term inevitably deteriorates the classification accuracy and leads to suboptimal efficiency of conformal predictors. To address this issue, we introduce \textbf{Conformal Adapter} (C-Adapter), an adapter-based tuning method to enhance the efficiency of conformal predictors without sacrificing accuracy. In particular, we implement the adapter as a class of intra order-preserving functions and tune it with our proposed loss that maximizes the discriminability of non-conformity scores between correctly and randomly matched data-label pairs. Using C-Adapter, the model tends to produce extremely high non-conformity scores for incorrect labels, thereby enhancing the efficiency of prediction sets across different coverage rates. Extensive experiments demonstrate that C-Adapter can effectively adapt various classifiers for efficient prediction sets, as well as enhance the conformal training method.
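The abstract describes two ingredients: an intra order-preserving adapter and a loss that separates the non-conformity scores of correct and randomly matched data-label pairs. The PyTorch sketch below is a hypothetical illustration only: `CAdapter` implements one simple member of the intra order-preserving class (a shared, strictly increasing piecewise-linear map applied to every logit, which cannot change the class ranking), and `discriminability_loss` is an assumed logistic ranking surrogate for the paper's objective. All names and the parameterization are illustrative, not the authors' implementation.

```python
# Hypothetical sketch, not the authors' code: a shared strictly increasing
# piecewise-linear map is one simple member of the intra order-preserving
# class, and the ranking loss is an assumed surrogate for the paper's
# discriminability objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CAdapter(nn.Module):
    """Applies the same strictly increasing scalar function to every logit,
    so the ranking of classes (and hence top-1 accuracy) is unchanged."""
    def __init__(self, n_knots=16, lo=-10.0, hi=10.0):
        super().__init__()
        self.register_buffer("knots", torch.linspace(lo, hi, n_knots))
        self.raw_deltas = nn.Parameter(torch.zeros(n_knots))  # learnable

    def forward(self, logits):
        # Positive increments (softplus) accumulated into increasing heights.
        heights = torch.cumsum(F.softplus(self.raw_deltas), dim=0)
        idx = torch.bucketize(logits, self.knots).clamp(1, len(self.knots) - 1)
        x0, x1 = self.knots[idx - 1], self.knots[idx]
        y0, y1 = heights[idx - 1], heights[idx]
        return y0 + (logits - x0) / (x1 - x0) * (y1 - y0)  # linear interp

def discriminability_loss(adapted_logits, labels):
    """Assumed surrogate: the non-conformity score (1 - softmax prob) of a
    randomly matched label should exceed that of the correct label."""
    probs = adapted_logits.softmax(dim=1)
    n, k = probs.shape
    rand = torch.randint(0, k, (n,), device=labels.device)
    mask = rand != labels                       # ignore accidental matches
    s_true = 1.0 - probs[torch.arange(n), labels]
    s_rand = 1.0 - probs[torch.arange(n), rand]
    return F.softplus(-(s_rand - s_true))[mask].mean()  # logistic ranking
```

Because the adapter is monotone in each coordinate, it can sit between any trained classifier and a standard conformal procedure without affecting top-1 predictions, which matches the accuracy-preserving claim in the abstract.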
Related papers
- Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning [53.42244686183879]
Conformal prediction provides model-agnostic and distribution-free uncertainty quantification.
Yet, conformal prediction is not reliable under poisoning attacks where adversaries manipulate both training and calibration data.
We propose reliable prediction sets (RPS): the first efficient method for constructing conformal prediction sets with provable reliability guarantees under poisoning.
arXiv Detail & Related papers (2024-10-13T15:37:11Z)
- Delving into temperature scaling for adaptive conformal prediction [10.340903334800787]
Conformal prediction, as an emerging uncertainty quantification technique, constructs prediction sets that are guaranteed to contain the true label with a pre-defined probability.
We show that current confidence calibration methods (e.g., temperature scaling) normally lead to larger prediction sets in adaptive conformal prediction.
We propose Conformal Temperature Scaling (ConfTS), a variant of temperature scaling that aims to improve the efficiency of adaptive conformal prediction.
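The ConfTS objective itself is given in the cited paper; as a hedged illustration of where a temperature parameter enters an adaptive conformal pipeline, the NumPy sketch below applies APS-style scores to temperature-scaled softmax outputs. Function names are assumptions, and APS tie-breaking randomization is omitted for brevity.

```python
# Sketch of temperature scaling feeding an adaptive (APS-style) conformal
# predictor; only the role of the temperature T is shown here.
import numpy as np

def _softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def aps_scores(probs, labels):
    """APS non-conformity: cumulative probability mass of classes ranked
    at or above the true class (randomization omitted)."""
    order = np.argsort(-probs, axis=1)
    cum = np.cumsum(np.take_along_axis(probs, order, axis=1), axis=1)
    rank_of_true = np.argmax(order == labels[:, None], axis=1)
    return cum[np.arange(len(labels)), rank_of_true]

def conformal_sets(logits_cal, y_cal, logits_test, T=1.0, alpha=0.1):
    """Split-conformal APS sets; T is the (ConfTS-style) temperature."""
    p_cal, p_test = _softmax(logits_cal / T), _softmax(logits_test / T)
    scores = aps_scores(p_cal, y_cal)
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(scores, level)
    order = np.argsort(-p_test, axis=1)
    cum = np.cumsum(np.take_along_axis(p_test, order, axis=1), axis=1)
    sizes = (cum < qhat).sum(axis=1) + 1   # classes needed to reach qhat
    return [order[i, :sizes[i]] for i in range(len(sizes))]
```

Tuning T to shrink the average set size at a fixed empirical coverage is the kind of efficiency gain the paper targets.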
arXiv Detail & Related papers (2024-02-06T19:27:48Z)
- PAC-Bayes Generalization Certificates for Learned Inductive Conformal Prediction [27.434939269672288]
We use PAC-Bayes theory to obtain generalization bounds on the coverage and the efficiency of set-valued predictors.
We leverage these theoretical results to provide a practical algorithm for using calibration data to fine-tune the parameters of a model and score function.
We evaluate the approach on regression and classification tasks, and outperform baselines calibrated using a Hoeffding bound-based PAC guarantee on ICP.
arXiv Detail & Related papers (2023-12-07T19:40:44Z)
- Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A fair algorithm with strong performance and better generalizability is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z)
- ProTeCt: Prompt Tuning for Taxonomic Open Set Classification [59.59442518849203]
Few-shot adaptation methods do not fare well in the taxonomic open set (TOS) setting.
We propose a prompt tuning technique that calibrates the hierarchical consistency of model predictions.
A new Prompt Tuning for Hierarchical Consistency (ProTeCt) technique is then proposed to calibrate classification across label set granularities.
arXiv Detail & Related papers (2023-06-04T02:55:25Z)
- Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
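As a rough illustration (the paper's exact architecture and pretext task differ), a self-supervised error can enter a normalized nonconformity score as one extra difficulty feature; the `GradientBoostingRegressor` difficulty model here is a placeholder choice.

```python
# Assumed illustration only: the pretext error is appended as a difficulty
# feature for a normalized residual score. Fit the difficulty model on a
# separate training split to avoid leaking calibration data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_difficulty_model(x_train, pretext_err_train, abs_residual_train):
    feats = np.column_stack([x_train, pretext_err_train])  # add SSL feature
    return GradientBoostingRegressor().fit(feats, abs_residual_train)

def normalized_scores(model, x, pretext_err, y, y_hat):
    """Nonconformity |y - y_hat| / sigma(x), with sigma predicted from
    features augmented by the self-supervised error."""
    sigma = model.predict(np.column_stack([x, pretext_err]))
    return np.abs(y - y_hat) / np.clip(sigma, 1e-6, None)
```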
arXiv Detail & Related papers (2023-02-23T18:57:14Z)
- Efficient and Differentiable Conformal Prediction with General Function Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximately valid population coverage and near-optimal efficiency within the function class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
arXiv Detail & Related papers (2022-02-22T18:37:23Z)
- When in Doubt: Improving Classification Performance with Alternating Normalization [57.39356691967766]
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification.
CAN improves classification accuracy for challenging examples by re-adjusting their predicted class probability distribution.
We empirically demonstrate its effectiveness across a diverse set of classification tasks.
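A simplified sketch of the alternating-normalization idea follows; the published CAN procedure differs in detail (for example, it re-adjusts only the uncertain example and uses an exponent hyperparameter), so treat this as a Sinkhorn-style approximation.

```python
# Simplified, assumed sketch of alternating (Sinkhorn-style) normalization;
# the published CAN procedure differs in detail.
import numpy as np

def can_adjust(ref_probs, query_prob, class_prior=None, n_iters=3):
    P = np.vstack([ref_probs, query_prob[None, :]])  # rows are distributions
    n, k = P.shape
    prior = np.full(k, 1.0 / k) if class_prior is None else class_prior
    for _ in range(n_iters):
        # Column step: pull total class mass toward the class prior.
        P = P * (prior * n / np.clip(P.sum(axis=0), 1e-12, None))
        # Row step: renormalize each example back to a distribution.
        P = P / P.sum(axis=1, keepdims=True)
    return P[-1]   # re-adjusted distribution for the uncertain example
```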
arXiv Detail & Related papers (2021-09-28T02:55:42Z)
- Optimized conformal classification using gradient descent approximation [0.2538209532048866]
Conformal predictors allow predictions to be made with a user-defined confidence level.
We consider an approach that trains the conformal predictor directly for maximum predictive efficiency.
We test the method on several real-world data sets and find it promising.
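The paper's exact objective is not reproduced in this summary; the sketch below shows one common way, assumed here, to make predictive efficiency amenable to gradient descent: smooth the set-membership indicator with a sigmoid around a score quantile so that average set size becomes differentiable.

```python
# Assumed surrogate, not the paper's exact loss: soft set size obtained by
# smoothing set membership with a sigmoid around the score quantile.
import torch

def soft_set_size(logits, labels, alpha=0.1, tau=0.1):
    probs = logits.softmax(dim=1)
    n = probs.shape[0]
    scores_true = 1.0 - probs[torch.arange(n), labels]  # true-label scores
    q = torch.quantile(scores_true, 1 - alpha)          # soft threshold
    membership = torch.sigmoid((q - (1.0 - probs)) / tau)
    return membership.sum(dim=1).mean()                 # avg soft set size
```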
arXiv Detail & Related papers (2021-05-24T13:14:41Z)
- Training conformal predictors [0.0]
Efficiency criteria for conformal prediction, such as observed fuzziness, are commonly used to evaluate the performance of given conformal predictors.
Here, we investigate whether it is possible to exploit such criteria to learn classifiers.
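For concreteness, observed fuzziness is commonly defined as the sum of the p-values of the false labels, averaged over test objects (smaller is better); a minimal sketch assuming a precomputed matrix of conformal p-values:

```python
# Minimal sketch: observed fuzziness = mean over examples of the summed
# p-values of all false labels. Assumes p_values is an
# (n_examples, n_classes) array of conformal p-values.
import numpy as np

def observed_fuzziness(p_values, labels):
    totals = p_values.sum(axis=1)
    true_p = p_values[np.arange(len(labels)), labels]
    return (totals - true_p).mean()
```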
arXiv Detail & Related papers (2020-05-14T14:47:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.