Training conformal predictors
- URL: http://arxiv.org/abs/2005.07037v1
- Date: Thu, 14 May 2020 14:47:30 GMT
- Title: Training conformal predictors
- Authors: Nicolo Colombo and Vladimir Vovk
- Abstract summary: Efficiency criteria for conformal prediction, such as observed fuzziness, are commonly used to evaluate the performance of given conformal predictors.
Here, we investigate whether it is possible to exploit such criteria to learn classifiers.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficiency criteria for conformal prediction, such as \emph{observed
fuzziness} (i.e., the sum of p-values associated with false labels), are
commonly used to \emph{evaluate} the performance of given conformal predictors.
Here, we investigate whether it is possible to exploit efficiency criteria to
\emph{learn} classifiers, both conformal predictors and point classifiers, by
using such criteria as training objective functions. The proposed idea is
implemented for the problem of binary classification of hand-written digits. By
choosing a 1-dimensional model class (with one real-valued free parameter), we
can solve the optimization problems through an (approximate) exhaustive search
over (a discrete version of) the parameter space. Our empirical results suggest
that conformal predictors trained by minimizing their observed fuzziness
perform better than conformal predictors trained in the traditional way by
minimizing the \emph{prediction error} of the corresponding point classifier.
They also have a reasonable performance in terms of their prediction error on
the test set.
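The recipe in the abstract is simple enough to sketch end to end. Below is a minimal illustration, not the authors' code: it substitutes synthetic 1-D data for the hand-written digits, uses a threshold classifier as the one-parameter model class and a split-conformal scheme for the p-values, and runs the exhaustive grid search that compares training on observed fuzziness with training on prediction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D stand-in for the binary digits task (an assumption made so
# the example is self-contained): class 0 ~ N(-1, 1), class 1 ~ N(+1, 1).
def sample(n):
    y = rng.integers(0, 2, n)
    x = rng.normal(2.0 * y - 1.0, 1.0)
    return x, y

def nonconformity(x, y, theta):
    # One-parameter point classifier: predict 1 when x > theta; a label is
    # nonconforming when x falls on the wrong side of the threshold.
    return np.where(y == 1, theta - x, x - theta)

def p_values(x_cal, y_cal, x, theta):
    """Split-conformal p-values for both candidate labels of each test point."""
    cal = nonconformity(x_cal, y_cal, theta)
    out = np.empty((len(x), 2))
    for label in (0, 1):
        a = nonconformity(x, np.full(len(x), label), theta)
        # p = (#{calibration scores >= test score} + 1) / (n + 1)
        out[:, label] = ((cal[None, :] >= a[:, None]).sum(1) + 1.0) / (len(cal) + 1.0)
    return out

def observed_fuzziness(x_cal, y_cal, x, y, theta):
    p = p_values(x_cal, y_cal, x, theta)
    return p[np.arange(len(y)), 1 - y].sum()  # sum of p-values of false labels

x_cal, y_cal = sample(500)            # calibration set
x_va, y_va = sample(500)              # set used to choose theta
grid = np.linspace(-3.0, 3.0, 601)    # discretized parameter space

of = [observed_fuzziness(x_cal, y_cal, x_va, y_va, t) for t in grid]
err = [np.mean((x_va > t).astype(int) != y_va) for t in grid]

print("theta minimizing observed fuzziness:", grid[int(np.argmin(of))])
print("theta minimizing prediction error:  ", grid[int(np.argmin(err))])
```

The exhaustive search is only feasible because the model class has a single free parameter, which is precisely the regime the paper studies.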
Related papers
- C-Adapter: Adapting Deep Classifiers for Efficient Conformal Prediction Sets [19.318945675529456]
We introduce Conformal Adapter (C-Adapter) to enhance the efficiency of conformal predictors without sacrificing accuracy.
In particular, we implement the adapter as a class of intra order-preserving functions and tune it with our proposed loss.
Using C-Adapter, the model tends to produce extremely high non-conformity scores for incorrect labels.
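This mechanism admits a short sketch. One simple way to obtain an intra order-preserving function is to apply a shared, strictly increasing scalar map elementwise to the logits, which preserves the class ranking within each sample while re-learning the scale; the piecewise-linear parameterization below is an assumption for illustration, not the paper's construction.

```python
import torch
import torch.nn as nn

class MonotoneAdapter(nn.Module):
    """Elementwise strictly increasing map over logits. Because the same
    increasing function is applied to every logit, the within-sample class
    ranking is preserved (one simple intra order-preserving function)."""
    def __init__(self, n_knots=16, lo=-10.0, hi=10.0):
        super().__init__()
        self.register_buffer("knots", torch.linspace(lo, hi, n_knots))
        self.raw_slopes = nn.Parameter(torch.zeros(n_knots))

    def forward(self, logits):
        slopes = nn.functional.softplus(self.raw_slopes)   # positive pieces
        pieces = torch.relu(logits.unsqueeze(-1) - self.knots)
        # Small linear term keeps the map strictly (not just weakly) increasing.
        return 0.01 * logits + (pieces * slopes).sum(-1)

adapter = MonotoneAdapter()
logits = torch.randn(4, 10)                 # e.g., a frozen classifier's output
adapted = adapter(logits)
assert torch.equal(logits.argsort(-1), adapted.argsort(-1))  # order preserved
```

In use, the base classifier would stay frozen and only the adapter would be tuned against a conformal-efficiency loss on held-out data.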
arXiv Detail & Related papers (2024-10-12T07:28:54Z)
- Weighted Aggregation of Conformity Scores for Classification [9.559062601251464]
Conformal prediction is a powerful framework for constructing prediction sets with valid coverage guarantees.
We propose a novel approach that combines multiple score functions to improve the performance of conformal predictors.
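A hedged sketch of the basic operation: combine several conformity scores by a convex weighted sum. The two component scores and the fixed weights below are placeholders; the paper's procedure for choosing the combination is not reproduced here.

```python
import numpy as np

def aggregate_scores(scores, weights):
    """Convex combination of several conformity scores.
    `scores` has shape (n_examples, n_score_functions)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                    # keep the combination convex
    return scores @ w

# Hypothetical example: combine a softmax-based and a margin-based score.
probs = np.array([[0.7, 0.2, 0.1], [0.1, 0.5, 0.4]])
true = np.array([0, 2])
s1 = 1.0 - probs[np.arange(2), true]             # 1 - p(true label)
s2 = probs.max(1) - probs[np.arange(2), true]    # margin to the top class
combined = aggregate_scores(np.stack([s1, s2], 1), weights=[0.6, 0.4])
print(combined)
```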
arXiv Detail & Related papers (2024-07-14T14:58:03Z)
- Trustworthy Classification through Rank-Based Conformal Prediction Sets [9.559062601251464]
We propose a novel conformal prediction method that employs a rank-based score function suitable for classification models.
Our approach constructs prediction sets that achieve the desired coverage rate while managing their size.
Our contributions include a novel conformal prediction method, theoretical analysis, and empirical evaluation.
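The rank-based idea admits a compact sketch: take the nonconformity of a candidate label to be its rank when classes are sorted by predicted probability, which bounds how much miscalibrated probability values can distort the score. The implementation below is a generic rank score assumed for illustration, not necessarily the paper's exact definition.

```python
import numpy as np

def rank_score(probs, labels):
    """Nonconformity as the rank of the candidate label when classes are
    sorted by predicted probability (1 = most probable). Prediction sets
    collect labels whose resulting p-value exceeds the significance level."""
    order = np.argsort(-probs, axis=1)   # classes, most probable first
    ranks = np.empty_like(order)
    rows = np.arange(probs.shape[0])[:, None]
    ranks[rows, order] = np.arange(probs.shape[1])[None, :] + 1
    return ranks[np.arange(len(labels)), labels]

probs = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
print(rank_score(probs, np.array([0, 2])))  # ranks of the given labels: [1 2]
```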
arXiv Detail & Related papers (2024-07-05T10:43:41Z)
- Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
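A rough sketch of the idea in a regression setting, with every modelling choice an assumption made for illustration: a masked-feature reconstruction stands in for the self-supervised pretext task, and its error enters a difficulty model that normalizes residual-based nonconformity scores.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X[:, 0] + 0.5 * np.abs(X[:, 1]) * rng.normal(size=1000)  # heteroscedastic

model = LinearRegression().fit(X[:500], y[:500])    # existing predictive model

# Stand-in pretext task: reconstruct a masked feature from the others. Its
# error is the per-example difficulty signal the paper adds as a feature.
pretext = LinearRegression().fit(X[:500, 1:], X[:500, 0])
ssl_err = np.abs(pretext.predict(X[:, 1:]) - X[:, 0])

resid = np.abs(y - model.predict(X))
# Difficulty model for normalized-residual scores, with ssl_err as an input.
difficulty = GradientBoostingRegressor().fit(
    np.column_stack([X[500:750], ssl_err[500:750]]), resid[500:750])
den = np.maximum(difficulty.predict(np.column_stack([X[750:], ssl_err[750:]])), 1e-6)
scores = resid[750:] / den        # calibration nonconformity scores
q = np.quantile(scores, 0.9)      # ~90% coverage (finite-sample correction omitted)
print("interval for a new x: prediction +/-", q, "times its predicted difficulty")
```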
arXiv Detail & Related papers (2023-02-23T18:57:14Z)
- Predictive Inference with Feature Conformal Prediction [80.77443423828315]
We propose feature conformal prediction, which extends the scope of conformal prediction to semantic feature spaces.
From a theoretical perspective, we demonstrate that feature conformal prediction provably outperforms regular conformal prediction under mild assumptions.
Our approach can be combined not only with vanilla conformal prediction but also with other adaptive conformal prediction methods.
arXiv Detail & Related papers (2022-10-01T02:57:37Z)
- ProBoost: a Boosting Method for Probabilistic Classifiers [55.970609838687864]
ProBoost is a new boosting algorithm for probabilistic classifiers.
It uses the uncertainty of each training sample to identify the most challenging ones.
It produces a sequence that progressively focuses on the samples found to have the highest uncertainty.
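The loop is easy to illustrate. The sketch below is a generic uncertainty-weighted boosting procedure, using predictive entropy as the uncertainty signal and multiplicative reweighting; ProBoost's actual uncertainty quantification and update rule are not reproduced here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=600, random_state=0)
w = np.full(len(y), 1.0 / len(y))
ensemble = []

for _ in range(10):
    clf = DecisionTreeClassifier(max_depth=2, random_state=0)
    clf.fit(X, y, sample_weight=w)
    ensemble.append(clf)
    p = clf.predict_proba(X)
    # Predictive entropy as the per-sample uncertainty signal.
    entropy = -(p * np.log(np.clip(p, 1e-12, None))).sum(1)
    # Upweight the most uncertain samples so the next learner focuses on them.
    w = w * np.exp(entropy)
    w = w / w.sum()

# Aggregate by averaging predicted probabilities across the sequence.
avg = np.mean([c.predict_proba(X) for c in ensemble], axis=0)
print("train accuracy:", (avg.argmax(1) == y).mean())
```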
arXiv Detail & Related papers (2022-09-04T12:49:20Z)
- Efficient and Differentiable Conformal Prediction with General Function Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximately valid population coverage and near-optimal efficiency within the given function class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
arXiv Detail & Related papers (2022-02-22T18:37:23Z)
- Self-Certifying Classification by Linearized Deep Assignment [65.0100925582087]
We propose a novel class of deep predictors for classifying metric data on graphs within the PAC-Bayes risk certification paradigm.
Building on the recent PAC-Bayes literature and data-dependent priors, this approach enables learning posterior distributions on the hypothesis space.
arXiv Detail & Related papers (2022-01-26T19:59:14Z)
- When in Doubt: Improving Classification Performance with Alternating Normalization [57.39356691967766]
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification.
CAN improves classification accuracy for challenging examples by re-adjusting their predicted class probability distribution.
We empirically demonstrate its effectiveness across a diverse set of classification tasks.
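The re-adjustment can be pictured as a Sinkhorn-style iteration. The simplified stand-in below alternately scales the columns of a probability matrix toward a class prior and re-normalizes the rows; CAN itself additionally uses a set of confident reference predictions and a scaling exponent, which are omitted here.

```python
import numpy as np

def alternating_normalization(P, prior, iters=5):
    """Alternately scale columns of a matrix of predicted class probabilities
    toward a class prior, then re-normalize rows so they stay distributions.
    A simplified stand-in for CAN's post-processing step."""
    Q = P.copy()
    for _ in range(iters):
        Q = Q * (prior / Q.sum(0, keepdims=True))  # match column masses to prior
        Q = Q / Q.sum(1, keepdims=True)            # rows back to distributions
    return Q

# Confident rows plus one ambiguous prediction to be re-adjusted.
P = np.array([[0.95, 0.05],
              [0.90, 0.10],
              [0.10, 0.90],
              [0.55, 0.45]])   # the challenging example
print(alternating_normalization(P, prior=np.array([0.5, 0.5]))[-1])
```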
arXiv Detail & Related papers (2021-09-28T02:55:42Z)
- Optimized conformal classification using gradient descent approximation [0.2538209532048866]
Conformal predictors allow predictions to be made with a user-defined confidence level.
We consider an approach that trains the conformal predictor directly for maximum predictive efficiency.
We test the method on several real-world data sets and find it promising.
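The key trick in this line of work is making conformal quantities differentiable. The sketch below is a hypothetical illustration rather than the paper's algorithm: it replaces the indicator inside the p-value with a sigmoid and minimizes the resulting smooth observed fuzziness by gradient descent, reusing the one-parameter threshold model from the sketch under the main abstract.

```python
import torch

def smooth_p_values(cal_scores, test_scores, temperature=0.1):
    """Differentiable surrogate for split-conformal p-values: the indicator
    1{cal >= test} is replaced by a sigmoid so an efficiency criterion can
    be minimized by gradient descent (an approximation, not exact p-values)."""
    diff = (cal_scores[None, :] - test_scores[:, None]) / temperature
    return (torch.sigmoid(diff).sum(1) + 1.0) / (len(cal_scores) + 1.0)

# Toy data: two 1-D clusters; label 1 marks the +1 cluster.
torch.manual_seed(0)
offs = torch.where(torch.arange(200) % 2 == 0, torch.tensor(1.0), torch.tensor(-1.0))
x = torch.randn(200) + offs
y = (torch.arange(200) % 2 == 0).long()

theta = torch.zeros(1, requires_grad=True)     # single free parameter
opt = torch.optim.Adam([theta], lr=0.05)

def score(xv, yv):
    # Same threshold-based nonconformity as in the earlier sketch.
    return torch.where(yv == 1, theta - xv, xv - theta)

for _ in range(200):
    cal = score(x[:100], y[:100])                          # calibration scores
    false_p = smooth_p_values(cal, score(x[100:], 1 - y[100:]))
    loss = false_p.sum()      # smooth observed fuzziness (false-label p-values)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("learned threshold:", theta.item())
```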
arXiv Detail & Related papers (2021-05-24T13:14:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.