Optimal strategies for reject option classifiers
- URL: http://arxiv.org/abs/2101.12523v1
- Date: Fri, 29 Jan 2021 11:09:32 GMT
- Title: Optimal strategies for reject option classifiers
- Authors: V. Franc, D. Prusa, V. Voracek
- Abstract summary: In classification with a reject option, the classifier is allowed to abstain from prediction in uncertain cases.
We coin a symmetric definition, the bounded-coverage model, which seeks a classifier with minimal selective risk and guaranteed coverage.
We propose two algorithms to learn the proper uncertainty score from examples for an arbitrary black-box classifier.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In classification with a reject option, the classifier is allowed to
abstain from prediction in uncertain cases. The classical cost-based model of a
reject option classifier requires the cost of rejection to be defined
explicitly. An alternative bounded-improvement model, which avoids the notion of
the reject cost, seeks a classifier with a guaranteed selective risk and
maximal coverage. We coin a symmetric definition, the bounded-coverage model,
which seeks a classifier with minimal selective risk and guaranteed
coverage. We prove that, despite their different formulations, the three
rejection models lead to the same prediction strategy: a Bayes classifier
endowed with a randomized Bayes selection function. We define a notion of a
proper uncertainty score as a scalar summary of prediction uncertainty
sufficient to construct the randomized Bayes selection function. We propose two
algorithms to learn the proper uncertainty score from examples for an arbitrary
black-box classifier. We prove that both algorithms provide Fisher consistent
estimates of the proper uncertainty score and we demonstrate their efficiency
on different prediction problems including classification, ordinal regression
and structured output classification.
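To make the derived strategy concrete, here is a minimal sketch assuming 0/1 loss, where the conditional risk 1 - max posterior serves as a proper uncertainty score and a randomized threshold on that score realizes the bounded-coverage model's target coverage exactly. Function names are illustrative, not the authors' code.

```python
import numpy as np

def bayes_predict(posteriors):
    """Bayes classifier for 0/1 loss plus a proper uncertainty score:
    the conditional risk of the Bayes prediction, 1 - max posterior."""
    return posteriors.argmax(axis=1), 1.0 - posteriors.max(axis=1)

def randomized_selection(scores, coverage, seed=None):
    """Accept examples whose score is below a threshold tau, and examples
    exactly at tau with a probability chosen so that the expected fraction
    of accepted examples equals `coverage`."""
    rng = np.random.default_rng(seed)
    n = len(scores)
    k = int(np.floor(coverage * n))
    tau = np.sort(scores)[k] if k < n else np.inf
    accept = scores < tau
    boundary = np.flatnonzero(scores == tau)
    deficit = coverage * n - accept.sum()
    if boundary.size and deficit > 0:
        accept[boundary] = rng.random(boundary.size) < deficit / boundary.size
    return accept  # False = reject (abstain)
```

Randomization only matters when several examples tie at the threshold score; with continuous scores the selection function is effectively deterministic.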
Related papers
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
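A hedged sketch of the density-ratio idea in the entry above: estimate r(x) = p_ideal(x) / p_data(x) with the classic classifier trick and abstain where the ratio is small. The idealized sample, the estimator, and the threshold are stand-in assumptions, not the paper's algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def density_ratio_rejector(x_ideal, x_data, threshold=0.5):
    """Estimate r(x) = p_ideal(x) / p_data(x) via the classifier trick
    (assumes samples of comparable size) and abstain where r is small."""
    X = np.vstack([x_ideal, x_data])
    y = np.concatenate([np.ones(len(x_ideal)), np.zeros(len(x_data))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    def reject(x):
        p = clf.predict_proba(x)[:, 1]           # P(sample came from ideal)
        ratio = p / np.clip(1.0 - p, 1e-12, None)
        return ratio < threshold                 # True = abstain
    return reject
```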
- Awareness of uncertainty in classification using a multivariate model and multi-views [1.3048920509133808]
The proposed model regularizes uncertain predictions and is trained to produce both the predictions and their uncertainty estimates.
Given the multi-view predictions together with their uncertainties and confidences, we propose several methods to compute the final predictions.
The proposed methodology was tested on the CIFAR-10 dataset with clean and noisy labels.
arXiv Detail & Related papers (2024-04-16T06:40:51Z)
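One simple fusion rule of the kind the entry above mentions: weight each view's class distribution by the inverse of its estimated uncertainty. The paper proposes several such methods; this inverse-uncertainty average is only an illustrative choice.

```python
import numpy as np

def fuse_multiview(probs, uncertainties):
    """probs: (n_views, n_classes) per-view class distributions;
    uncertainties: (n_views,) scalar uncertainty per view.
    Returns the fused prediction and the fused distribution."""
    w = 1.0 / (np.asarray(uncertainties) + 1e-12)   # low uncertainty = high weight
    fused = (w[:, None] / w.sum() * np.asarray(probs)).sum(axis=0)
    return fused.argmax(), fused
```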
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
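A minimal sketch of a likelihood-ratio confidence sequence for a Bernoulli mean: scoring each observation with a predictable plug-in estimate makes the ratio a martingale under the true parameter, so Ville's inequality gives time-uniform 1 - alpha coverage. The Laplace-smoothed running mean and the parameter grid are illustrative choices, not the paper's estimator sequence.

```python
import numpy as np

def lr_confidence_set(xs, alpha=0.05, grid_size=1000):
    """Confidence set for a Bernoulli mean: keep every theta whose
    likelihood ratio against a predictable plug-in likelihood stays
    below 1/alpha."""
    xs = np.asarray(xs, dtype=float)
    n = len(xs)
    # predictable plug-in: Laplace-smoothed mean of the *previous* samples
    prev = (np.concatenate(([0.0], np.cumsum(xs)[:-1])) + 1.0) / (np.arange(n) + 2.0)
    plugin_ll = np.sum(xs * np.log(prev) + (1.0 - xs) * np.log(1.0 - prev))
    grid = np.linspace(1e-6, 1.0 - 1e-6, grid_size)
    ll = xs.sum() * np.log(grid) + (n - xs.sum()) * np.log(1.0 - grid)
    return grid[plugin_ll - ll < np.log(1.0 / alpha)]
```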
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
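In the spirit of the entry above, a calibration term can be made differentiable by softening the hard confidence bins of the usual expected calibration error so it can be added to the training loss. The kernel-based soft binning below is an assumed relaxation, not the paper's exact formulation.

```python
import torch

def soft_ece(probs, labels, n_bins=10, temperature=100.0):
    """Soft-binned expected calibration error: hard bin membership is
    replaced by a softmax over squared distances to bin centres, so the
    penalty is differentiable and can join the training objective."""
    conf, pred = probs.max(dim=1)
    correct = (pred == labels).float()
    centres = torch.linspace(0.5 / n_bins, 1 - 0.5 / n_bins, n_bins,
                             device=probs.device)
    w = torch.softmax(-temperature * (conf[:, None] - centres[None, :]) ** 2, dim=1)
    mass = w.sum(dim=0) + 1e-12
    gap = (w * conf[:, None]).sum(0) / mass - (w * correct[:, None]).sum(0) / mass
    return (mass / conf.numel() * gap.abs()).sum()

# hypothetical usage: total_loss = nll_loss + lam * soft_ece(probs, labels)
```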
- Distribution-Free Inference for the Regression Function of Binary Classification [0.0]
The paper presents a resampling framework to construct exact, distribution-free and non-asymptotically guaranteed confidence regions for the true regression function at any user-chosen confidence level.
It is proved that the constructed confidence regions are strongly consistent, that is, any false model is excluded in the long run with probability one.
arXiv Detail & Related papers (2023-08-03T15:52:27Z)
- When Does Confidence-Based Cascade Deferral Suffice? [69.28314307469381]
Cascades are a classical strategy to enable inference cost to vary adaptively across samples.
A deferral rule determines whether to invoke the next classifier in the sequence, or to terminate prediction.
Despite being oblivious to the structure of the cascade, confidence-based deferral often works remarkably well in practice.
arXiv Detail & Related papers (2023-07-06T04:13:57Z)
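Confidence-based deferral as analyzed above is a one-rule algorithm: a stage keeps the prediction only if its top-class probability clears that stage's threshold. `models` (callables returning class-probability vectors) and `thresholds` are placeholders.

```python
def cascade_predict(x, models, thresholds):
    """Run x through a cascade of models; each stage answers only if its
    top-class probability clears that stage's threshold, otherwise the
    input is deferred to the next (typically larger) model."""
    for model, tau in zip(models[:-1], thresholds):
        p = model(x)
        if p.max() >= tau:                 # confident enough: stop here
            return int(p.argmax())
    return int(models[-1](x).argmax())     # last model always answers
```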
- AUC-based Selective Classification [5.406386303264086]
We propose a model-agnostic approach to associate a selection function with a given binary classifier.
We provide both theoretical justifications and a novel algorithm, called $AUCross$, to achieve such a goal.
Experiments show that $AUCross$ succeeds in trading off coverage for AUC, improving over existing selective classification methods targeted at optimizing accuracy.
arXiv Detail & Related papers (2022-10-19T16:29:50Z)
- Learning When to Say "I Don't Know" [0.5505634045241288]
We propose a new Reject Option Classification technique to identify and remove regions of uncertainty in the decision space.
We consider an alternative formulation by instead analyzing the complementary reject region and employing a validation set to learn per-class softmax thresholds.
We provide results showing the benefits of the proposed method over naïvely thresholding calibrated/uncalibrated softmax scores on 2-D point, image, and text classification datasets.
arXiv Detail & Related papers (2022-09-11T21:50:03Z)
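A sketch of the per-class validation-thresholding idea from the entry above: learn one softmax threshold per predicted class on held-out data, then answer only when the confidence clears the class's threshold. The target-precision criterion is an assumed instantiation, not the paper's exact procedure.

```python
import numpy as np

def fit_per_class_thresholds(probs_val, y_val, target_precision=0.95):
    """For each class, pick the smallest softmax threshold at which the
    accepted validation examples predicted as that class reach the
    target precision (threshold stays 1.0 if no such value exists)."""
    pred, conf = probs_val.argmax(axis=1), probs_val.max(axis=1)
    thresholds = np.ones(probs_val.shape[1])
    for c in range(probs_val.shape[1]):
        mask = pred == c
        for t in np.sort(conf[mask]):
            keep = mask & (conf >= t)
            if keep.any() and (y_val[keep] == c).mean() >= target_precision:
                thresholds[c] = t
                break
    return thresholds

def predict_or_reject(probs, thresholds):
    pred = probs.argmax(axis=1)
    accept = probs.max(axis=1) >= thresholds[pred]
    return np.where(accept, pred, -1)      # -1 = "I don't know"
```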
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic predictions and reliable uncertainty estimates.
We study two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and discuss their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
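For the ensemble-based family discussed above, a standard recipe: average the members' predictive distributions and split the total predictive entropy into aleatoric and epistemic parts.

```python
import numpy as np

def ensemble_uncertainty(prob_list):
    """prob_list: per-member (n_samples, n_classes) probability arrays.
    Returns the mean prediction, total predictive entropy, and the
    mutual information between prediction and member (epistemic part)."""
    p = np.stack(prob_list)                               # (m, n, c)
    mean = p.mean(axis=0)
    total = -(mean * np.log(mean + 1e-12)).sum(axis=1)    # predictive entropy
    aleatoric = -(p * np.log(p + 1e-12)).sum(axis=2).mean(axis=0)
    return mean, total, total - aleatoric                 # last term: epistemic
```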
- Performance-Agnostic Fusion of Probabilistic Classifier Outputs [2.4206828137867107]
We propose a method for combining probabilistic outputs of classifiers to make a single consensus class prediction.
Our proposed method works well in situations where accuracy is the performance metric.
It does not output calibrated probabilities, so it is not suitable in situations where such probabilities are required for further processing.
arXiv Detail & Related papers (2020-09-01T16:53:29Z)
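Two classical consensus rules for combining probabilistic outputs, shown only for orientation; the paper's performance-agnostic combination method differs from both.

```python
import numpy as np

def fuse_probabilistic_outputs(prob_list, rule="geometric"):
    """Combine per-classifier probability arrays (n_samples, n_classes)
    into one consensus distribution via the arithmetic mean or the
    normalized geometric mean (product rule)."""
    p = np.stack(prob_list)
    if rule == "arithmetic":
        fused = p.mean(axis=0)
    else:
        fused = np.exp(np.log(p + 1e-12).mean(axis=0))
        fused /= fused.sum(axis=1, keepdims=True)
    return fused.argmax(axis=1), fused
```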
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
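Conceptually, randomized smoothing over training labels majority-votes base learners trained on independently flipped copies of the labels; the certificate bounds how many adversarial flips can change that vote. The Monte Carlo sketch below conveys the idea but not the paper's efficient certification; `train_fn` is a user-supplied learner.

```python
import numpy as np

def smoothed_label_flip_predict(train_fn, X, y, x_query,
                                n_draws=100, flip_p=0.1, seed=0):
    """Majority vote of base learners trained on independently
    label-flipped copies of (X, y); binary labels in {0, 1}."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(2)
    for _ in range(n_draws):
        flips = rng.random(len(y)) < flip_p           # flip each label w.p. flip_p
        clf = train_fn(X, np.where(flips, 1 - y, y))  # user-supplied learner
        votes[int(clf.predict(x_query.reshape(1, -1))[0])] += 1
    return int(votes.argmax()), votes / n_draws       # prediction, vote shares
```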
This list is automatically generated from the titles and abstracts of the papers in this site.