Selective Classification via One-Sided Prediction
- URL: http://arxiv.org/abs/2010.07853v4
- Date: Sat, 23 Oct 2021 23:29:23 GMT
- Title: Selective Classification via One-Sided Prediction
- Authors: Aditya Gangrade, Anil Kag, Venkatesh Saligrama
- Abstract summary: One-sided prediction (OSP) based relaxation yields an SC scheme that attains near-optimal coverage in the practically relevant high target accuracy regime.
We theoretically derive generalization bounds for SC and OSP, and empirically we show that our scheme strongly outperforms state-of-the-art methods in coverage at small error levels.
- Score: 54.05407231648068
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel method for selective classification (SC), a problem which
allows a classifier to abstain from predicting some instances, thus trading off
accuracy against coverage (the fraction of instances predicted). In contrast to
prior gating or confidence-set based work, our proposed method optimises a
collection of class-wise decoupled one-sided empirical risks, and is in essence
a method for explicitly finding the largest decision sets for each class that
have few false positives. This one-sided prediction (OSP) based relaxation
yields an SC scheme that attains near-optimal coverage in the practically
relevant high target accuracy regime, and further admits efficient
implementation, leading to a flexible and principled method for SC. We
theoretically derive generalization bounds for SC and OSP, and empirically we
show that our scheme strongly outperforms state-of-the-art methods in coverage
at small error levels.
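The class-wise one-sided construction can be illustrated with a minimal NumPy sketch. This is a toy illustration, not the authors' implementation: the synthetic score matrix, the quantile-based thresholding, and the margin tie-breaking rule are all simplifying assumptions made here.

```python
import numpy as np

def fit_osp_thresholds(scores, labels, alpha=0.05):
    """For each class c, set the threshold t_c to the (1 - alpha)
    quantile of out-of-class scores, so the acceptance rule
    {score_c >= t_c} admits roughly an alpha fraction of false
    positives.  Lower thresholds mean larger per-class decision
    sets, trading coverage against one-sided error."""
    n_classes = scores.shape[1]
    return np.array([
        np.quantile(scores[labels != c, c], 1.0 - alpha)
        for c in range(n_classes)
    ])

def osp_predict(scores, thresholds):
    """Accept class c when its score clears t_c; abstain (-1) when
    no class accepts; break ties by the largest margin."""
    margins = scores - thresholds
    return np.where(margins.max(axis=1) >= 0, margins.argmax(axis=1), -1)

# Synthetic per-class scores: the true class receives a boosted score.
rng = np.random.default_rng(0)
n, k = 2000, 3
labels = rng.integers(0, k, n)
scores = rng.normal(0.0, 1.0, (n, k))
scores[np.arange(n), labels] += 3.0

thresholds = fit_osp_thresholds(scores, labels, alpha=0.05)
pred = osp_predict(scores, thresholds)
covered = pred != -1
coverage = covered.mean()                                 # fraction not abstained
selective_acc = (pred[covered] == labels[covered]).mean() # accuracy on covered points
print(f"coverage={coverage:.3f}  selective accuracy={selective_acc:.3f}")
```

With well-separated scores, the accepted region covers most points while keeping errors small; tightening alpha shrinks each class's decision set and hence the coverage.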
Related papers
- Weighted Aggregation of Conformity Scores for Classification [9.559062601251464]
Conformal prediction is a powerful framework for constructing prediction sets with valid coverage guarantees.
We propose a novel approach that combines multiple score functions to improve the performance of conformal predictors.
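For context, the basic split-conformal construction that such score-combination methods build on can be sketched as follows. This is a toy illustration with synthetic probabilities; the conformity score "1 minus the true-class probability" is one common choice, not the weighted aggregate proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_probs(n, n_classes, sep, rng):
    """Synthetic softmax probabilities with the true class boosted."""
    labels = rng.integers(0, n_classes, n)
    logits = rng.normal(0.0, 1.0, (n, n_classes))
    logits[np.arange(n), labels] += sep
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs, labels

cal_p, cal_y = make_probs(1000, 4, 2.5, rng)   # calibration split
test_p, test_y = make_probs(1000, 4, 2.5, rng) # held-out test split

alpha = 0.1  # target miscoverage level
# Conformity score: 1 - probability assigned to the true class.
cal_scores = 1.0 - cal_p[np.arange(len(cal_y)), cal_y]
# Finite-sample-corrected (1 - alpha) quantile of calibration scores.
n = len(cal_scores)
q = np.quantile(cal_scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
# Prediction set: every class whose score does not exceed the quantile.
pred_sets = (1.0 - test_p) <= q
coverage = pred_sets[np.arange(len(test_y)), test_y].mean()
avg_size = pred_sets.sum(axis=1).mean()
print(f"empirical coverage={coverage:.3f}  avg set size={avg_size:.2f}")
```

The calibration quantile guarantees marginal coverage of at least 1 - alpha in expectation; combining or reweighting score functions, as the paper proposes, aims to shrink the resulting sets without losing that guarantee.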
arXiv Detail & Related papers (2024-07-14T14:58:03Z)
- Confidence-aware Contrastive Learning for Selective Classification [20.573658672018066]
This work provides a generalization bound for selective classification, disclosing that optimizing feature layers helps improve the performance of selective classification.
Inspired by this theory, we propose to explicitly improve the selective classification model at the feature level for the first time, leading to a novel Confidence-aware Contrastive Learning method for Selective Classification, CCL-SC.
arXiv Detail & Related papers (2024-06-07T08:43:53Z)
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- AUC-based Selective Classification [5.406386303264086]
We propose a model-agnostic approach to associate a selection function to a given binary classifier.
We provide both theoretical justifications and a novel algorithm, called $AUCross$, to achieve such a goal.
Experiments show that $AUCross$ succeeds in trading off coverage for AUC, improving over existing selective classification methods targeted at optimizing accuracy.
arXiv Detail & Related papers (2022-10-19T16:29:50Z)
- Risk Consistent Multi-Class Learning from Label Proportions [64.0125322353281]
This study addresses a multiclass learning from label proportions (MCLLP) setting in which training instances are provided in bags.
Most existing MCLLP methods impose bag-wise constraints on the prediction of instances or assign them pseudo-labels.
A risk-consistent method is proposed for instance classification using the empirical risk minimization framework.
arXiv Detail & Related papers (2022-03-24T03:49:04Z)
- Self-Certifying Classification by Linearized Deep Assignment [65.0100925582087]
We propose a novel class of deep predictors for classifying metric data on graphs within the PAC-Bayes risk certification paradigm.
Building on the recent PAC-Bayes literature and data-dependent priors, this approach enables learning posterior distributions on the hypothesis space.
arXiv Detail & Related papers (2022-01-26T19:59:14Z)
- Selective Probabilistic Classifier Based on Hypothesis Testing [14.695979686066066]
We propose a simple yet effective method to deal with the violation of the Closed-World Assumption for a classifier.
The proposed method is a rejection option based on hypothesis testing with probabilistic networks.
It is shown that the proposed method achieves a broader operating range and a lower false-positive ratio than the alternatives.
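A generic version of such a rejection option can be sketched as follows. This is a hedged toy: the maximum predicted probability stands in as the test statistic, calibrated to a 5% rejection rate on known data; the paper's actual test and probabilistic networks are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def max_prob(logits):
    """Test statistic: the maximum softmax probability.  Known inputs
    tend to yield peaked (confident) predictions, unknown inputs
    flatter ones."""
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return p.max(axis=1)

def peaked_logits(n, k, boost, rng):
    """Synthetic in-distribution logits with one boosted class."""
    z = rng.normal(0.0, 1.0, (n, k))
    z[np.arange(n), rng.integers(0, k, n)] += boost
    return z

known_cal = peaked_logits(1000, 5, 4.0, rng)   # calibration split
known_test = peaked_logits(1000, 5, 4.0, rng)  # held-out known inputs
unknown = rng.normal(0.0, 1.0, (1000, 5))      # flat "open-world" inputs

# Reject the null hypothesis "the input is known" when the statistic
# falls below the empirical 5% quantile of the calibration statistics.
threshold = np.quantile(max_prob(known_cal), 0.05)
known_reject = (max_prob(known_test) < threshold).mean()
unknown_reject = (max_prob(unknown) < threshold).mean()
print(f"reject rate: known={known_reject:.3f}  unknown={unknown_reject:.3f}")
```

Moving the calibration quantile up or down sweeps the operating point, trading false rejections of known inputs against missed detections of unknown ones.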
arXiv Detail & Related papers (2021-05-09T08:55:56Z)
- Re-Assessing the "Classify and Count" Quantification Method [88.60021378715636]
"Classify and Count" (CC) is often a biased estimator.
Previous works have failed to use properly optimised versions of CC.
We argue that these optimised versions, while still inferior to some cutting-edge methods, deliver near-state-of-the-art accuracy.
arXiv Detail & Related papers (2020-11-04T21:47:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.