Ask for More Than Bayes Optimal: A Theory of Indecisions for Classification
- URL: http://arxiv.org/abs/2412.12807v1
- Date: Tue, 17 Dec 2024 11:25:51 GMT
- Title: Ask for More Than Bayes Optimal: A Theory of Indecisions for Classification
- Authors: Mohamed Ndaoud, Peter Radchenko, Bradley Rava
- Abstract summary: We show that the misclassification rate of a classifier can be controlled up to any user-specified level, while abstaining from only the minimum necessary number of decisions.
In many problem settings, the user can obtain a dramatic decrease in misclassification while paying a comparatively small price in terms of indecisions.
- Score: 1.8434042562191815
- License:
- Abstract: Selective classification frameworks are useful tools for automated decision making in high-risk scenarios, since they allow a classifier to make only highly confident decisions and to abstain when it is not confident enough to decide, an outcome otherwise known as an indecision. For a given level of classification accuracy, we aim to make as many decisions as possible. For many problems, this can be achieved without abstaining from any decisions. But when the problem is hard enough, we show that we can still control the misclassification rate of a classifier up to any user-specified level, while abstaining from only the minimum necessary number of decisions, even if this level of misclassification is smaller than the Bayes optimal error rate. In many problem settings, the user can obtain a dramatic decrease in misclassification while paying a comparatively small price in terms of indecisions.
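The trade-off the abstract describes, abstaining below a confidence threshold to buy a lower error rate on the decisions actually made, can be illustrated with a minimal plug-in sketch. This is a generic Chow-style rejection rule on synthetic data, not the paper's procedure; the Beta(2, 2) setup and all variable names are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary problem: eta(x) = P(Y = 1 | X = x) drawn per sample.
n = 100_000
eta = rng.beta(2, 2, size=n)                  # class-1 posterior per sample
y = (rng.random(n) < eta).astype(int)         # labels drawn from eta

# Plug-in Bayes classifier: predict 1 when eta >= 0.5.
pred = (eta >= 0.5).astype(int)

def selective_error(threshold):
    """Abstain when the confidence max(eta, 1 - eta) falls below the
    threshold; return (error on decided samples, abstention rate)."""
    confidence = np.maximum(eta, 1.0 - eta)
    decided = confidence >= threshold
    if decided.sum() == 0:
        return 0.0, 1.0
    err = (pred[decided] != y[decided]).mean()
    return err, 1.0 - decided.mean()

# Monte Carlo estimate of the Bayes error, E[min(eta, 1 - eta)].
bayes_error = np.minimum(eta, 1.0 - eta).mean()

for t in (0.5, 0.7, 0.9):
    err, abstain = selective_error(t)
    print(f"threshold={t:.1f}  error={err:.3f}  abstention={abstain:.3f}")
```

At threshold 0.5 nothing is abstained and the error matches the Bayes error; raising the threshold drives the error on decided samples well below the Bayes error, at the cost of indecisions, which is exactly the regime the paper studies.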
Related papers
- On support vector machines under a multiple-cost scenario [1.743685428161914]
The Support Vector Machine (SVM) is a powerful tool for binary classification.
We propose a novel SVM model in which misclassification costs are considered by incorporating performance constraints.
arXiv Detail & Related papers (2023-12-22T16:12:25Z)
- Fair Classifiers that Abstain without Harm [24.90899074869189]
In critical applications, it is vital for classifiers to defer decision-making to humans.
We propose a post-hoc method that makes existing classifiers selectively abstain from predicting certain samples.
Our framework outperforms existing methods in terms of fairness disparity without sacrificing accuracy at similar abstention rates.
arXiv Detail & Related papers (2023-10-09T23:07:28Z)
- Improving Selective Visual Question Answering by Learning from Your Peers [74.20167944693424]
Visual Question Answering (VQA) models can have difficulties abstaining from answering when they are wrong.
We propose the Learning from Your Peers (LYP) approach for training multimodal selection functions that make abstention decisions.
Our approach uses predictions from models trained on distinct subsets of the training data as targets for optimizing a Selective VQA model.
arXiv Detail & Related papers (2023-06-14T21:22:01Z)
- Minimax-Bayes Reinforcement Learning [2.7456483236562437]
This paper studies (sometimes approximate) minimax-Bayes solutions for various reinforcement learning problems.
We find that while the worst-case prior depends on the setting, the corresponding minimax policies are more robust than those that assume a standard (i.e. uniform) prior.
arXiv Detail & Related papers (2023-02-21T17:10:21Z)
- Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification [86.32752788233913]
In classification problems, the Bayes error can be used as a criterion to evaluate classifiers with state-of-the-art performance.
We propose a simple and direct Bayes error estimator, where we just take the mean of the labels that show uncertainty of the classes.
Our flexible approach enables us to perform Bayes error estimation even for weakly supervised data.
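In the binary case, a direct estimator of this kind can be sketched in a few lines, assuming soft labels p_i approximating P(Y = 1 | x_i) are available (for instance, the fraction of annotators choosing class 1). This is one reading of the idea, not the paper's exact procedure, and the Beta(2, 5) soft labels below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical soft labels: p_i approximates P(Y = 1 | x_i) per sample,
# e.g. obtained as an annotator vote fraction.
n = 50_000
p = rng.beta(2, 5, size=n)

# Direct plug-in estimate of the Bayes error: the probability of the
# less likely class, averaged over samples, i.e. mean of min(p, 1 - p).
bayes_error_hat = np.minimum(p, 1.0 - p).mean()
print(f"estimated Bayes error: {bayes_error_hat:.3f}")
```

The appeal of such a direct estimate is that it never requires training a classifier, which is what makes extensions to weakly supervised labels plausible.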
arXiv Detail & Related papers (2022-02-01T13:22:26Z)
- Online Selective Classification with Limited Feedback [82.68009460301585]
We study selective classification in the online learning model, wherein a predictor may abstain from classifying an instance.
A salient aspect of the setting we consider is that the data may be non-realisable, due to which abstention may be a valid long-term action.
We construct simple versioning-based schemes for any $\mu \in (0,1]$ that make at most $T^{\mu}$ mistakes while incurring $\tilde{O}(T^{1-\mu})$ excess abstention against adaptive adversaries.
arXiv Detail & Related papers (2021-10-27T08:00:53Z)
- Calibrating Predictions to Decisions: A Novel Approach to Multi-Class Calibration [118.26862029820447]
We introduce a new notion -- decision calibration -- that requires the predicted distribution and the true distribution to be indistinguishable to a set of downstream decision-makers.
Decision calibration improves decision-making on skin lesion and ImageNet classification with modern neural networks.
arXiv Detail & Related papers (2021-07-12T20:17:28Z)
- Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment [79.5678820246642]
We show that certain action-value methods are more sample efficient than policy-gradient methods on transfer problems that require only sparse changes to a sequence of previously optimal decisions.
We generalize the recently proposed societal decision-making framework as a more granular formalism than the Markov decision process.
arXiv Detail & Related papers (2021-06-28T21:29:13Z)
- Classification with abstention but without disparities [5.025654873456756]
We build a general purpose classification algorithm, which is able to abstain from prediction, while avoiding disparate impact.
We establish finite sample risk, fairness, and abstention guarantees for the proposed algorithm.
Our method empirically shows that moderate abstention rates allow one to bypass the risk-fairness trade-off.
arXiv Detail & Related papers (2021-02-24T12:43:55Z)
- Provable tradeoffs in adversarially robust classification [96.48180210364893]
We develop and leverage new tools, including recent breakthroughs from probability theory on robust isoperimetry.
Our results reveal fundamental tradeoffs between standard and robust accuracy that grow when data is imbalanced.
arXiv Detail & Related papers (2020-06-09T09:58:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.