Predictor-Rejector Multi-Class Abstention: Theoretical Analysis and Algorithms
- URL: http://arxiv.org/abs/2310.14772v2
- Date: Sun, 31 Mar 2024 09:05:24 GMT
- Title: Predictor-Rejector Multi-Class Abstention: Theoretical Analysis and Algorithms
- Authors: Anqi Mao, Mehryar Mohri, Yutao Zhong
- Abstract summary: We study the key framework of learning with abstention in the multi-class classification setting.
In this setting, the learner can choose to abstain from making a prediction with some pre-defined cost.
We introduce several new families of surrogate losses for which we prove strong non-asymptotic and hypothesis set-specific consistency guarantees.
- Score: 30.389055604165222
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the key framework of learning with abstention in the multi-class classification setting. In this setting, the learner can choose to abstain from making a prediction with some pre-defined cost. We present a series of new theoretical and algorithmic results for this learning problem in the predictor-rejector framework. We introduce several new families of surrogate losses for which we prove strong non-asymptotic and hypothesis set-specific consistency guarantees, thereby resolving positively two existing open questions. These guarantees provide upper bounds on the estimation error of the abstention loss function in terms of that of the surrogate loss. We analyze both a single-stage setting where the predictor and rejector are learned simultaneously and a two-stage setting crucial in applications, where the predictor is learned in a first stage using a standard surrogate loss such as cross-entropy. These guarantees suggest new multi-class abstention algorithms based on minimizing these surrogate losses. We also report the results of extensive experiments comparing these algorithms to the current state-of-the-art algorithms on CIFAR-10, CIFAR-100 and SVHN datasets. Our results demonstrate empirically the benefit of our new surrogate losses and show the remarkable performance of our broadly applicable two-stage abstention algorithm.
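To make the setup concrete, below is a minimal sketch of the predictor-rejector abstention loss and of the two-stage recipe the abstract describes. The confidence-threshold rejector and the cost value are illustrative assumptions, not the paper's surrogate-loss construction.

```python
import torch
import torch.nn.functional as F

def abstention_loss(preds, r_values, y, c=0.3):
    """Predictor-rejector abstention loss: cost 1 for predicting
    (r > 0) and being wrong, fixed cost c for abstaining (r <= 0).
    c = 0.3 is an illustrative value, not from the paper."""
    predict = (r_values > 0).float()
    wrong = (preds != y).float()
    return (predict * wrong + (1.0 - predict) * c).mean()

def threshold_rejector(logits, threshold=0.9):
    """Stage-2 stand-in: r(x) > 0 iff the frozen predictor's top
    softmax score exceeds a threshold. The paper instead learns the
    rejector by minimizing a surrogate loss."""
    conf = F.softmax(logits, dim=1).max(dim=1).values
    return conf - threshold

# Two-stage recipe from the abstract (sketch):
# Stage 1: train the predictor alone with a standard surrogate,
#          e.g. loss = F.cross_entropy(model(x), y).
# Stage 2: freeze the predictor, choose/learn the rejector, and
#          evaluate the pair with the abstention loss above:
logits = torch.randn(8, 10)                 # hypothetical model outputs
y = torch.randint(0, 10, (8,))
r = threshold_rejector(logits)
loss = abstention_loss(logits.argmax(dim=1), r, y, c=0.3)
```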
Related papers
- Regression with Multi-Expert Deferral [30.389055604165222]
Learning to defer with multiple experts is a framework where the learner can choose to defer the prediction to several experts.
We present a novel framework of regression with deferral, which involves deferring the prediction to multiple experts.
We introduce new surrogate loss functions for both scenarios and prove that they are supported by $H$-consistency bounds.
arXiv Detail & Related papers (2024-03-28T15:26:38Z) - Misclassification excess risk bounds for 1-bit matrix completion [0.0]
This study investigates the misclassification excess risk bound in the context of 1-bit matrix completion.
Matrix completion has garnered considerable attention in the last two decades due to its diverse applications across various fields.
arXiv Detail & Related papers (2023-12-20T11:42:49Z) - Principled Approaches for Learning to Defer with Multiple Experts [30.389055604165222]
We introduce a new family of surrogate losses specifically tailored for the multiple-expert setting.
We prove that these surrogate losses benefit from strong $H$-consistency bounds.
arXiv Detail & Related papers (2023-10-23T10:19:09Z) - Theoretically Grounded Loss Functions and Algorithms for Score-Based Multi-Class Abstention [30.389055604165222]
We introduce new families of surrogate losses for the abstention loss function.
We prove strong non-asymptotic and hypothesis set-specific consistency guarantees for these surrogate losses.
Our results show that the relative performance of the state-of-the-art score-based surrogate losses can vary across datasets.
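For contrast with the predictor-rejector framework above, here is a hedged sketch of how a score-based formulation is commonly read: a single scoring function over the $n$ classes plus one extra abstention category, with abstention chosen whenever that score dominates. This illustrates the general idea only; it is not code from the paper.

```python
import torch

def score_based_decision(scores):
    """scores: (batch, n_classes + 1), last column = abstention score.
    Predict the argmax class; abstain when the extra column wins."""
    pred = scores.argmax(dim=1)
    abstain = pred == (scores.shape[1] - 1)
    return pred, abstain
```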
arXiv Detail & Related papers (2023-10-23T10:13:35Z) - A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning [129.63326990812234]
We propose a technique named data-dependent contraction to capture how modified losses handle different classes.
On top of this technique, a fine-grained generalization bound is established for imbalanced learning, which helps reveal the mystery of re-weighting and logit-adjustment.
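As one concrete instance of the "modified losses" such a bound covers, here is a minimal sketch of logit-adjusted cross-entropy; the prior estimates and the temperature $\tau$ are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_ce(logits, targets, class_priors, tau=1.0):
    """Cross-entropy on logits shifted by tau * log(prior): rare
    classes receive a smaller additive offset, so the model must score
    them higher to win the argmax."""
    return F.cross_entropy(logits + tau * torch.log(class_priors), targets)

# Hypothetical 3-class imbalanced problem.
priors = torch.tensor([0.7, 0.2, 0.1])   # estimated from label counts
logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
loss = logit_adjusted_ce(logits, targets, priors)
```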
arXiv Detail & Related papers (2023-10-07T09:15:08Z) - Ranking with Abstention [27.3569897539488]
We introduce a novel framework of ranking with abstention, where the learner can abstain from making a prediction at some limited cost $c$.
We present a series of $H$-consistency bounds for both the family of linear functions and that of neural networks with one hidden layer.
arXiv Detail & Related papers (2023-07-05T05:37:13Z) - A Generalized Unbiased Risk Estimator for Learning with Augmented Classes [70.20752731393938]
Given unlabeled data, an unbiased risk estimator (URE) can be derived for learning with augmented classes (LAC), which can be minimized with theoretical guarantees.
We propose a generalized URE that can be equipped with arbitrary loss functions while maintaining the theoretical guarantees.
arXiv Detail & Related papers (2023-06-12T06:52:04Z) - Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning [79.83792914684985]
We prove a new identifiability result that provides conditions under which maximally sparse base-predictors yield disentangled representations.
Motivated by this theoretical result, we propose a practical approach to learn disentangled representations based on a sparsity-promoting bi-level optimization problem.
arXiv Detail & Related papers (2022-11-26T21:02:09Z) - Adversarial Robustness with Semi-Infinite Constrained Learning [177.42714838799924]
The fragility of deep learning to input perturbations has raised serious questions about its use in safety-critical domains.
We propose a hybrid Langevin Monte Carlo training approach to mitigate this issue.
We show that our approach can mitigate the trade-off between state-of-the-art performance and robustness.
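A hedged sketch of the general idea behind Langevin-style adversarial training: the inner maximization over input perturbations adds Gaussian noise to each gradient-ascent step, so it samples a distribution of perturbations rather than chasing a single worst case. The step sizes, noise scale, and $\ell_\infty$ projection below are illustrative assumptions, not the paper's exact hybrid algorithm.

```python
import torch

def langevin_perturbation(model, loss_fn, x, y, eps=8/255, step=2/255,
                          noise=1e-3, n_steps=10):
    """Noisy gradient ascent on the loss w.r.t. an input perturbation,
    projected onto the eps L-infinity ball (all values illustrative)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_steps):
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step * grad.sign() + noise * torch.randn_like(delta)
            delta.clamp_(-eps, eps)
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep inputs valid
    return delta.detach()

# Training would then take a descent step on loss_fn(model(x + delta), y).
```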
arXiv Detail & Related papers (2021-10-29T13:30:42Z) - Learning with Multiclass AUC: Theory and Algorithms [141.63211412386283]
Area under the ROC curve (AUC) is a well-known ranking metric for problems such as imbalanced learning and recommender systems.
In this paper, we take an early step toward learning multiclass scoring functions via optimizing multiclass AUC metrics.
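The paper concerns optimizing multiclass AUC during training; as background, here is how the metric itself is commonly computed, using scikit-learn's one-vs-one ("ovo") and one-vs-rest ("ovr") averaging. The scores below are made-up illustrative values.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 1, 2, 2, 1, 0])
y_score = np.array([[0.8, 0.1, 0.1],   # per-class probability estimates;
                    [0.2, 0.6, 0.2],   # each row must sum to 1
                    [0.1, 0.2, 0.7],
                    [0.3, 0.3, 0.4],
                    [0.4, 0.5, 0.1],
                    [0.6, 0.2, 0.2]])

print(roc_auc_score(y_true, y_score, multi_class="ovo"))  # pairwise average
print(roc_auc_score(y_true, y_score, multi_class="ovr"))  # one-vs-rest average
```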
arXiv Detail & Related papers (2021-07-28T05:18:10Z) - Reparameterized Variational Divergence Minimization for Stable Imitation [57.06909373038396]
We study the extent to which variations in the choice of probabilistic divergence may yield more performant imitation-learning-from-observation (ILO) algorithms.
We contribute a reparameterization trick for adversarial imitation learning to alleviate the challenges of the promising $f$-divergence minimization framework.
Empirically, we demonstrate that our design choices allow for ILO algorithms that outperform baseline approaches and more closely match expert performance in low-dimensional continuous-control tasks.
arXiv Detail & Related papers (2020-06-18T19:04:09Z)