Learning with Multiclass AUC: Theory and Algorithms
- URL: http://arxiv.org/abs/2107.13171v1
- Date: Wed, 28 Jul 2021 05:18:10 GMT
- Title: Learning with Multiclass AUC: Theory and Algorithms
- Authors: Zhiyong Yang, Qianqian Xu, Shilong Bao, Xiaochun Cao, Qingming Huang
- Abstract summary: Area under the ROC curve (AUC) is a well-known ranking metric for problems such as imbalanced learning and recommender systems.
In this paper, we make an early attempt at learning multiclass scoring functions by optimizing multiclass AUC metrics.
- Score: 141.63211412386283
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Area under the ROC curve (AUC) is a well-known ranking metric for
problems such as imbalanced learning and recommender systems. The vast majority
of existing AUC-optimization-based machine learning methods focus only on
binary-class cases, leaving multiclass cases largely unexplored. In this
paper, we make an early attempt at learning multiclass scoring functions by
optimizing multiclass AUC metrics. Our foundation is the M metric, a
well-known multiclass extension of AUC. We first revisit this metric, showing
that it can eliminate the imbalance issue arising from minority class pairs.
Motivated by this, we propose an
empirical surrogate risk minimization framework to approximately optimize the M
metric. Theoretically, we show that: (i) optimizing most of the popular
differentiable surrogate losses suffices to reach the Bayes optimal scoring
function asymptotically; (ii) the training framework enjoys an imbalance-aware
generalization error bound, which pays more attention to the bottleneck samples
of minority classes compared with the traditional $O(\sqrt{1/N})$ result.
Practically, to address the poor scalability of the pairwise computations, we
propose acceleration methods for three popular surrogate loss functions,
namely the exponential, squared, and hinge losses, to speed up loss
and gradient evaluations. Finally, experimental results on 11 real-world
datasets demonstrate the effectiveness of our proposed framework.
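To make the two key ingredients of the abstract concrete, here is a minimal NumPy sketch: a Hand-and-Till-style M metric computed as the average of pairwise AUCs (the paper's M metric follows this general pattern, though its exact weighting may differ), and the factorization trick that lets the pairwise exponential loss be evaluated in linear rather than quadratic time. All function names are illustrative, not taken from the paper's code.

```python
import numpy as np

def pairwise_auc(pos, neg):
    # Fraction of (pos, neg) score pairs ranked correctly; ties count 0.5.
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def m_metric(scores, labels):
    # Hand-and-Till-style M metric: average pairwise AUC over ordered
    # class pairs (a, b), using column a of `scores` to separate a from b.
    # `scores` has shape (n_samples, n_classes).
    classes = np.unique(labels)
    aucs = [
        pairwise_auc(scores[labels == a, a], scores[labels == b, a])
        for a in classes for b in classes if a != b
    ]
    return float(np.mean(aucs))

def exp_pairwise_loss_naive(pos, neg):
    # O(n_pos * n_neg): sum of exp(-(s_i - s_j)) over all score pairs.
    return float(np.exp(-(pos[:, None] - neg[None, :])).sum())

def exp_pairwise_loss_fast(pos, neg):
    # Same value in O(n_pos + n_neg): the exponential factorizes, so the
    # double sum collapses into a product of two single sums.
    return float(np.exp(-pos).sum() * np.exp(neg).sum())
```

For a perfectly separated problem `m_metric` returns 1.0, and the two loss routines agree to floating-point precision; this factorization is the kind of speed-up the abstract refers to for the exponential loss (the squared and hinge losses admit analogous but different decompositions).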
Related papers
- Predictor-Rejector Multi-Class Abstention: Theoretical Analysis and Algorithms [30.389055604165222]
We study the key framework of learning with abstention in the multi-class classification setting.
In this setting, the learner can choose to abstain from making a prediction with some pre-defined cost.
We introduce several new families of surrogate losses for which we prove strong non-asymptotic and hypothesis set-specific consistency guarantees.
arXiv Detail & Related papers (2023-10-23T10:16:27Z)
- Majorization-Minimization for sparse SVMs [46.99165837639182]
Support Vector Machines (SVMs) were introduced several decades ago for performing binary classification tasks in a supervised framework.
They often outperform other supervised methods and remain one of the most popular approaches in the machine learning arena.
In this work, we investigate training SVMs by minimizing a smooth, sparsity-promoting regularized squared hinge loss.
arXiv Detail & Related papers (2023-08-31T17:03:16Z)
- Learning Prompt-Enhanced Context Features for Weakly-Supervised Video Anomaly Detection [37.99031842449251]
Video anomaly detection under weak supervision presents significant challenges.
We present a weakly supervised anomaly detection framework that focuses on efficient context modeling and enhanced semantic discriminability.
Our approach significantly improves the detection accuracy of certain anomaly sub-classes, underscoring its practical value and efficacy.
arXiv Detail & Related papers (2023-06-26T06:45:16Z)
- Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm [101.44676036551537]
One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC) measure the average performance of a binary classifier over a restricted range of the ROC curve.
Most existing methods can only optimize PAUC approximately, leading to inevitable, uncontrollable biases.
We present a simpler reformulation of the PAUC problem via distributionally robust optimization.
arXiv Detail & Related papers (2022-10-08T08:26:22Z)
- Minimax AUC Fairness: Efficient Algorithm with Provable Convergence [35.045187964671335]
We propose a minimax learning and bias mitigation framework that incorporates both intra-group and inter-group AUCs while maintaining utility.
Based on this framework, we design an efficient optimization algorithm and prove its convergence to the minimum group-level AUC.
arXiv Detail & Related papers (2022-08-22T17:11:45Z)
- Balanced Self-Paced Learning for AUC Maximization [88.53174245457268]
Existing self-paced methods are limited to pointwise AUC.
Our algorithm converges to a stationary point on the basis of closed-form solutions.
arXiv Detail & Related papers (2022-07-08T02:09:32Z)
- Optimizing Two-way Partial AUC with an End-to-end Framework [154.47590401735323]
Area Under the ROC Curve (AUC) is a crucial metric for machine learning.
Recent work shows that the TPAUC is essentially inconsistent with the existing Partial AUC metrics.
In this paper, we present the first attempt to optimize this new metric.
arXiv Detail & Related papers (2022-06-23T12:21:30Z)
- Tight Mutual Information Estimation With Contrastive Fenchel-Legendre Optimization [69.07420650261649]
We introduce FLO, a novel, simple, and powerful contrastive MI estimator.
Empirically, our FLO estimator overcomes the limitations of its predecessors and learns more efficiently.
The utility of FLO is verified using an extensive set of benchmarks, which also reveals the trade-offs in practical MI estimation.
arXiv Detail & Related papers (2021-07-02T15:20:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.