Weakly Supervised AUC Optimization: A Unified Partial AUC Approach
- URL: http://arxiv.org/abs/2305.14258v2
- Date: Wed, 27 Mar 2024 05:45:37 GMT
- Title: Weakly Supervised AUC Optimization: A Unified Partial AUC Approach
- Authors: Zheng Xie, Yu Liu, Hao-Yuan He, Ming Li, Zhi-Hua Zhou
- Abstract summary: We present WSAUC, a unified framework for weakly supervised AUC optimization problems.
We first frame the AUC optimization problems in various weakly supervised scenarios as a common formulation of minimizing the AUC risk on contaminated sets.
We then introduce a new type of partial AUC, specifically, the reversed partial AUC (rpAUC), which serves as a robust training objective for AUC in the presence of contaminated labels.
- Score: 53.59993683627623
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since acquiring perfect supervision is usually difficult, real-world machine learning tasks often confront inaccurate, incomplete, or inexact supervision, collectively referred to as weak supervision. In this work, we present WSAUC, a unified framework for weakly supervised AUC optimization problems, which covers noisy label learning, positive-unlabeled learning, multi-instance learning, and semi-supervised learning scenarios. Within the WSAUC framework, we first frame the AUC optimization problems in various weakly supervised scenarios as a common formulation of minimizing the AUC risk on contaminated sets, and demonstrate that the empirical risk minimization problems are consistent with the true AUC. Then, we introduce a new type of partial AUC, specifically, the reversed partial AUC (rpAUC), which serves as a robust training objective for AUC maximization in the presence of contaminated labels. WSAUC offers a universal solution for AUC optimization in various weakly supervised scenarios by maximizing the empirical rpAUC. Theoretical and experimental results under multiple settings support the effectiveness of WSAUC on a range of weakly supervised AUC optimization tasks.
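To make the training objective concrete, below is a minimal PyTorch sketch of a pairwise surrogate for the empirical AUC risk, together with an rpAUC-style variant that discards the top-ranked fraction of negatives as likely contaminated. The squared-hinge surrogate and the `beta` keep-fraction are illustrative assumptions based on the abstract, not the paper's exact formulation.

```python
import torch

def pairwise_auc_risk(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    """Empirical AUC risk with a squared-hinge surrogate: penalize each
    (positive, negative) pair whose score gap falls below the margin."""
    diff = pos_scores.unsqueeze(1) - neg_scores.unsqueeze(0)  # (n_pos, n_neg) pairwise gaps
    return torch.clamp(1.0 - diff, min=0.0).pow(2).mean()

def reversed_partial_auc_risk(pos_scores: torch.Tensor, neg_scores: torch.Tensor,
                              beta: float = 0.7) -> torch.Tensor:
    """Illustrative rpAUC-style risk: keep only the beta fraction of
    lowest-scoring negatives, dropping top-ranked negatives that are the
    most plausible label contaminations. The exact rpAUC definition is
    given in the paper; this sketch only conveys the idea."""
    k = max(1, int(beta * neg_scores.numel()))
    kept_neg, _ = torch.topk(neg_scores, k, largest=False)  # least suspicious negatives
    return pairwise_auc_risk(pos_scores, kept_neg)
```

Maximizing AUC then amounts to minimizing this risk by gradient descent on the scoring model, with the robust variant substituted whenever the negative set may contain mislabeled positives.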
Related papers
- On the Effectiveness of Supervision in Asymmetric Non-Contrastive Learning [5.123232962822044]
Asymmetric non-contrastive learning (ANCL) often outperforms its contrastive learning counterpart in self-supervised representation learning.
We study ANCL for supervised representation learning, coined SupSiam and SupBYOL, leveraging labels in ANCL to achieve better representations.
Our analysis reveals that providing supervision to ANCL reduces intra-class variance, and the contribution of supervision should be adjusted to achieve the best performance.
arXiv Detail & Related papers (2024-06-16T06:43:15Z)
- DRAUC: An Instance-wise Distributionally Robust AUC Optimization Framework [133.26230331320963]
Area Under the ROC Curve (AUC) is a widely employed metric in long-tailed classification scenarios.
We propose an instance-wise surrogate loss of Distributionally Robust AUC (DRAUC) and build our optimization framework on top of it.
arXiv Detail & Related papers (2023-11-06T12:15:57Z)
- AUC Optimization from Multiple Unlabeled Datasets [14.318887072787938]
We propose U^m-AUC, an AUC optimization approach that converts the U^m data into a multi-label AUC optimization problem.
We show that the proposed U$m$-AUC is effective theoretically and empirically.
arXiv Detail & Related papers (2023-05-25T06:43:42Z)
- Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm [101.44676036551537]
One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC) measure the average performance of a binary classifier over a restricted operating range of the ROC curve (a computational sketch follows this list).
Most existing methods can only optimize PAUC approximately, leading to inevitable, uncontrollable biases.
We present a simpler reformulation of the PAUC problem via distributionally robust optimization.
arXiv Detail & Related papers (2022-10-08T08:26:22Z)
- Minimax AUC Fairness: Efficient Algorithm with Provable Convergence [35.045187964671335]
We propose a minimax learning and bias mitigation framework that incorporates both intra-group and inter-group AUCs while maintaining utility.
Based on this framework, we design an efficient optimization algorithm and prove its convergence to the minimum group-level AUC.
arXiv Detail & Related papers (2022-08-22T17:11:45Z)
- Balanced Self-Paced Learning for AUC Maximization [88.53174245457268]
Existing self-paced methods are limited to pointwise learning, whereas AUC maximization is a pairwise problem.
Our algorithm converges to a stationary point on the basis of closed-form solutions.
arXiv Detail & Related papers (2022-07-08T02:09:32Z)
- AUC Maximization in the Era of Big Data and AI: A Survey [64.50025542570235]
Area under the ROC curve, a.k.a. AUC, is a measure of choice for assessing the performance of a classifier under data imbalance.
AUC maximization refers to a learning paradigm that learns a predictive model by directly maximizing its AUC score.
arXiv Detail & Related papers (2022-03-28T19:24:05Z)
- Learning with Multiclass AUC: Theory and Algorithms [141.63211412386283]
Area under the ROC curve (AUC) is a well-known ranking metric for problems such as imbalanced learning and recommender systems.
In this paper, we make an early attempt at the problem of learning multiclass scoring functions by optimizing multiclass AUC metrics.
arXiv Detail & Related papers (2021-07-28T05:18:10Z)
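For readers comparing the partial AUC variants above, the following is a minimal NumPy sketch of the one-way partial AUC, which restricts the false positive rate to [0, beta]; the tie handling and the choice of beta are illustrative assumptions.

```python
import numpy as np

def one_way_partial_auc(pos_scores: np.ndarray, neg_scores: np.ndarray,
                        beta: float = 0.3) -> float:
    """Illustrative OPAUC: the fraction of correctly ranked
    (positive, negative) pairs, counting only the top-ranked beta
    fraction of negatives, i.e., the FPR range [0, beta]."""
    neg_sorted = np.sort(neg_scores)[::-1]           # hardest negatives first
    k = max(1, int(np.ceil(beta * len(neg_sorted))))
    top_neg = neg_sorted[:k]                         # negatives within the FPR range
    diff = pos_scores[:, None] - top_neg[None, :]    # pairwise score gaps
    return float(((diff > 0) + 0.5 * (diff == 0)).mean())
```

One plausible reading of the rpAUC idea in the main paper is that it inverts this focus: rather than evaluating on the hardest negatives, training downweights or drops them, since under contaminated labels they are the examples most likely to be mislabeled.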
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.