Optimizing Two-way Partial AUC with an End-to-end Framework
- URL: http://arxiv.org/abs/2206.11655v1
- Date: Thu, 23 Jun 2022 12:21:30 GMT
- Title: Optimizing Two-way Partial AUC with an End-to-end Framework
- Authors: Zhiyong Yang, Qianqian Xu, Shilong Bao, Yuan He, Xiaochun Cao,
Qingming Huang
- Abstract summary: Area Under the ROC Curve (AUC) is a crucial metric for machine learning.
Recent work shows that the TPAUC is essentially inconsistent with the existing Partial AUC metrics.
We present the first trial in this paper to optimize this new metric.
- Score: 154.47590401735323
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Area Under the ROC Curve (AUC) is a crucial metric for machine learning,
which evaluates the average performance over all possible True Positive Rates
(TPRs) and False Positive Rates (FPRs). Based on the knowledge that a skillful
classifier should simultaneously embrace a high TPR and a low FPR, we turn to
study a more general variant called Two-way Partial AUC (TPAUC), where only the
region with $\mathsf{TPR} \ge \alpha, \mathsf{FPR} \le \beta$ is included in
the area. Moreover, recent work shows that the TPAUC is essentially
inconsistent with the existing Partial AUC metrics where only the FPR range is
restricted, opening a new problem to seek solutions to leverage high TPAUC.
Motivated by this, we present the first trial in this paper to optimize this
new metric. The critical challenge along this course lies in the difficulty of
performing gradient-based optimization with end-to-end stochastic training,
even with a proper choice of surrogate loss. To address this issue, we propose
a generic framework to construct surrogate optimization problems, which
supports efficient end-to-end training with deep learning. Moreover, our
theoretical analyses show that: 1) the objective function of the surrogate
problems will achieve an upper bound of the original problem under mild
conditions, and 2) optimizing the surrogate problems leads to good
generalization performance in terms of TPAUC with a high probability. Finally,
empirical studies over several benchmark datasets speak to the efficacy of our
framework.
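The abstract defines TPAUC as the area of the ROC region with $\mathsf{TPR} \ge \alpha, \mathsf{FPR} \le \beta$. As a rough illustration, the empirical version of this metric can be sketched via the pairwise formulation over the "hardest" examples: the lowest-scoring $(1-\alpha)$-fraction of positives and the highest-scoring $\beta$-fraction of negatives. This is an assumption-laden sketch for intuition, not the authors' exact estimator or training objective.

```python
import numpy as np

def tpauc(scores_pos, scores_neg, alpha, beta):
    """Rough empirical Two-way Partial AUC sketch.

    Assumption: restricting the ROC region to TPR >= alpha, FPR <= beta
    corresponds, pairwise, to the floor((1 - alpha) * n_pos) lowest-scoring
    positives versus the floor(beta * n_neg) highest-scoring negatives.
    Not the paper's exact estimator; for illustration only.
    """
    pos = np.sort(np.asarray(scores_pos, dtype=float))          # ascending
    neg = np.sort(np.asarray(scores_neg, dtype=float))[::-1]    # descending
    n_pos = max(1, int(np.floor((1 - alpha) * len(pos))))
    n_neg = max(1, int(np.floor(beta * len(neg))))
    hard_pos = pos[:n_pos]   # lowest-scoring (hardest) positives
    hard_neg = neg[:n_neg]   # highest-scoring (hardest) negatives
    # Fraction of correctly ranked pairs (ties count 0.5) in the region.
    diff = hard_pos[:, None] - hard_neg[None, :]
    return float(((diff > 0) + 0.5 * (diff == 0)).mean())
```

Note that this pairwise indicator is non-differentiable, which is exactly why the paper constructs surrogate optimization problems for end-to-end stochastic training.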
Related papers
- On The Global Convergence Of Online RLHF With Neural Parametrization [36.239015146313136]
Reinforcement Learning from Human Feedback (RLHF) aims to align large language models with human values.
RLHF is a three-stage process that includes supervised fine-tuning, reward learning, and policy learning.
We propose a bi-level formulation for AI alignment in parameterized settings and introduce a first-order approach to solve this problem.
arXiv Detail & Related papers (2024-10-21T03:13:35Z) - Lower-Left Partial AUC: An Effective and Efficient Optimization Metric
for Recommendation [52.45394284415614]
We propose a new optimization metric, Lower-Left Partial AUC (LLPAUC), which is computationally efficient like AUC but strongly correlates with Top-K ranking metrics.
LLPAUC considers only the partial area under the ROC curve in the Lower-Left corner to push the optimization focus on Top-K.
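For contrast with TPAUC, the lower-left region ($\mathsf{TPR} \le \alpha, \mathsf{FPR} \le \beta$) can be sketched the same way; here the assumption is that the region corresponds, pairwise, to the top-scoring $\alpha$-fraction of positives against the top-scoring $\beta$-fraction of negatives. This is a hypothetical illustration of the region, not the LLPAUC paper's estimator.

```python
import numpy as np

def llpauc(scores_pos, scores_neg, alpha, beta):
    """Hedged sketch of Lower-Left Partial AUC.

    Assumption: the lower-left ROC corner (TPR <= alpha, FPR <= beta)
    restricts the pairwise comparison to the ceil(alpha * n_pos)
    highest-scoring positives and the ceil(beta * n_neg) highest-scoring
    negatives. Illustration only; not the paper's exact formulation.
    """
    pos = np.sort(np.asarray(scores_pos, dtype=float))[::-1]  # descending
    neg = np.sort(np.asarray(scores_neg, dtype=float))[::-1]  # descending
    k_pos = max(1, int(np.ceil(alpha * len(pos))))
    k_neg = max(1, int(np.ceil(beta * len(neg))))
    top_pos, top_neg = pos[:k_pos], neg[:k_neg]
    # Fraction of correctly ranked pairs (ties count 0.5) in the corner.
    diff = top_pos[:, None] - top_neg[None, :]
    return float(((diff > 0) + 0.5 * (diff == 0)).mean())
```

Focusing on the highest-scoring negatives is what ties this corner of the ROC curve to Top-K ranking behavior, as the summary above notes.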
arXiv Detail & Related papers (2024-02-29T13:58:33Z) - PARL: A Unified Framework for Policy Alignment in Reinforcement Learning from Human Feedback [106.63518036538163]
We present a novel unified bilevel optimization-based framework, PARL, formulated to address the recently highlighted critical issue of policy alignment in reinforcement learning.
Our framework addresses these concerns by explicitly parameterizing the distribution of the upper-level alignment objective (reward design) via the lower-level optimal variable.
Our empirical results substantiate that the proposed PARL can address alignment concerns in RL, showing significant improvements.
arXiv Detail & Related papers (2023-08-03T18:03:44Z) - Asymptotically Unbiased Instance-wise Regularized Partial AUC
Optimization: Theory and Algorithm [101.44676036551537]
One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC) measure the average performance of a binary classifier within restricted TPR/FPR ranges.
Most of the existing methods could only optimize PAUC approximately, leading to inevitable biases that are not controllable.
We present a simpler reformulation of the PAUC optimization problem via distributionally robust optimization (DRO).
arXiv Detail & Related papers (2022-10-08T08:26:22Z) - Large-scale Optimization of Partial AUC in a Range of False Positive
Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximated gradient descent method based on a recent, practical envelope smoothing technique.
Our proposed algorithm can also be used to minimize the sum of some ranked range loss, which also lacks efficient solvers.
arXiv Detail & Related papers (2022-03-03T03:46:18Z) - Learning with Multiclass AUC: Theory and Algorithms [141.63211412386283]
Area under the ROC curve (AUC) is a well-known ranking metric for problems such as imbalanced learning and recommender systems.
In this paper, we start an early trial to consider the problem of learning multiclass scoring functions via optimizing multiclass AUC metrics.
arXiv Detail & Related papers (2021-07-28T05:18:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.