Stochastic Optimization of Areas Under Precision-Recall Curves with
Provable Convergence
- URL: http://arxiv.org/abs/2104.08736v5
- Date: Wed, 12 Apr 2023 22:42:58 GMT
- Title: Stochastic Optimization of Areas Under Precision-Recall Curves with
Provable Convergence
- Authors: Qi Qi, Youzhi Luo, Zhao Xu, Shuiwang Ji, Tianbao Yang
- Abstract summary: Areas under ROC (AUROC) and precision-recall curves (AUPRC) are common metrics for evaluating classification performance for imbalanced problems.
We propose a principled technical method to optimize AUPRC for deep learning.
- Score: 66.83161885378192
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Areas under ROC (AUROC) and precision-recall curves (AUPRC) are common
metrics for evaluating classification performance for imbalanced problems.
Compared with AUROC, AUPRC is a more appropriate metric for highly imbalanced
datasets. While stochastic optimization of AUROC has been studied extensively,
principled stochastic optimization of AUPRC has been rarely explored. In this
work, we propose a principled technical method to optimize AUPRC for deep
learning. Our approach is based on maximizing the averaged precision (AP),
which is an unbiased point estimator of AUPRC. We cast the objective into a sum
of {\it dependent compositional functions} with inner functions dependent on
random variables of the outer level. We propose efficient adaptive and
non-adaptive stochastic algorithms named SOAP with {\it provable convergence
guarantee under mild conditions} by leveraging recent advances in stochastic
compositional optimization. Extensive experimental results on image and graph
datasets demonstrate that our proposed method outperforms prior methods on
imbalanced problems in terms of AUPRC. To the best of our knowledge, our work
represents the first attempt to optimize AUPRC with provable convergence. The
SOAP has been implemented in the libAUC library at https://libauc.org/.
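For intuition, the "dependent compositional" structure can be made concrete. A standard surrogate form of AP replaces the ranking indicator I[s_j >= s_i] with a smooth pairwise loss; the notation below is an illustrative reconstruction, not copied verbatim from the paper:

```latex
% Surrogate AP (illustrative): n_+ positives, per-example scores s = h_w(x),
% and a smooth pairwise surrogate loss \ell in place of the indicator I[s_j >= s_i].
\[
  \widehat{\mathrm{AP}}(w)
  = \frac{1}{n_+} \sum_{i:\, y_i = 1}
      \frac{\sum_{j} \mathbb{I}[y_j = 1]\,\ell(w; x_j, x_i)}
           {\sum_{j} \ell(w; x_j, x_i)}
  = \frac{1}{n_+} \sum_{i:\, y_i = 1} f\bigl(g_i(w)\bigr),
  \qquad f(u_1, u_2) = \frac{u_1}{u_2}.
\]
```

Each inner function g_i(w) returns the (numerator, denominator) pair for anchor x_i, so it depends on the random positive example drawn at the outer level. The sketch below shows, in PyTorch, how a SOAP-style update can stabilize the inner-function values with per-positive moving averages before differentiating the outer ratio. It is a minimal sketch under assumptions, not the libAUC implementation; names such as SOAPSketch, squared_hinge, and pos_ids are hypothetical.

```python
import torch

def squared_hinge(diff, margin=1.0):
    # One common pairwise surrogate for the indicator I[s_j >= s_i]
    # (an assumption here; several surrogate choices fit this framework).
    return torch.clamp(margin + diff, min=0.0) ** 2

class SOAPSketch:
    """Sketch of a SOAP-style AP-surrogate loss (hypothetical, not the libAUC API)."""

    def __init__(self, num_positives, gamma=0.9):
        # u[i] = moving average of the inner pair (numerator_i, denominator_i)
        self.u = torch.zeros(num_positives, 2)
        self.gamma = gamma  # moving-average step size

    def loss(self, scores, labels, pos_ids):
        # scores: (B,) model outputs; labels: (B,) in {0, 1}; pos_ids: indices of
        # this batch's positives among all dataset positives (caller-maintained).
        pos_mask = labels == 1                            # batch assumed to contain positives
        s_pos = scores[pos_mask]                          # anchor scores s_i, shape (P,)
        diffs = scores.unsqueeze(0) - s_pos.unsqueeze(1)  # s_j - s_i, shape (P, B)
        sur = squared_hinge(diffs)                        # pairwise surrogate losses
        num = (sur * pos_mask.float().unsqueeze(0)).sum(dim=1)  # sum over positive j
        den = sur.sum(dim=1)                                    # sum over all j
        with torch.no_grad():  # update the inner-function estimates
            g = self.gamma
            self.u[pos_ids, 0] = (1 - g) * self.u[pos_ids, 0] + g * num
            self.u[pos_ids, 1] = (1 - g) * self.u[pos_ids, 1] + g * den
        u1 = self.u[pos_ids, 0]
        u2 = self.u[pos_ids, 1].clamp(min=1e-8)
        # Autograd on this expression yields the chain-rule gradient of -num/den
        # with the moving averages u plugged in for the inner-function values.
        return (-num / u2 + den * u1 / u2 ** 2).mean()
```

In use, a caller would compute loss = soap.loss(model(x), y, pos_ids) and call loss.backward() each step; the per-positive moving averages are what distinguish this from naively differentiating the in-batch ratio, and they are the ingredient behind the convergence guarantee the abstract claims.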
Related papers
- Learning Regions of Interest for Bayesian Optimization with Adaptive
Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- Smoothing Policy Iteration for Zero-sum Markov Games [9.158672246275348]
We propose the smoothing policy iteration (SPI) algorithm to solve zero-sum MGs approximately.
Specifically, the adversarial policy serves as the weight function, enabling efficient sampling over action spaces.
We also propose a model-based algorithm called Smooth adversarial Actor-critic (SaAC) by extending SPI with function approximations.
arXiv Detail & Related papers (2022-12-03T14:39:06Z)
- Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time
Guarantees [56.848265937921354]
Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy.
Many algorithms for IRL have an inherently nested structure.
We develop a novel single-loop algorithm for IRL that does not compromise reward estimation accuracy.
arXiv Detail & Related papers (2022-10-04T17:13:45Z)
- Exploring the Algorithm-Dependent Generalization of AUPRC Optimization
with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z)
- Large-scale Optimization of Partial AUC in a Range of False Positive
Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximated gradient descent method based on a recent practical envelope smoothing technique.
Our proposed algorithm can also be used to minimize the sum of ranked-range losses, a problem that also lacks efficient solvers.
arXiv Detail & Related papers (2022-03-03T03:46:18Z)
- Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study stochastic optimization of areas under precision-recall curves (AUPRC), a metric widely used for imbalanced tasks.
We develop novel momentum methods with a better iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z)
- From Majorization to Interpolation: Distributionally Robust Learning
using Kernel Smoothing [1.2891210250935146]
We study the function approximation aspect of distributionally robust optimization (DRO) based on probability metrics.
This paper instead proposes robust learning algorithms based on smooth function approximation and convolution.
arXiv Detail & Related papers (2021-02-16T22:25:18Z)