DRAUC: An Instance-wise Distributionally Robust AUC Optimization Framework
- URL: http://arxiv.org/abs/2311.03055v1
- Date: Mon, 6 Nov 2023 12:15:57 GMT
- Title: DRAUC: An Instance-wise Distributionally Robust AUC Optimization Framework
- Authors: Siran Dai, Qianqian Xu, Zhiyong Yang, Xiaochun Cao, Qingming Huang
- Abstract summary: Area Under the ROC Curve (AUC) is a widely employed metric in long-tailed classification scenarios.
We propose an instance-wise surrogate loss of Distributionally Robust AUC (DRAUC) and build our optimization framework on top of it.
- Score: 133.26230331320963
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Area Under the ROC Curve (AUC) is a widely employed metric in long-tailed
classification scenarios. Nevertheless, most existing methods primarily assume
that training and testing examples are drawn i.i.d. from the same distribution,
an assumption that often fails to hold in practice. Distributionally Robust Optimization
(DRO) enhances model performance by optimizing it for the local worst-case
scenario, but directly integrating AUC optimization with DRO results in an
intractable optimization problem. To tackle this challenge, we propose an
instance-wise surrogate loss for Distributionally Robust AUC (DRAUC) and build
our optimization framework on top of it. Moreover, we highlight that
conventional DRAUC may induce label bias, and we therefore introduce
distribution-aware DRAUC as a more suitable metric for robust AUC learning.
Theoretically, we
affirm that the generalization gap between the training loss and testing error
diminishes if the training set is sufficiently large. Empirically, experiments
on corrupted benchmark datasets demonstrate the effectiveness of our proposed
method. Code is available at: https://github.com/EldercatSAM/DRAUC.
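As a rough illustration of the recipe (a minimal sketch, assuming a PyTorch model that outputs one score per input; this is not the paper's exact DRAUC loss, and all names and hyperparameters are illustrative), one can pair a pairwise AUC surrogate with a DRO-style inner maximization over local input perturbations:

    import torch

    def auc_surrogate(pos_scores, neg_scores, margin=1.0):
        # Pairwise squared-hinge surrogate: penalize negatives scored
        # within `margin` of (or above) positives.
        diff = pos_scores.unsqueeze(1) - neg_scores.unsqueeze(0)  # all pairs
        return torch.clamp(margin - diff, min=0.0).pow(2).mean()

    def dro_auc_loss(model, x_pos, x_neg, eps=0.1, steps=3, step_size=0.05):
        # Inner maximization: search for worst-case perturbations in an
        # L-infinity ball of radius eps (a crude stand-in for the local
        # worst-case distribution), then evaluate the surrogate on them.
        d_pos = torch.zeros_like(x_pos, requires_grad=True)
        d_neg = torch.zeros_like(x_neg, requires_grad=True)
        for _ in range(steps):
            loss = auc_surrogate(model(x_pos + d_pos).squeeze(-1),
                                 model(x_neg + d_neg).squeeze(-1))
            g_pos, g_neg = torch.autograd.grad(loss, [d_pos, d_neg])
            with torch.no_grad():
                d_pos += step_size * g_pos.sign()
                d_neg += step_size * g_neg.sign()
                d_pos.clamp_(-eps, eps)
                d_neg.clamp_(-eps, eps)
        # Outer minimization: only the model receives gradients from this.
        return auc_surrogate(model(x_pos + d_pos.detach()).squeeze(-1),
                             model(x_neg + d_neg.detach()).squeeze(-1))

Here eps plays the role of the ambiguity-set radius; the point of the paper is precisely to avoid such an explicit, expensive inner loop by deriving a tractable instance-wise surrogate.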
Related papers
- Distributionally and Adversarially Robust Logistic Regression via Intersecting Wasserstein Balls [8.720733751119994]
Adversarially robust optimization (ARO) has become the de facto standard for training models to defend against adversarial attacks during testing.
Despite their robustness, these models often suffer from severe overfitting.
We propose two approaches to replace the empirical distribution in training with: (i) a worst-case distribution within an ambiguity set; or (ii) a mixture of the empirical distribution with one derived from an auxiliary dataset.
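In generic notation (a sketch of the standard objectives, not necessarily this paper's exact formulation), option (i) solves a Wasserstein DRO problem over a ball around the empirical distribution, while option (ii) swaps the empirical distribution for a mixture with an auxiliary one:

    \min_{\beta}\ \sup_{Q:\ W(Q,\widehat{P}_n)\le\varepsilon}\ \mathbb{E}_{Q}\big[\ell_\beta(x,y)\big]
    \qquad\text{vs.}\qquad
    \widetilde{P} = \lambda\,\widehat{P}_n + (1-\lambda)\,\widehat{P}_{\mathrm{aux}},\quad \lambda\in[0,1],

where W is a Wasserstein distance and \ell_\beta the logistic loss.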
arXiv Detail & Related papers (2024-07-18T15:59:37Z)
- Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm [101.44676036551537]
One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC) measure the average performance of a binary classifier over restricted regions of the ROC curve.
Most existing methods can only optimize the PAUC approximately, leading to inevitable and uncontrollable biases.
We present a simpler reformulation of the PAUC optimization problem via distributionally robust optimization.
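For reference, the standard definitions (in generic notation; the paper's exact normalization may differ) restrict the ROC integral to a region of interest. OPAUC caps the false positive rate at \beta, while TPAUC additionally requires a true positive rate of at least \alpha:

    \mathrm{OPAUC}(\beta) = \frac{1}{\beta}\int_{0}^{\beta}\mathrm{TPR}\big(\mathrm{FPR}^{-1}(s)\big)\,ds,
    \qquad
    \mathrm{TPAUC}(\alpha,\beta) = \text{area of the ROC curve over } \{\mathrm{TPR}\ge\alpha,\ \mathrm{FPR}\le\beta\}.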
arXiv Detail & Related papers (2022-10-08T08:26:22Z)
- AdAUC: End-to-end Adversarial AUC Optimization Against Long-tail Problems [102.95119281306893]
We present an early attempt to explore adversarial training methods for AUC optimization.
We reformulate the AUC optimization problem as a saddle point problem, where the objective becomes an instance-wise function.
Our analysis differs from existing studies in that the algorithm is asked to generate adversarial examples by calculating the gradient of a min-max problem.
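The instance-wise reformulation such work builds on is commonly credited to Ying et al. (2016): with p = Pr(y = 1), the squared-loss AUC risk can be rewritten (up to constants; sketched here in a commonly cited form) as a saddle-point problem whose integrand touches one example at a time,

    \min_{w,a,b}\ \max_{\alpha\in\mathbb{R}}\ \mathbb{E}_{(x,y)}\Big[(1-p)\big(f_w(x)-a\big)^2\,\mathbb{I}[y=1] + p\big(f_w(x)-b\big)^2\,\mathbb{I}[y=-1]
    \;+\; 2(1+\alpha)\big(p\,f_w(x)\,\mathbb{I}[y=-1]-(1-p)\,f_w(x)\,\mathbb{I}[y=1]\big) - p(1-p)\,\alpha^2\Big],

which is what makes per-instance adversarial example generation via gradients possible.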
arXiv Detail & Related papers (2022-06-24T09:13:39Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
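A hedged sketch of the general idea (the names below, e.g. ratio_net, are hypothetical, and this is not the paper's exact recipe): an adversary network produces likelihood-ratio-style weights over the batch, the model minimizes the reweighted loss, and the adversary maximizes it:

    import torch
    import torch.nn.functional as F

    def reweighted_loss(model, ratio_net, x, y, temperature=1.0):
        # Per-example losses under the model being trained.
        per_example = F.cross_entropy(model(x), y, reduction="none")
        # Softmax over the batch turns the adversary's scores into a
        # normalized, likelihood-ratio-style reweighting of the data.
        weights = torch.softmax(ratio_net(x).squeeze(-1) / temperature, dim=0)
        return (weights * per_example).sum()

    # Training alternates: descend this loss in `model`'s parameters and
    # ascend it in `ratio_net`'s parameters (two separate optimizers).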
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Learning with Multiclass AUC: Theory and Algorithms [141.63211412386283]
Area under the ROC curve (AUC) is a well-known ranking metric for problems such as imbalanced learning and recommender systems.
In this paper, we make an early attempt to address the problem of learning multiclass scoring functions via optimizing multiclass AUC metrics.
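One standard way to extend AUC to c classes, sketched here for orientation (Hand and Till's M measure; the paper studies a broader family of such metrics), averages pairwise binary AUCs over all class pairs:

    M = \frac{2}{c(c-1)}\sum_{i<j}\frac{\widehat{A}(i\mid j)+\widehat{A}(j\mid i)}{2},

where \widehat{A}(i\mid j) is the binary AUC of class i's score evaluated on examples from classes i and j.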
arXiv Detail & Related papers (2021-07-28T05:18:10Z)
- A Distributionally Robust Area Under Curve Maximization Model [1.370633147306388]
We propose two new distributionally robust AUC models (DR-AUC).
DR-AUC models rely on the Kantorovich metric and approximate the AUC with the hinge loss function.
Numerical experiments show that the proposed DR-AUC models perform better in general and, in particular, improve the worst-case out-of-sample performance.
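The hinge approximation in question is the standard pairwise surrogate (generic notation): with n_+ positives and n_- negatives,

    \widehat{L}_{\mathrm{hinge}}(f) = \frac{1}{n_+ n_-}\sum_{i:\,y_i=+1}\ \sum_{j:\,y_j=-1}\max\big(0,\ 1-(f(x_i)-f(x_j))\big),

a convex upper bound on the fraction of mis-ranked positive-negative pairs, so minimizing it approximately maximizes AUC.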
arXiv Detail & Related papers (2020-02-18T02:50:45Z)
- Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other, unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
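Schematically (a sketch in generic notation; the full ADT objective also adds an entropy regularizer on p(\delta) to encourage diverse perturbations), the single worst-case perturbation of AT is replaced by a worst-case distribution over perturbations:

    \min_{\theta}\ \mathbb{E}_{(x,y)\sim\mathcal{D}}\ \max_{p(\delta)\in\mathcal{P}}\ \mathbb{E}_{\delta\sim p(\delta)}\big[\mathcal{L}\big(f_\theta(x+\delta),\,y\big)\big].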
arXiv Detail & Related papers (2020-02-14T12:36:59Z)