AdAUC: End-to-end Adversarial AUC Optimization Against Long-tail
Problems
- URL: http://arxiv.org/abs/2206.12169v1
- Date: Fri, 24 Jun 2022 09:13:39 GMT
- Title: AdAUC: End-to-end Adversarial AUC Optimization Against Long-tail
Problems
- Authors: Wenzheng Hou, Qianqian Xu, Zhiyong Yang, Shilong Bao, Yuan He,
Qingming Huang
- Abstract summary: We present an early trial to explore adversarial training methods to optimize AUC.
We reformulate the AUC optimization problem as a saddle point problem, where the objective becomes an instance-wise function.
Our analysis differs from the existing studies since the algorithm is asked to generate adversarial examples by calculating the gradient of a min-max problem.
- Score: 102.95119281306893
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is well-known that deep learning models are vulnerable to adversarial
examples. Existing studies of adversarial training have made great progress
against this challenge. As a typical trait, they often assume that the class
distribution is balanced overall. However, long-tail datasets are ubiquitous in
a wide spectrum of applications, where the number of head-class instances far
exceeds that of the tail classes. In such a scenario, AUC is a much more
reasonable metric than accuracy since it is insensitive to the class
distribution. Motivated by this, we present an early trial to explore
adversarial training methods to optimize AUC. The main challenge lies in that
the positive and negative examples are tightly coupled in the objective
function. As a direct result, one cannot generate adversarial examples without
a full scan of the dataset. To address this issue, based on a concavity
regularization scheme, we reformulate the AUC optimization problem as a saddle
point problem, where the objective becomes an instance-wise function. This
leads to an end-to-end training protocol. Furthermore, we provide a convergence
guarantee of the proposed algorithm. Our analysis differs from the existing
studies since the algorithm is asked to generate adversarial examples by
calculating the gradient of a min-max problem. Finally, extensive
experimental results demonstrate the performance and robustness of our
algorithm on three long-tail datasets.
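To make the coupling issue concrete, the sketch below contrasts the pairwise squared AUC surrogate, which couples every (positive, negative) pair, with the classical instance-wise saddle-point reformulation of that surrogate (in the style of Ying et al., 2016, on which such end-to-end AUC methods build). This is an illustrative toy check, not the authors' AdAUC objective or implementation; the scores are synthetic and the closed-form inner optima are used in place of the alternating min-max updates.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy scores for a long-tail binary problem: few positives, many negatives.
s_pos = rng.normal(1.0, 1.0, size=20)    # f(x) for y = +1 (tail class)
s_neg = rng.normal(0.0, 1.0, size=200)   # f(x) for y = -1 (head class)

# (1) Pairwise squared AUC surrogate: every positive is coupled with every
#     negative, so perturbing one example touches the whole dataset.
pairwise = np.mean((1.0 - (s_pos[:, None] - s_neg[None, :])) ** 2)

# (2) Instance-wise saddle-point form:
#     E[(1 - (s_+ - s_-))^2] = 1 + min_{a,b} max_alpha E_z[F(a, b, alpha; z)],
#     where each sample contributes to F independently. The inner optima are
#     available in closed form: a* = E[s|+], b* = E[s|-], alpha* = b* - a*.
m_pos, m_neg = s_pos.mean(), s_neg.mean()
q_pos, q_neg = (s_pos ** 2).mean(), (s_neg ** 2).mean()
saddle = (1.0
          + (q_pos - m_pos ** 2)          # Var[s | y = +1] at a* = m_pos
          + (q_neg - m_neg ** 2)          # Var[s | y = -1] at b* = m_neg
          + 2.0 * (m_neg - m_pos)         # linear coupling term
          + (m_neg - m_pos) ** 2)         # max over alpha at alpha* = m_neg - m_pos

# The two formulations evaluate to the same objective value.
assert np.isclose(pairwise, saddle)
print(f"pairwise = {pairwise:.6f}, saddle = {saddle:.6f}")
```

Because the saddle-point form decomposes over individual instances, adversarial examples can be generated per sample from the gradient of the min-max objective, without scanning all opposite-class examples.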
Related papers
- Provable Optimization for Adversarial Fair Self-supervised Contrastive Learning [49.417414031031264]
This paper studies learning fair encoders in a self-supervised learning setting.
All data are unlabeled and only a small portion of them are annotated with sensitive attributes.
arXiv Detail & Related papers (2024-06-09T08:11:12Z)
- DRAUC: An Instance-wise Distributionally Robust AUC Optimization Framework [133.26230331320963]
Area Under the ROC Curve (AUC) is a widely employed metric in long-tailed classification scenarios.
We propose an instance-wise surrogate loss of Distributionally Robust AUC (DRAUC) and build our optimization framework on top of it.
arXiv Detail & Related papers (2023-11-06T12:15:57Z)
- Neural Collapse Terminus: A Unified Solution for Class Incremental Learning and Its Variants [166.916517335816]
In this paper, we offer a unified solution to the misalignment dilemma in the three tasks.
We propose neural collapse terminus that is a fixed structure with the maximal equiangular inter-class separation for the whole label space.
Our method holds the neural collapse optimality in an incremental fashion regardless of data imbalance or data scarcity.
arXiv Detail & Related papers (2023-08-03T13:09:59Z)
- Pairwise Learning via Stagewise Training in Proximal Setting [0.0]
We combine adaptive sample size and importance sampling techniques for pairwise learning, with convergence guarantees for nonsmooth convex pairwise loss functions.
We demonstrate that sampling opposite instances at each iteration reduces the variance of the gradient, hence accelerating convergence.
arXiv Detail & Related papers (2022-08-08T11:51:01Z)
- Task-Agnostic Robust Representation Learning [31.818269301504564]
We study the problem of robust representation learning with unlabeled data in a task-agnostic manner.
We derive an upper bound on the adversarial loss of a prediction model on any downstream task, using its loss on the clean data and a robustness regularizer.
Our method achieves preferable adversarial performance compared to relevant baselines.
arXiv Detail & Related papers (2022-03-15T02:05:11Z)
- Relieving Long-tailed Instance Segmentation via Pairwise Class Balance [85.53585498649252]
Long-tailed instance segmentation is a challenging task due to the extreme imbalance of training samples among classes.
It causes severe bias of the head classes (with majority samples) against the tail ones.
We propose a novel Pairwise Class Balance (PCB) method, built upon a confusion matrix which is updated during training to accumulate the ongoing prediction preferences.
arXiv Detail & Related papers (2022-01-08T07:48:36Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- The Devil is the Classifier: Investigating Long Tail Relation Classification with Decoupling Analysis [36.298869931803836]
Long-tailed relation classification is a challenging problem as the head classes may dominate the training phase.
We propose a robust classifier with attentive relation routing, which assigns soft weights by automatically aggregating the relations.
arXiv Detail & Related papers (2020-09-15T12:47:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of these summaries (including all information) and is not responsible for any consequences.