Evolutionary Multitasking AUC Optimization
- URL: http://arxiv.org/abs/2201.01145v1
- Date: Tue, 4 Jan 2022 14:14:13 GMT
- Title: Evolutionary Multitasking AUC Optimization
- Authors: Chao Wang, Kai Wu, Jing Liu
- Abstract summary: This paper develops an evolutionary multitasking framework (termed EMTAUC) to make full use of information among the constructed cheap and expensive tasks to obtain higher performance.
The performance of the proposed method is evaluated on binary classification datasets.
- Score: 10.279426529746667
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning to optimize the area under the receiver operating characteristics
curve (AUC) performance for imbalanced data has attracted much attention in
recent years. Although there have been several methods of AUC optimization,
scaling up AUC optimization is still an open issue due to its pairwise learning
style. Maximizing AUC in the large-scale dataset can be considered as a
non-convex and expensive problem. Inspired by the characteristic of pairwise
learning, the cheap AUC optimization task with a small-scale dataset sampled
from the large-scale dataset is constructed to promote the AUC accuracy of the
original, large-scale, and expensive AUC optimization task. This paper develops
an evolutionary multitasking framework (termed EMTAUC) to make full use of
information among the constructed cheap and expensive tasks to obtain higher
performance. In EMTAUC, one mission is to optimize AUC from the sampled
dataset, and the other is to maximize AUC from the original dataset. Moreover,
due to the cheap task containing limited knowledge, a strategy for dynamically
adjusting the data structure of inexpensive tasks is proposed to introduce more
knowledge into the multitasking AUC optimization environment. The performance
of the proposed method is evaluated on a series of binary classification
datasets. The experimental results demonstrate that EMTAUC is highly
competitive to single task methods and online methods. Supplementary materials
and source code implementation of EMTAUC can be accessed at
https://github.com/xiaofangxd/EMTAUC.
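The abstract's central observation is that AUC is a pairwise statistic, so evaluating it on the full dataset costs time proportional to the number of positive-negative pairs, while the same statistic on a small subsample (the "cheap" task) is far less expensive. The sketch below is not taken from the EMTAUC source code; it is a minimal illustration of the pairwise AUC definition and of the cheap/expensive task construction, with the subsample size (50 of 200) and the synthetic scoring model chosen arbitrarily for demonstration.

```python
import numpy as np

def pairwise_auc(scores, labels):
    """Exact AUC as the fraction of correctly ordered positive/negative pairs.

    The double loop over all positive-negative pairs is what makes pairwise
    AUC optimization hard to scale: cost grows as O(n_pos * n_neg).
    """
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count pairs where the positive outscores the negative (ties count half).
    correct = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return correct / (len(pos) * len(neg))

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
# Informative synthetic scores: positives shifted up by one noise unit.
scores = labels + rng.normal(scale=1.0, size=200)

# Expensive task: AUC over the full dataset (all ~n_pos * n_neg pairs).
full_auc = pairwise_auc(scores, labels)

# Cheap task: the same statistic on a small random subsample, mirroring the
# small-scale task the paper constructs to guide the large-scale one.
idx = rng.choice(200, size=50, replace=False)
cheap_auc = pairwise_auc(scores[idx], labels[idx])
```

In EMTAUC the two tasks are optimized jointly so that solutions found on the cheap task transfer to the expensive one; the sketch only shows why the cheap evaluation is so much less costly (roughly (50/200)^2 of the pair comparisons here).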
Related papers
- Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate [105.86576388991713]
We introduce a normalized gradient difference (NGDiff) algorithm, enabling us to have better control over the trade-off between the objectives.
We provide a theoretical analysis and empirically demonstrate the superior performance of NGDiff among state-of-the-art unlearning methods on the TOFU and MUSE datasets.
arXiv Detail & Related papers (2024-10-29T14:41:44Z)
- AUCSeg: AUC-oriented Pixel-level Long-tail Semantic Segmentation [88.50256898176269]
We develop a pixel-level AUC loss function and conduct a dependency-graph-based theoretical analysis of the algorithm's generalization ability.
We also design a Tail-Classes Memory Bank to manage the significant memory demand.
arXiv Detail & Related papers (2024-09-30T15:31:02Z) - DRAUC: An Instance-wise Distributionally Robust AUC Optimization
Framework [133.26230331320963]
Area Under the ROC Curve (AUC) is a widely employed metric in long-tailed classification scenarios.
We propose an instance-wise surrogate loss of Distributionally Robust AUC (DRAUC) and build our optimization framework on top of it.
arXiv Detail & Related papers (2023-11-06T12:15:57Z) - A Meta-Learning Based Precoder Optimization Framework for Rate-Splitting
Multiple Access [53.191806757701215]
We propose the use of a meta-learning based precoder optimization framework to directly optimize the Rate-Splitting Multiple Access (RSMA) precoders with partial Channel State Information at the Transmitter (CSIT).
By exploiting the overfitting of the compact neural network to maximize the explicit Average Sum-Rate (ASR) expression, we effectively bypass the need for any other training data while minimizing the total running time.
Numerical results reveal that the meta-learning based solution achieves similar ASR performance to conventional precoder optimization in medium-scale scenarios, and significantly outperforms sub-optimal low complexity precoder algorithms in large-scale scenarios.
arXiv Detail & Related papers (2023-05-25T15:46:20Z)
- Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning [57.83232242068982]
Data augmentation (DA) is a crucial technique for enhancing the sample efficiency of visual reinforcement learning (RL) algorithms.
It remains unclear which attributes of DA account for its effectiveness in achieving sample-efficient visual RL.
This work conducts comprehensive experiments to assess the impact of DA's attributes on its efficacy.
arXiv Detail & Related papers (2023-05-25T15:46:20Z)
- AUC Optimization from Multiple Unlabeled Datasets [14.318887072787938]
We propose U$^m$-AUC, an AUC optimization approach that converts the U$^m$ data into a multi-label AUC optimization problem.
We show that the proposed U$^m$-AUC is effective theoretically and empirically.
arXiv Detail & Related papers (2023-05-25T06:43:42Z)
- Balanced Self-Paced Learning for AUC Maximization [88.53174245457268]
Existing self-paced methods are limited to pointwise AUC.
Our algorithm converges to a stationary point on the basis of closed-form solutions.
arXiv Detail & Related papers (2022-07-08T02:09:32Z)
- Online AUC Optimization for Sparse High-Dimensional Datasets [32.77252579365118]
Area Under the ROC Curve (AUC) is a widely used performance measure for imbalanced classification.
Current online AUC optimization algorithms have high per-iteration cost $\mathcal{O}(d)$.
We propose a new algorithm, FTRL-AUC, which can process data in an online fashion with a much cheaper per-iteration cost.
arXiv Detail & Related papers (2020-09-23T00:50:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.