Communication-Efficient Federated AUC Maximization with Cyclic Client Participation
- URL: http://arxiv.org/abs/2601.01649v1
- Date: Sun, 04 Jan 2026 19:57:41 GMT
- Title: Communication-Efficient Federated AUC Maximization with Cyclic Client Participation
- Authors: Umesh Vangapally, Wenhan Wu, Chen Chen, Zhishuai Guo
- Abstract summary: Federated AUC maximization is a powerful approach for learning from imbalanced data in federated learning (FL). We consider general pairwise AUC losses and establish a communication complexity of $O(1/ε^3)$ and an iteration complexity of $O(1/ε^4)$. Under the PL condition, these improve to a communication complexity of $\widetilde{O}(1/ε^{1/2})$ and an iteration complexity of $\widetilde{O}(1/ε)$.
- Score: 14.535377860461727
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated AUC maximization is a powerful approach for learning from imbalanced data in federated learning (FL). However, existing methods typically assume full client availability, which is rarely practical. In real-world FL systems, clients often participate in a cyclic manner: joining training according to a fixed, repeating schedule. This setting poses unique optimization challenges for the non-decomposable AUC objective. This paper addresses these challenges by developing and analyzing communication-efficient algorithms for federated AUC maximization under cyclic client participation. We investigate two key settings: First, we study AUC maximization with a squared surrogate loss, which reformulates the problem as a nonconvex-strongly-concave minimax optimization. By leveraging the Polyak-Łojasiewicz (PL) condition, we establish a state-of-the-art communication complexity of $\widetilde{O}(1/ε^{1/2})$ and iteration complexity of $\widetilde{O}(1/ε)$. Second, we consider general pairwise AUC losses. We establish a communication complexity of $O(1/ε^3)$ and an iteration complexity of $O(1/ε^4)$. Further, under the PL condition, these bounds improve to communication complexity of $\widetilde{O}(1/ε^{1/2})$ and iteration complexity of $\widetilde{O}(1/ε)$. Extensive experiments on benchmark tasks in image classification, medical imaging, and fraud detection demonstrate the superior efficiency and effectiveness of our proposed methods.
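As a purely illustrative sketch of the objects the abstract refers to (not the paper's actual algorithms), the snippet below shows an empirical pairwise AUC, a pairwise squared surrogate for $1 - \mathrm{AUC}$, and a fixed, repeating cyclic participation schedule; all function names and the `margin` parameter are this sketch's own assumptions.

```python
import numpy as np

def empirical_auc(scores, labels):
    """Fraction of (positive, negative) pairs ranked correctly."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return float(np.mean(pos[:, None] > neg[None, :]))

def pairwise_sq_surrogate(scores, labels, margin=1.0):
    """Pairwise squared surrogate for 1 - AUC: the mean over all
    (positive, negative) pairs of (margin - (s_pos - s_neg))^2."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return float(np.mean((margin - (pos[:, None] - neg[None, :])) ** 2))

def cyclic_schedule(num_clients, num_groups):
    """Fixed, repeating participation schedule: clients are split into
    cohorts that join training round-robin, one cohort per round."""
    cohorts = [list(range(g, num_clients, num_groups))
               for g in range(num_groups)]
    round_id = 0
    while True:
        yield cohorts[round_id % num_groups]
        round_id += 1
```

Because the surrogate couples positive and negative examples held by different clients, it is non-decomposable across clients, which is the source of the optimization difficulty the paper studies under cyclic participation.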
Related papers
- Closing the Approximation Gap of Partial AUC Optimization: A Tale of Two Formulations [121.39938773554523]
The Area Under the ROC Curve (AUC) is a pivotal evaluation metric in real-world scenarios with both class imbalance and decision constraints. We present two simple instance-wise minimax reformulations to close the approximation gap of PAUC optimization. The resulting algorithms enjoy a linear per-iteration computational complexity w.r.t. the sample size and a convergence rate of $O(\epsilon^{-2/3})$ for typical one-way and two-way PAUCs.
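For readers unfamiliar with partial AUC (PAUC), a minimal illustrative definition of the one-way variant is sketched below: it counts correctly ranked (positive, negative) pairs while restricting attention to the hardest negatives, i.e. those inside a false-positive-rate budget. This is a generic empirical estimator, not the paper's minimax reformulation.

```python
import numpy as np

def partial_auc(scores, labels, max_fpr=0.3):
    """Empirical one-way partial AUC: fraction of correctly ranked
    (positive, negative) pairs, counting only negatives whose scores
    fall in the top `max_fpr` fraction (the hardest negatives)."""
    pos = scores[labels == 1]
    neg = np.sort(scores[labels == 0])[::-1]       # descending: hardest first
    k = max(1, int(np.ceil(max_fpr * len(neg))))   # negatives in the FPR budget
    hard_neg = neg[:k]
    return float(np.mean(pos[:, None] > hard_neg[None, :]))
```

Restricting the pair set to the top-scored negatives is what makes PAUC harder to optimize than full AUC: the active set of negatives changes with the model, which motivates the minimax reformulations discussed above.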
arXiv Detail & Related papers (2025-12-01T02:52:33Z) - Rate optimal learning of equilibria from data [63.14746189846806]
We close theoretical gaps in Multi-Agent Imitation Learning (MAIL) by characterizing the limits of non-interactive MAIL and presenting the first interactive algorithm with near-optimal sample complexity. For the interactive setting, we introduce a framework that combines reward-free reinforcement learning with interactive MAIL and instantiate it with an algorithm, MAIL-WARM. We provide numerical results that support our theory and illustrate, in environments such as grid worlds, where Behavior Cloning fails to learn.
arXiv Detail & Related papers (2025-10-10T12:28:35Z) - Towards a Sharp Analysis of Offline Policy Learning for $f$-Divergence-Regularized Contextual Bandits [49.96531901205305]
We analyze $f$-divergence-regularized offline policy learning. For reverse Kullback-Leibler (KL) divergence, we give the first $\widetilde{O}(\epsilon^{-1})$ sample complexity under single-policy concentrability. We extend our analysis to dueling bandits, and we believe these results take a significant step toward a comprehensive understanding of $f$-divergence-regularized policy learning.
arXiv Detail & Related papers (2025-02-09T22:14:45Z) - AUCSeg: AUC-oriented Pixel-level Long-tail Semantic Segmentation [88.50256898176269]
We develop a pixel-level AUC loss function and conduct a dependency-graph-based theoretical analysis of the algorithm's generalization ability.
We also design a Tail-Classes Memory Bank to manage the significant memory demand.
arXiv Detail & Related papers (2024-09-30T15:31:02Z) - Momentum-Based Federated Reinforcement Learning with Interaction and Communication Efficiency [16.002770483584694]
Federated Reinforcement Learning (FRL) has garnered increasing attention.
In this paper, we introduce a new FRL algorithm, named $\texttt{MFPO}$.
We prove that, with a proper selection of momentum parameters and interaction frequency, $\texttt{MFPO}$ can achieve interaction and communication complexities of $\widetilde{\mathcal{O}}(H N^{-1}\epsilon^{-3/2})$ and $\widetilde{\mathcal{O}}(\epsilon^{-1})$, respectively.
arXiv Detail & Related papers (2024-05-24T03:23:37Z) - Solving a Class of Non-Convex Minimax Optimization in Federated Learning [84.98927714326908]
Minimax problems arise throughout machine learning applications, ranging from model training to large-scale learning.
We propose a class of algorithms for nonconvex minimax problems that reduce the sample complexity to $O(\varepsilon^{-6})$.
We prove that FedSGDA-M has the best known sample complexity of $O(\kappa^{3}N^{-1}\varepsilon^{-3})$ and the best-known communication complexity of $O(\kappa^{2}\varepsilon^{-2})$.
arXiv Detail & Related papers (2023-10-05T15:48:41Z) - Provably Efficient Adversarial Imitation Learning with Unknown Transitions [24.70187647541753]
Imitation learning (IL) has proven to be an effective method for learning good policies from expert demonstrations.
This paper explores the theoretical underpinnings of AIL in the presence of unknown transitions.
We propose an algorithm, MB-TAIL, that achieves the minimax optimal expert sample complexity of $\widetilde{O}(H^{3/2}|S|/\varepsilon)$ and interaction complexity of $\widetilde{O}(H^{3}|S|^{2}|A|/\varepsilon^{2})$.
arXiv Detail & Related papers (2023-06-11T02:46:41Z) - Communication-Efficient Federated Bilevel Optimization with Local and Global Lower Level Problems [118.00379425831566]
We propose a communication-efficient algorithm, named FedBiOAcc.
We prove that FedBiOAcc-Local converges at the same rate for this type of problem.
Empirical results show superior performance of our algorithms.
arXiv Detail & Related papers (2023-02-13T21:28:53Z) - SAGDA: Achieving $\mathcal{O}(\epsilon^{-2})$ Communication Complexity in Federated Min-Max Learning [9.001405567602745]
We show that SAGDA achieves a linear speedup in terms of both the number of clients and local update steps.
We also advance the current understanding on communication complexity of the standard FSGDA method for federated min-max learning.
arXiv Detail & Related papers (2022-10-02T20:04:50Z) - INTERACT: Achieving Low Sample and Communication Complexities in Decentralized Bilevel Learning over Networks [24.02913189682224]
Decentralized bilevel optimization problems have received increasing attention in the networking and machine learning communities.
Low sample and communication complexities are two fundamental challenges that remain under-explored.
Our work is the first to achieve both low sample and communication complexities for solving decentralized bilevel optimization problems over networks.
arXiv Detail & Related papers (2022-07-27T04:19:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.