Adaptive Client Sampling in Federated Learning via Online Learning with Bandit Feedback
- URL: http://arxiv.org/abs/2112.14332v5
- Date: Thu, 30 Jan 2025 00:04:31 GMT
- Title: Adaptive Client Sampling in Federated Learning via Online Learning with Bandit Feedback
- Authors: Boxin Zhao, Lingxiao Wang, Ziqi Liu, Zhiqiang Zhang, Jun Zhou, Chaochao Chen, Mladen Kolar
- Abstract summary: Client sampling plays an important role in federated learning (FL) systems as it affects the convergence rate of optimization algorithms.
We propose an online stochastic mirror descent (OSMD) algorithm designed to minimize the sampling variance.
We show how our sampling method can improve the convergence speed of federated optimization algorithms over the widely used uniform sampling.
- Score: 31.826205004616227
- License:
- Abstract: Due to the high cost of communication, federated learning (FL) systems need to sample a subset of clients that are involved in each round of training. As a result, client sampling plays an important role in FL systems as it affects the convergence rate of optimization algorithms used to train machine learning models. Despite its importance, there is limited work on how to sample clients effectively. In this paper, we cast client sampling as an online learning task with bandit feedback, which we solve with an online stochastic mirror descent (OSMD) algorithm designed to minimize the sampling variance. We then theoretically show how our sampling method can improve the convergence speed of federated optimization algorithms over the widely used uniform sampling. Through both simulated and real data experiments, we empirically illustrate the advantages of the proposed client sampling algorithm over uniform sampling and existing online learning-based sampling strategies. The proposed adaptive sampling procedure is applicable beyond the FL problem studied here and can be used to improve the performance of stochastic optimization procedures such as stochastic gradient descent and stochastic coordinate descent.
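The abstract describes the sampler only at a high level. As a hedged illustration (the class name, hyperparameters, and exact loss below are assumptions, not the authors' implementation), the sketch shows what an OSMD-style adaptive client sampler could look like: it tracks a sampling distribution over clients, receives bandit feedback only for the clients it actually sampled, and takes a multiplicative-weights (negative-entropy mirror map) step toward probabilities that shrink the variance proxy sum_i ||Delta_i||^2 / p_i.

```python
import numpy as np

class OSMDClientSampler:
    """Rough sketch of an online-stochastic-mirror-descent client sampler.

    Keeps a sampling distribution p over N clients and updates it from bandit
    feedback (update norms of the sampled clients only), aiming to shrink the
    variance of the importance-weighted aggregate.
    """

    def __init__(self, num_clients, lr=0.1, p_min_frac=0.1, seed=0):
        self.n = num_clients
        self.lr = lr
        self.p_min = p_min_frac / num_clients  # floor so no client is starved
        self.p = np.full(num_clients, 1.0 / num_clients)
        self.rng = np.random.default_rng(seed)

    def sample(self, k):
        """Draw k client indices with replacement from the current distribution."""
        return self.rng.choice(self.n, size=k, p=self.p)

    def update(self, sampled_ids, update_norms):
        """Mirror-descent step from bandit feedback.

        For the variance proxy L(p) = sum_i a_i^2 / p_i (a_i = ||Delta_i||),
        the gradient is -a_i^2 / p_i^2; only sampled coordinates are observed,
        so each is importance-weighted by 1 / p_i (constants absorbed by lr).
        """
        grad = np.zeros(self.n)
        for cid, a in zip(sampled_ids, update_norms):
            grad[cid] -= (a ** 2) / (self.p[cid] ** 3)
        # Negative-entropy mirror map => multiplicative-weights update.
        z = np.clip(-self.lr * grad, -50.0, 50.0)  # guard against overflow
        w = self.p * np.exp(z)
        p = w / w.sum()
        p = np.maximum(p, self.p_min)  # clip away from zero, then renormalise
        self.p = p / p.sum()
```

In a FedAvg-style loop one would call sample(k) each round, weight each selected client's update by 1/(k * p[i]) so the aggregate stays unbiased, and pass the observed update norms back through update().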
Related papers
- Over-the-Air Fair Federated Learning via Multi-Objective Optimization [52.295563400314094]
We propose an over-the-air fair federated learning algorithm (OTA-FFL) to train fair FL models.
Experiments demonstrate the superiority of OTA-FFL in achieving fairness and robust performance.
arXiv Detail & Related papers (2025-01-06T21:16:51Z)
- Adaptive teachers for amortized samplers [76.88721198565861]
Amortized inference is the task of training a parametric model, such as a neural network, to approximate a distribution with a given unnormalized density where exact sampling is intractable.
Off-policy RL training facilitates the discovery of diverse, high-reward candidates, but existing methods still face challenges in efficient exploration.
We propose an adaptive training distribution (the Teacher) to guide the training of the primary amortized sampler (the Student) by prioritizing high-loss regions.
arXiv Detail & Related papers (2024-10-02T11:33:13Z)
- Adaptive Federated Learning in Heterogeneous Wireless Networks with Independent Sampling [15.027267764009052]
Federated Learning (FL) algorithms sample a random subset of clients to address the straggler issue and improve communication efficiency.
Recent works have proposed various client sampling methods, but they have limitations in jointly handling system and data heterogeneity.
We propose a new independent client sampling strategy to minimize the wall-clock time of FL.
arXiv Detail & Related papers (2024-02-15T16:51:38Z)
- Enhanced Federated Optimization: Adaptive Unbiased Client Sampling with Reduced Variance [37.646655530394604]
Federated Learning (FL) is a distributed learning paradigm to train a global model across multiple devices without collecting local data.
We present the first adaptive client sampler, K-Vib, employing an independent sampling procedure.
K-Vib achieves a linear speed-up on the regret bound $\tilde{\mathcal{O}}\big(N^{\frac{1}{3}}T^{\frac{2}{3}}/K^{\frac{4}{3}}\big)$ within a set communication budget.
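The independent sampling procedure is only named in this summary; as a generic, hedged sketch (not K-Vib itself), the snippet below shows the standard way independent Bernoulli client sampling is kept unbiased: each client joins with its own probability q_i and its update is weighted by 1/q_i, so the aggregate matches full participation in expectation.

```python
import numpy as np

def independent_sampling_round(client_updates, q, rng=None):
    """One round of independent (Bernoulli) client sampling.

    client_updates: list of N update vectors (np.ndarray), one per client.
    q: length-N array of inclusion probabilities, q[i] in (0, 1].
    Weighting each sampled update by 1/q[i] makes the aggregate an unbiased
    estimate of the full-participation sum of updates.
    """
    rng = rng or np.random.default_rng()
    participate = rng.random(len(client_updates)) < np.asarray(q)
    agg = np.zeros_like(client_updates[0])
    for i in np.flatnonzero(participate):
        agg += client_updates[i] / q[i]  # inverse-probability weighting
    return agg, participate
```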
arXiv Detail & Related papers (2023-10-04T10:08:01Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
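FedLALR itself is not spelled out in this summary; the snippet below is only a generic AMSGrad-style local step (the optimizer FedLALR is described as a heterogeneous local variant of), included to illustrate how a client-specific adaptive learning rate arises from per-client optimizer state.

```python
import numpy as np

def amsgrad_local_step(w, grad, state, lr=1e-2, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AMSGrad-style local step.

    Each client keeps its own `state` (initialise as {"m": 0.0, "v": 0.0,
    "v_hat": 0.0}), so the effective per-coordinate step size adapts to that
    client's local data distribution.
    """
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2
    state["v_hat"] = np.maximum(state["v_hat"], state["v"])  # monotone 2nd moment
    return w - lr * state["m"] / (np.sqrt(state["v_hat"]) + eps)
```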
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Sampling Through the Lens of Sequential Decision Making [9.101505546901999]
We propose a reward-guided sampling strategy called Adaptive Sample with Reward (ASR).
Our approach adaptively adjusts the sampling process to achieve optimal performance.
Empirical results in information retrieval and clustering demonstrate ASR's superb performance across different datasets.
arXiv Detail & Related papers (2022-08-17T04:01:29Z)
- Tackling System and Statistical Heterogeneity for Federated Learning with Adaptive Client Sampling [34.187387951367526]
Federated learning (FL) algorithms usually sample a fraction of clients in each round (partial participation) when the number of participants is large.
Recent works have focused on the convergence analysis of FL.
We obtain a new convergence bound for FL algorithms with arbitrary client sampling probabilities.
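The paper's bound is not reproduced in this summary; as background intuition only (a standard importance-sampling fact, not the paper's theorem), minimizing the per-round sampling-variance term over the probability simplex yields sampling probabilities proportional to the clients' update norms:

```latex
\min_{p \,\in\, \text{simplex}} \; \sum_{i=1}^{N} \frac{\|\Delta_i\|^{2}}{p_i}
\quad\Longrightarrow\quad
p_i^{\star} \;=\; \frac{\|\Delta_i\|}{\sum_{j=1}^{N} \|\Delta_j\|},
```

i.e., clients whose updates are larger in norm should be sampled more often; this follows from a Lagrangian (or Cauchy-Schwarz) argument.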
arXiv Detail & Related papers (2021-12-21T14:28:40Z)
- Clustered Sampling: Low-Variance and Improved Representativity for Clients Selection in Federated Learning [4.530678016396477]
This work addresses the problem of optimizing communications between server and clients in federated learning (FL).
Current sampling approaches in FL are either biased, or suboptimal in terms of server-client communications and training stability.
We prove that clustered sampling leads to better client representativity and to reduced variance of the clients' aggregation weights in FL.
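The clustering schemes themselves are not detailed in this summary; the sketch below is a hedged stand-in for the general mechanism: partition clients into clusters of roughly equal aggregation weight, then draw one client per cluster, which covers the population more evenly than i.i.d. sampling and thereby reduces the variance of the aggregation weights.

```python
import numpy as np

def clustered_sample(weights, num_clusters, rng=None):
    """Draw one client per cluster; clusters are built by greedily balancing
    the clients' aggregation weights (a stand-in, not the paper's exact scheme)."""
    rng = rng or np.random.default_rng()
    order = np.argsort(weights)[::-1]              # heaviest clients first
    clusters = [[] for _ in range(num_clusters)]
    loads = np.zeros(num_clusters)
    for cid in order:                              # greedy balanced partition
        c = int(np.argmin(loads))
        clusters[c].append(int(cid))
        loads[c] += weights[cid]
    selected = []
    for members in clusters:
        w = np.asarray([weights[i] for i in members], dtype=float)
        selected.append(int(rng.choice(members, p=w / w.sum())))  # per-cluster draw
    return selected
```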
arXiv Detail & Related papers (2021-05-12T18:19:20Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
- Optimal Importance Sampling for Federated Learning [57.14673504239551]
Federated learning involves a mixture of centralized and decentralized processing tasks.
The sampling of both agents and data is generally uniform; however, in this work we consider non-uniform sampling.
We derive optimal importance sampling strategies for both agent and data selection and show that non-uniform sampling without replacement improves the performance of the original FedAvg algorithm.
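The derived optimal strategies are not reproduced here; as a generic illustration of the data-selection side only (a textbook importance-sampled SGD step, not the paper's optimum), weighting a non-uniformly drawn gradient by the inverse sampling probability keeps the step unbiased for the uniform-average loss.

```python
import numpy as np

def importance_sgd_step(w, grad_fn, probs, lr=0.1, rng=None):
    """One SGD step with non-uniform data sampling.

    probs[j] is the probability of picking example j; weighting the gradient
    by 1 / (n * probs[j]) keeps the step unbiased for the average loss.
    grad_fn(w, j) returns the gradient of example j's loss at w (hypothetical).
    """
    rng = rng or np.random.default_rng()
    n = len(probs)
    j = int(rng.choice(n, p=probs))
    g = grad_fn(w, j) / (n * probs[j])  # inverse-probability correction
    return w - lr * g
```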
arXiv Detail & Related papers (2020-10-26T14:15:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.