Fast Heterogeneous Federated Learning with Hybrid Client Selection
- URL: http://arxiv.org/abs/2208.05135v1
- Date: Wed, 10 Aug 2022 04:10:24 GMT
- Title: Fast Heterogeneous Federated Learning with Hybrid Client Selection
- Authors: Guangyuan Shen, Dehong Gao, DuanXiao Song, libin yang, Xukai Zhou,
Shirui Pan, Wei Lou, Fang Zhou
- Abstract summary: We present a novel clustering-based client selection scheme to accelerate the convergence by variance reduction.
We also present a tighter convergence guarantee for the proposed scheme thanks to the variance reduction.
- Score: 29.902439962022868
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Client selection schemes are widely adopted to address the communication-efficiency problem in recent studies of Federated Learning (FL).
However, the large variance of the model updates aggregated from the
randomly-selected unrepresentative subsets directly slows the FL convergence.
We present a novel clustering-based client selection scheme to accelerate the
FL convergence by variance reduction. Simple yet effective schemes are designed to improve the clustering effect and control its fluctuation, thereby generating a client subset with a certain representativeness of sampling.
Theoretically, we demonstrate the improvement of the proposed scheme in
variance reduction. We also present a tighter convergence guarantee for the proposed method thanks to the variance reduction. Experimental results confirm the superior efficiency of our scheme compared to alternatives.
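The abstract gives no pseudocode; purely as an illustration of the general cluster-then-sample idea (not the authors' exact scheme), the sketch below clusters flattened client updates with k-means and draws one representative per cluster. The helper names and the choice of k-means are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_clients_by_clustering(client_updates, num_selected, seed=0):
    """Cluster flattened client updates and pick one representative client per cluster.

    client_updates: dict mapping client id -> 1-D numpy array (flattened model update)
    num_selected:   number of clients (= number of clusters) to select
    """
    rng = np.random.default_rng(seed)
    ids = list(client_updates.keys())
    X = np.stack([client_updates[i] for i in ids])

    labels = KMeans(n_clusters=num_selected, n_init=10, random_state=seed).fit_predict(X)

    selected = []
    for c in range(num_selected):
        members = [ids[j] for j in range(len(ids)) if labels[j] == c]
        selected.append(int(rng.choice(members)))   # one representative per cluster
    return selected

# toy usage: 20 clients with 8-dimensional updates
updates = {i: np.random.randn(8) for i in range(20)}
print(select_clients_by_clustering(updates, num_selected=5))
```

Sampling one client from each cluster of similar updates is where the variance reduction comes from: the selected subset covers the update distribution better than a uniformly random subset of the same size.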
Related papers
- FADAS: Towards Federated Adaptive Asynchronous Optimization [56.09666452175333]
Federated learning (FL) has emerged as a widely adopted training paradigm for privacy-preserving machine learning.
This paper introduces federated adaptive asynchronous optimization, named FADAS, a novel method that incorporates asynchronous updates into adaptive federated optimization with provable guarantees.
We rigorously establish the convergence rate of the proposed algorithms and empirical results demonstrate the superior performance of FADAS over other asynchronous FL baselines.
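The FADAS update rules are defined in the paper itself; only as a rough, assumed illustration of the two generic ingredients the summary names (asynchronous, staleness-aware application of client deltas and an Adam-style adaptive server step), a toy server could look like this.

```python
import numpy as np

class AsyncAdaptiveServer:
    """Toy asynchronous server with an Adam-style adaptive update (illustrative only)."""

    def __init__(self, dim, lr=0.01, beta1=0.9, beta2=0.99, eps=1e-8, max_staleness=10):
        self.w = np.zeros(dim)          # global model
        self.m = np.zeros(dim)          # first moment
        self.v = np.zeros(dim)          # second moment
        self.t = 0                      # server step counter
        self.lr, self.b1, self.b2, self.eps = lr, beta1, beta2, eps
        self.max_staleness = max_staleness

    def receive(self, delta, client_round):
        """Apply a client delta as soon as it arrives, discounted by its staleness."""
        staleness = self.t - client_round
        if staleness > self.max_staleness:
            return self.w                      # too stale: drop the update
        delta = delta / (1.0 + staleness)      # simple staleness discount (assumption)
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * delta
        self.v = self.b2 * self.v + (1 - self.b2) * delta ** 2
        self.w += self.lr * self.m / (np.sqrt(self.v) + self.eps)
        return self.w

# toy usage: five deltas arriving one by one
server = AsyncAdaptiveServer(dim=4)
for r in range(5):
    server.receive(np.random.randn(4), client_round=r)
print(server.w)
```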
arXiv Detail & Related papers (2024-07-25T20:02:57Z) - Depersonalized Federated Learning: Tackling Statistical Heterogeneity by
Alternating Stochastic Gradient Descent [6.394263208820851]
Federated learning (FL) enables devices to train a common machine learning (ML) model for intelligent inference without data sharing.
Raw data held by the various cooperating participants are always non-identically distributed.
We propose a new depersonalized FL scheme, built on alternating stochastic gradient descent, to tackle this statistical heterogeneity and speed up training.
arXiv Detail & Related papers (2022-10-07T10:30:39Z) - DELTA: Diverse Client Sampling for Fasting Federated Learning [9.45219058010201]
Partial client participation has been widely adopted in Federated Learning (FL) to reduce the communication burden efficiently.
Existing sampling methods are either biased or can be further optimized for faster convergence.
We present DELTA, an unbiased sampling scheme designed to alleviate these issues.
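DELTA derives its sampling probabilities from client diversity; the sketch below shows only the generic recipe that keeps partial participation unbiased, namely sampling with arbitrary positive probabilities p_i and reweighting each sampled update by 1/(n K p_i). Using update norms for p_i is an assumption, not DELTA's rule.

```python
import numpy as np

def unbiased_partial_aggregate(updates, probs, num_sampled, seed=0):
    """Sample `num_sampled` clients with probabilities `probs` (with replacement)
    and return an importance-weighted sum that is unbiased for the full-participation mean.
    """
    rng = np.random.default_rng(seed)
    n = len(updates)
    idx = rng.choice(n, size=num_sampled, replace=True, p=probs)
    # E[agg] = (1/n) * sum_i updates[i], for any positive sampling probabilities.
    agg = sum(updates[i] / (n * num_sampled * probs[i]) for i in idx)
    return agg

updates = [np.random.randn(4) for _ in range(10)]
norms = np.array([np.linalg.norm(u) for u in updates])
probs = norms / norms.sum()      # assumed: sample larger ("more diverse") updates more often
print(unbiased_partial_aggregate(updates, probs, num_sampled=3))
```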
arXiv Detail & Related papers (2022-05-27T12:08:23Z) - Error-based Knockoffs Inference for Controlled Feature Selection [49.99321384855201]
We propose an error-based knockoff inference method by integrating the knockoff features, the error-based feature importance statistics, and the stepdown procedure together.
The proposed inference procedure does not require specifying a regression model and can handle feature selection with theoretical guarantees.
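For context on the selection step the summary leaves implicit, the sketch below is the standard knockoff+ filter applied to signed importance statistics W_j at target FDR level q; the paper's contribution is the error-based construction of those statistics, which is not reproduced here.

```python
import numpy as np

def knockoff_plus_select(W, q=0.1):
    """Standard knockoff+ selection: find the smallest threshold t such that
    (1 + #{W_j <= -t}) / max(1, #{W_j >= t}) <= q, then keep features with W_j >= t.
    """
    W = np.asarray(W, dtype=float)
    candidates = np.sort(np.unique(np.abs(W[W != 0])))
    for t in candidates:
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return np.flatnonzero(W >= t)        # indices of selected features
    return np.array([], dtype=int)               # no threshold meets the target FDR

# toy usage: positive W_j suggests a real feature, negative suggests its knockoff won
W = np.array([3.1, 2.4, 1.9, 2.2, 1.8, -0.4, 0.3, 2.9])
print(knockoff_plus_select(W, q=0.2))            # -> [0 1 2 3 4 7]
```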
arXiv Detail & Related papers (2022-03-09T01:55:59Z) - Variance-Reduced Heterogeneous Federated Learning via Stratified Client
Selection [31.401919362978017]
We propose a novel stratified client selection scheme to reduce the variance for the pursuit of better convergence and higher accuracy.
We present an optimized sample size allocation scheme by considering the diversity of each stratum's variability.
Experimental results confirm that our approach not only allows for better performance relative to state-of-the-art methods but also is compatible with prevalent FL algorithms.
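A minimal sketch of the two ingredients named in the summary, stratification plus variability-aware sample-size allocation; the Neyman-style allocation rule and all names here are assumptions rather than the authors' exact scheme.

```python
import numpy as np

def allocate_and_sample(strata, total_budget, seed=0):
    """Stratified client selection with Neyman-style allocation.

    strata: dict name -> dict(client_ids=list, std=float), where std is the
            stratum's variability. Larger and more variable strata get more of
            the sampling budget; sampling inside a stratum is uniform without
            replacement.
    """
    rng = np.random.default_rng(seed)
    weights = {s: len(v["client_ids"]) * v["std"] for s, v in strata.items()}
    total_w = sum(weights.values())

    selected = []
    for s, v in strata.items():
        n_s = max(1, round(total_budget * weights[s] / total_w))  # rounding may shift the budget slightly
        n_s = min(n_s, len(v["client_ids"]))
        selected += list(rng.choice(v["client_ids"], size=n_s, replace=False))
    return selected

strata = {
    "low_var":  {"client_ids": list(range(0, 40)),  "std": 0.5},
    "high_var": {"client_ids": list(range(40, 60)), "std": 2.0},
}
print(allocate_and_sample(strata, total_budget=10))
```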
arXiv Detail & Related papers (2022-01-15T05:41:36Z) - Adaptive Client Sampling in Federated Learning via Online Learning with
Bandit Feedback [36.05851452151107]
Federated learning (FL) systems need to sample a subset of clients that are involved in each round of training.
Despite its importance, there is limited work on how to sample clients effectively.
We show how our sampling method can improve the convergence speed of optimization algorithms.
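The paper casts client sampling as online learning with bandit feedback; as an assumed, generic illustration (not the authors' algorithm), an EXP3-style sampler that treats the observed per-round loss reduction as reward looks like this.

```python
import numpy as np

class Exp3ClientSampler:
    """EXP3-style adaptive sampler: clients whose participation led to larger
    observed rewards (e.g. loss reduction) are sampled more often over time."""

    def __init__(self, num_clients, gamma=0.1, seed=0):
        self.weights = np.ones(num_clients)
        self.gamma = gamma
        self.rng = np.random.default_rng(seed)

    def probabilities(self):
        w = self.weights / self.weights.sum()
        return (1 - self.gamma) * w + self.gamma / len(self.weights)

    def sample(self):
        p = self.probabilities()
        arm = self.rng.choice(len(p), p=p)
        return arm, p[arm]

    def update(self, arm, prob, reward):
        # Importance-weighted reward keeps the estimate unbiased under partial feedback.
        self.weights[arm] *= np.exp(self.gamma * (reward / prob) / len(self.weights))

# toy usage: client 3 consistently yields higher reward and gets sampled more often
sampler = Exp3ClientSampler(num_clients=5)
for _ in range(200):
    arm, prob = sampler.sample()
    reward = 1.0 if arm == 3 else 0.2          # stand-in for observed loss reduction
    sampler.update(arm, prob, reward)
print(np.round(sampler.probabilities(), 3))
```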
arXiv Detail & Related papers (2021-12-28T23:50:52Z) - Variational Refinement for Importance Sampling Using the Forward
Kullback-Leibler Divergence [77.06203118175335]
Variational Inference (VI) is a popular alternative to exact sampling in Bayesian inference.
Importance sampling (IS) is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures.
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference.
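For context on how importance sampling de-biases a variational approximation, the sketch below runs self-normalized importance sampling with the variational distribution q as proposal; the paper's forward-KL refinement of q is not implemented here, and the toy target and proposal are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy setup: target p(x) = N(2, 1); variational proposal q(x) = N(1.5, 1.2^2).
log_p = lambda x: stats.norm(2.0, 1.0).logpdf(x)
q = stats.norm(1.5, 1.2)

# Self-normalized importance sampling estimate of E_p[f(x)] using samples from q.
x = q.rvs(size=5000, random_state=rng)
log_w = log_p(x) - q.logpdf(x)
w = np.exp(log_w - log_w.max())           # stabilize before normalizing
w /= w.sum()

f = lambda x: x                            # estimate the posterior mean
print("plain VI estimate:", q.mean())              # biased: 1.5
print("IS-corrected estimate:", np.sum(w * f(x)))  # close to the true mean 2.0
```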
arXiv Detail & Related papers (2021-06-30T11:00:24Z) - Clustered Sampling: Low-Variance and Improved Representativity for
Clients Selection in Federated Learning [4.530678016396477]
This work addresses the problem of optimizing communications between the server and clients in federated learning (FL).
Current sampling approaches in FL are either biased or non-optimal in terms of server-client communication and training stability.
We prove that clustered sampling leads to better client representativity and to reduced variance of the clients' aggregation weights in FL.
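A small sketch of the unbiasedness bookkeeping behind clustered client sampling: one client is drawn per cluster with probability proportional to its data share, so every client's expected aggregation weight equals its global data ratio. The clustering criterion, which is the paper's actual contribution, is not shown, and all names are assumptions.

```python
import numpy as np

def sample_one_per_cluster(clusters, data_sizes, seed=0):
    """clusters: dict cluster_id -> list of client ids; data_sizes: dict client id -> int.

    Returns (client id, aggregation weight) pairs. Each client's expected weight
    equals its global data share, so aggregation stays unbiased under cluster sampling.
    """
    rng = np.random.default_rng(seed)
    total = sum(data_sizes.values())
    picked = []
    for members in clusters.values():
        sizes = np.array([data_sizes[c] for c in members], dtype=float)
        probs = sizes / sizes.sum()
        chosen = members[rng.choice(len(members), p=probs)]
        cluster_weight = sizes.sum() / total          # the whole cluster's data share
        picked.append((chosen, cluster_weight))
    return picked

clusters = {0: [0, 1, 2], 1: [3, 4], 2: [5, 6, 7, 8]}
data_sizes = {i: (i + 1) * 10 for i in range(9)}
print(sample_one_per_cluster(clusters, data_sizes))
```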
arXiv Detail & Related papers (2021-05-12T18:19:20Z) - Sampling-free Variational Inference for Neural Networks with
Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z) - Detached Error Feedback for Distributed SGD with Random Sparsification [98.98236187442258]
Communication bottleneck has been a critical problem in large-scale deep learning.
We propose a new detached error feedback (DEF) algorithm, which shows better convergence than error feedback for non-convex problems.
We also propose DEFA to accelerate the generalization of DEF, which shows better generalization bounds than DEF.
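A minimal sketch of the two building blocks named in the title, random-k sparsification plus an error-feedback memory that re-injects what compression dropped; the detached variant (DEF) changes how that memory interacts with the iterate, which is not reproduced here.

```python
import numpy as np

def random_k(x, k, rng):
    """Keep k random coordinates of x (rescaled for unbiasedness), zero the rest."""
    mask = np.zeros_like(x)
    idx = rng.choice(x.size, size=k, replace=False)
    mask[idx] = x.size / k                  # rescaling makes E[compress(x)] = x
    return x * mask

class ErrorFeedbackWorker:
    """Classic error feedback: compress (gradient + residual), remember what was lost."""

    def __init__(self, dim, k, seed=0):
        self.residual = np.zeros(dim)
        self.k = k
        self.rng = np.random.default_rng(seed)

    def compress(self, grad):
        corrected = grad + self.residual
        msg = random_k(corrected, self.k, self.rng)
        self.residual = corrected - msg     # carry the compression error to the next round
        return msg                          # this sparse message is what gets communicated

worker = ErrorFeedbackWorker(dim=10, k=2)
print(worker.compress(np.random.randn(10)))
```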
arXiv Detail & Related papers (2020-04-11T03:50:59Z) - Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees [49.91477656517431]
Quantization-based solvers have been widely adopted in Federated Learning (FL).
However, no existing method enjoys all of the desired properties at once.
We propose an intuitively simple yet theoretically sound method based on SIGNSGD to bridge the gap.
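The summary does not define the quantizer; the sketch below is a generic stochastic sign quantizer of the kind SIGNSGD-style methods use (each coordinate becomes +1 or -1 with a magnitude-dependent probability so that B * E[q(x)] = x); the exact scheme and scale B in the paper may differ.

```python
import numpy as np

def stochastic_sign(x, B, rng):
    """Quantize each coordinate to +1/-1 so that B * E[q(x)] = x (requires B >= max|x_i|).

    q_i = +1 with probability 1/2 + x_i / (2B), and -1 otherwise.
    """
    p_plus = 0.5 + x / (2.0 * B)
    return np.where(rng.random(x.shape) < p_plus, 1.0, -1.0)

rng = np.random.default_rng(0)
g = np.array([0.3, -0.8, 0.05, 0.6])
B = np.max(np.abs(g))

# Averaging many quantizations: B * mean(q) approaches the original vector.
q_mean = np.mean([stochastic_sign(g, B, rng) for _ in range(20000)], axis=0)
print(np.round(B * q_mean, 2))   # ~ [ 0.3 -0.8  0.05  0.6 ] up to Monte Carlo noise
```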
arXiv Detail & Related papers (2020-02-25T15:12:15Z)