FedPrune: Towards Inclusive Federated Learning
- URL: http://arxiv.org/abs/2110.14205v1
- Date: Wed, 27 Oct 2021 06:33:38 GMT
- Title: FedPrune: Towards Inclusive Federated Learning
- Authors: Muhammad Tahir Munir, Muhammad Mustansar Saeed, Mahad Ali, Zafar Ayyub
Qazi, Ihsan Ayyub Qazi
- Abstract summary: Federated learning (FL) is a distributed learning technique that trains a shared model over distributed data in a privacy-preserving manner.
We propose FedPrune, a system that tackles this challenge by pruning the global model for slow clients based on their device characteristics.
Using insights from the Central Limit Theorem, FedPrune incorporates a new aggregation technique that achieves robust performance over non-IID data.
- Score: 1.308951527147782
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a distributed learning technique that trains a
shared model over distributed data in a privacy-preserving manner.
Unfortunately, FL's performance degrades when there is (i) variability in
client characteristics in terms of computational and memory resources (system
heterogeneity) and (ii) non-IID data distribution across clients (statistical
heterogeneity). For example, slow clients get dropped in FL schemes, such as
Federated Averaging (FedAvg), which not only limits overall learning but also
biases results towards fast clients. We propose FedPrune, a system that
tackles this challenge by pruning the global model for slow clients based on
their device characteristics. By doing so, slow clients can train a small
model quickly and participate in FL, which increases test accuracy as well as
fairness. Using insights from the Central Limit Theorem, FedPrune incorporates
a new aggregation technique that achieves robust performance over non-IID
data. Experimental evaluation shows that FedPrune provides robust convergence
and better fairness compared to Federated Averaging.
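The abstract specifies neither the exact pruning criterion nor the CLT-inspired aggregation rule, so the following is only a minimal NumPy sketch of the overall idea: each slow client receives a magnitude-pruned subnetwork sized to its capability, and the server averages each weight only over the clients that actually trained it. All function names and the capability fractions are illustrative assumptions, not the paper's API.

```python
# Hypothetical sketch of capability-aware pruning + masked averaging.
# Names (prune_for_client, masked_fedavg) are illustrative, not FedPrune's API.
import numpy as np

def prune_for_client(global_w: np.ndarray, keep_frac: float) -> np.ndarray:
    """Return a binary mask keeping the top keep_frac weights by magnitude."""
    k = max(1, int(keep_frac * global_w.size))
    thresh = np.partition(np.abs(global_w).ravel(), -k)[-k]
    return (np.abs(global_w) >= thresh).astype(global_w.dtype)

def masked_fedavg(global_w, client_updates, masks, sizes):
    """Average each coordinate only over the clients that trained it."""
    num = np.zeros_like(global_w)
    den = np.zeros_like(global_w)
    for upd, m, n in zip(client_updates, masks, sizes):
        num += n * m * upd
        den += n * m
    out = global_w.copy()
    trained = den > 0
    out[trained] = num[trained] / den[trained]  # untouched weights keep old value
    return out

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))                     # toy global model
caps = [1.0, 0.5, 0.25]                         # assumed per-client capability
masks = [prune_for_client(w, c) for c in caps]
updates = [m * (w + 0.1 * rng.normal(size=w.shape)) for m in masks]  # fake local training
w = masked_fedavg(w, updates, masks, sizes=[100, 80, 60])
print(w.shape)
```

With this masked rule, a slow client with keep_frac=0.25 still contributes to the quarter of the weights it trained instead of being dropped entirely, which is the inclusiveness the abstract argues for.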
Related papers
- FedHPL: Efficient Heterogeneous Federated Learning with Prompt Tuning and Logit Distillation [32.305134875959226]
Federated learning (FL) is a privacy-preserving paradigm that enables distributed clients to collaboratively train models with a central server.
We propose FedHPL, a parameter-efficient unified $\textbf{Fed}$erated learning framework for $\textbf{H}$eterogeneous settings.
We show that our framework outperforms state-of-the-art FL approaches with less overhead and fewer training rounds.
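Of the two ingredients named in the title, logit distillation is the more self-contained; a generic temperature-softened distillation loss (not FedHPL's exact formulation, whose weighting and aggregation are not given in this summary) might look like:

```python
# Generic logit-distillation loss: KL between temperature-softened teacher
# and student logits. FedHPL's exact variant is an assumption here.
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened logits, scaled by T^2 as is standard."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1).mean()
    return float(kl * T * T)

student = np.array([[2.0, 0.5, -1.0]])
teacher = np.array([[1.5, 0.7, -0.8]])
print(distill_loss(student, teacher))
```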
arXiv Detail & Related papers (2024-05-27T15:25:32Z)
- FedImpro: Measuring and Improving Client Update in Federated Learning [77.68805026788836]
Federated Learning (FL) models often experience client drift caused by heterogeneous data.
We present an alternative perspective on client drift and aim to mitigate it by generating improved local models.
arXiv Detail & Related papers (2024-02-10T18:14:57Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training updates.
In this paper, we present a novel FL algorithm, FedIns, that handles intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement over the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- FedSkip: Combatting Statistical Heterogeneity with Federated Skip Aggregation [95.85026305874824]
We introduce a data-driven approach called FedSkip that improves client optima by periodically skipping federated averaging and scattering local models across devices.
We conduct extensive experiments on a range of datasets to demonstrate that FedSkip achieves much higher accuracy, better aggregation efficiency, and competitive communication efficiency.
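The summary only says that FedSkip periodically skips server averaging and scatters local models across devices; a minimal sketch of that control flow, with an assumed skip period and a random permutation as the scatter rule, might look like this:

```python
# Illustrative sketch of "skip averaging, scatter local models" from the
# FedSkip summary above; the skip period and scatter rule are assumptions.
import random

def server_round(t, client_models, skip_period=4, seed=None):
    """Every skip_period rounds, average; otherwise shuffle models among clients."""
    if t % skip_period == 0:
        avg = [sum(ws) / len(ws) for ws in zip(*client_models)]
        return [list(avg) for _ in client_models]   # broadcast the average
    rng = random.Random(seed)
    scattered = client_models[:]
    rng.shuffle(scattered)                          # scatter: permute local models
    return scattered

models = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]       # toy 2-parameter models
for t in range(1, 9):
    models = server_round(t, models, seed=t)
print(models)
```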
arXiv Detail & Related papers (2022-12-14T13:57:01Z)
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and bias of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate the superiority of FedVRA over existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Combating Client Dropout in Federated Learning via Friend Model Substitution [8.325089307976654]
Federated learning (FL) is a new distributed machine learning framework known for its benefits on data privacy and communication efficiency.
This paper studies the much less well understood scenario of passive partial client participation.
We develop a new algorithm, FL-FDMS, that discovers "friends" of a dropped client, i.e., clients whose data distributions are similar, and substitutes their models for the missing update.
Experiments on MNIST and CIFAR-10 confirm the superior performance of FL-FDMS in handling client dropout in FL.
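The summary identifies friends by similar data distributions but does not say how similarity is measured; the sketch below assumes cosine similarity of past update directions as a common proxy, and substitutes the closest friend's update for a dropped client. This is not necessarily FL-FDMS's actual metric.

```python
# Hypothetical friend-discovery sketch: when a client drops out, reuse the
# update of its most similar peer. The cosine-similarity metric is assumed.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def substitute_dropouts(updates: dict, dropped: set, history: dict) -> dict:
    """For each dropped client, reuse the update of its closest 'friend'."""
    filled = dict(updates)
    for c in dropped:
        friends = [(cosine(history[c], history[o]), o)
                   for o in updates if o not in dropped]
        _, best = max(friends)
        filled[c] = updates[best]          # friend model substitution
    return filled

rng = np.random.default_rng(1)
history = {i: rng.normal(size=10) for i in range(4)}   # past update directions
updates = {i: rng.normal(size=10) for i in range(4) if i != 2}  # client 2 dropped
full = substitute_dropouts(updates, dropped={2}, history=history)
print(sorted(full))
```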
arXiv Detail & Related papers (2022-05-26T08:34:28Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
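The summary does not give FedReg's actual regularizer, so the sketch below shows a different but standard way to curb forgetting in local training: a FedProx-style proximal term that penalizes drift from the global model.

```python
# Generic sketch of curbing forgetting in local training by penalizing drift
# from the global model (a FedProx-style proximal term, named as a swap-in;
# FedReg's actual mechanism is not specified in the summary above).
import numpy as np

def local_step(w, grad_fn, w_global, lr=0.1, mu=0.5):
    """One SGD step on local_loss(w) + (mu/2) * ||w - w_global||^2."""
    g = grad_fn(w) + mu * (w - w_global)
    return w - lr * g

w_global = np.zeros(3)
target = np.array([1.0, -2.0, 0.5])            # toy local optimum
grad_fn = lambda w: w - target                 # gradient of 0.5*||w - target||^2
w = w_global.copy()
for _ in range(100):
    w = local_step(w, grad_fn, w_global)
print(w)  # pulled toward target but anchored near w_global by mu
```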
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Dubhe: Towards Data Unbiasedness with Homomorphic Encryption in Federated Learning Client Selection [16.975086164684882]
Federated learning (FL) is a distributed machine learning paradigm that allows clients to collaboratively train a model over their own local data.
We mathematically demonstrate the cause of performance degradation in FL and examine the performance of FL over various datasets.
We propose a pluggable system-level client selection method named Dubhe, which allows clients to proactively participate in training while preserving their privacy with the assistance of homomorphic encryption (HE).
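The summary says only that HE protects client privacy during selection; as a speculative illustration, the sketch below uses the Paillier scheme from the python-paillier package so that encrypted per-class sample counts can be summed without exposing any individual client's data. What Dubhe actually encrypts and who holds the key are not specified here and are assumptions.

```python
# Speculative HE sketch (Paillier, python-paillier): clients encrypt per-class
# counts; the server sums ciphertexts; only the key holder sees the aggregate.
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=1024)

client_counts = [[50, 10], [5, 45], [20, 20]]        # toy: 3 clients, 2 classes
enc = [[pub.encrypt(c) for c in counts] for counts in client_counts]

# Homomorphic addition over each class column, no raw counts revealed.
enc_totals = [sum(col[1:], col[0]) for col in zip(*enc)]

print([priv.decrypt(t) for t in enc_totals])          # -> [75, 75]
```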
arXiv Detail & Related papers (2021-09-08T13:00:46Z)