FedGroup: Efficient Clustered Federated Learning via Decomposed Data-Driven Measure
- URL: http://arxiv.org/abs/2010.06870v6
- Date: Tue, 27 Jul 2021 06:53:49 GMT
- Title: FedGroup: Efficient Clustered Federated Learning via Decomposed Data-Driven Measure
- Authors: Moming Duan, Duo Liu, Xinyuan Ji, Renping Liu, Liang Liang, Xianzhang Chen, Yujuan Tan
- Abstract summary: We propose a novel clustered federated learning (CFL) framework FedGroup.
We show that FedGroup can significantly improve absolute test accuracy by +14.1% on FEMNIST compared to FedAvg.
We also evaluate FedGroup and FedGrouProx (combined with FedProx) on several open datasets.
- Score: 18.083188787905083
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) enables multiple participating devices to
collaboratively contribute to a global neural network model while keeping the
training data local. Unlike the centralized training setting, the training
data in FL is non-IID and imbalanced (statistical heterogeneity) and
distributed across the federated network, which increases the divergence
between the local models and the global model and further degrades
performance. In this paper, we propose a novel clustered federated learning
(CFL) framework, FedGroup, in which we 1) group the training of clients based
on the similarities between the clients' optimization directions for high
training performance; 2) construct a new data-driven distance measure to
improve the efficiency of the client clustering procedure; and 3) implement a
newcomer device cold-start mechanism based on an auxiliary global model for
framework scalability and practicality. FedGroup achieves its improvements by
dividing the joint optimization into groups of sub-optimizations, and it can
be combined with the FL optimizer FedProx. The convergence and complexity are
analyzed to demonstrate the efficiency of our proposed framework. We also
evaluate FedGroup and FedGrouProx (combined with FedProx) on several open
datasets and compare them with related CFL frameworks. The results show that
FedGroup can significantly improve absolute test accuracy by +14.1% on
FEMNIST compared to FedAvg, by +3.4% on Sentiment140 compared to FedProx, and
by +6.9% on MNIST compared to FeSEM.
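To make steps 1) and 2) concrete, here is a minimal sketch of the grouping procedure, assuming each client reports a flattened local model update: the stacked updates are decomposed with a truncated SVD, each client is embedded by its cosine similarity to the resulting directions, and clients are grouped by Euclidean distance in that low-dimensional space. The function and parameter names (group_clients, n_directions) are illustrative; the paper's exact decomposition size and initialization may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_clients(updates, n_groups, n_directions=3, seed=0):
    """Group clients by the similarity of their optimization directions.

    updates: (n_clients, n_params) array of flattened local model updates.
    """
    # Decompose the stacked updates into a few principal optimization
    # directions; rows of vt are orthonormal, so the directions need no
    # extra normalization.
    _, _, vt = np.linalg.svd(updates, full_matrices=False)
    directions = vt[:n_directions]
    # Embed each client update by its cosine similarity to each direction.
    norms = np.linalg.norm(updates, axis=1, keepdims=True)
    embedding = (updates @ directions.T) / norms
    # Euclidean distance between these low-dimensional embeddings acts as
    # the decomposed data-driven measure; K-means yields the client groups.
    return KMeans(n_clusters=n_groups, n_init=10,
                  random_state=seed).fit_predict(embedding)
```

Each group then trains its own model FedAvg-style, which is the sense in which the joint optimization is divided into sub-optimizations; a newcomer can be assigned to a group by embedding its first update computed against the auxiliary global model.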
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
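For contrast with FedAF's aggregation-free design, the server step of the traditional aggregate-then-adapt loop is a weighted model average; a generic FedAvg-style sketch follows (the names are illustrative, not FedAF's API):

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    # Weighted average of per-layer parameter arrays, with each client's
    # contribution proportional to its local dataset size (plain FedAvg).
    total = float(sum(client_sizes))
    return [
        sum((n / total) * params[layer]
            for params, n in zip(client_params, client_sizes))
        for layer in range(len(client_params[0]))
    ]
```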
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Leveraging Function Space Aggregation for Federated Learning at Scale [20.866482460590973]
We propose a new algorithm, FedFish, that aggregates local approximations to the functions learned by clients.
We evaluate FedFish on realistic, large-scale cross-device benchmarks.
arXiv Detail & Related papers (2023-11-17T02:37:10Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating the results of local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- FedCME: Client Matching and Classifier Exchanging to Handle Data Heterogeneity in Federated Learning [5.21877373352943]
Data heterogeneity across clients is one of the key challenges in Federated Learning (FL).
We propose FedCME, a novel FL framework built on client matching and classifier exchanging.
Experimental results demonstrate that FedCME performs better than FedAvg, FedProx, MOON and FedRS on popular federated learning benchmarks.
arXiv Detail & Related papers (2023-07-17T15:40:45Z)
- FedSkip: Combatting Statistical Heterogeneity with Federated Skip Aggregation [95.85026305874824]
We introduce a data-driven approach called FedSkip to improve the client optima by periodically skipping federated averaging and scattering local models across the devices.
We conduct extensive experiments on a range of datasets to demonstrate that FedSkip achieves much higher accuracy, better aggregation efficiency, and competitive communication efficiency.
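One plausible reading of that schedule, sketched below: ordinary averaging runs only every few rounds, and in between the server scatters the unaveraged local models among clients. The period and the random scatter rule are assumptions for illustration, not FedSkip's exact configuration.

```python
import random

def fedskip_round(client_models, round_idx, aggregate, skip_period=4):
    # On aggregation rounds, average as usual and broadcast the result;
    # on skipped rounds, scatter local models across devices unaveraged.
    if round_idx % skip_period == 0:
        global_model = aggregate(client_models)
        return [global_model] * len(client_models)
    scattered = list(client_models)
    random.shuffle(scattered)  # each client receives a peer's local model
    return scattered
```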
arXiv Detail & Related papers (2022-12-14T13:57:01Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
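The local matching step can be pictured as learning a small synthetic set whose gradients on the current model track those of the real data. The sketch below uses plain gradient matching in that spirit; FedDM's actual objective is iterative distribution matching over local surrogate functions, and all names here are illustrative.

```python
import torch

def match_synthetic(model, loss_fn, real_x, real_y, syn_x, syn_y,
                    steps=100, lr=0.1):
    # Optimize the synthetic inputs so the gradient they induce on the
    # current model approximates the gradient from the client's real data.
    syn_x = syn_x.clone().requires_grad_(True)
    opt = torch.optim.SGD([syn_x], lr=lr)
    params = list(model.parameters())
    for _ in range(steps):
        g_real = torch.autograd.grad(loss_fn(model(real_x), real_y), params)
        g_syn = torch.autograd.grad(loss_fn(model(syn_x), syn_y), params,
                                    create_graph=True)
        gap = sum(((gr.detach() - gs) ** 2).sum()
                  for gr, gs in zip(g_real, g_syn))
        opt.zero_grad()
        gap.backward()
        opt.step()
    return syn_x.detach()
```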
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- FedHiSyn: A Hierarchical Synchronous Federated Learning Framework for Resource and Data Heterogeneity [56.82825745165945]
Federated Learning (FL) enables training a global model without sharing the decentralized raw data stored on multiple devices, thereby protecting data privacy.
We propose a hierarchical synchronous FL framework, i.e., FedHiSyn, to tackle the problems of straggler effects and outdated models.
We evaluate the proposed framework based on MNIST, EMNIST, CIFAR10 and CIFAR100 datasets and diverse heterogeneous settings of devices.
arXiv Detail & Related papers (2022-06-21T17:23:06Z)
- Heterogeneous Federated Learning via Grouped Sequential-to-Parallel Training [60.892342868936865]
Federated learning (FL) is a rapidly growing privacy-preserving collaborative machine learning paradigm.
We propose a data heterogeneous-robust FL approach, FedGSP, to address this challenge.
We show that FedGSP improves the accuracy by 3.7% on average compared with seven state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-31T03:15:28Z)
- Flexible Clustered Federated Learning for Client-Level Data Distribution Shift [13.759582953827229]
We propose a flexible clustered federated learning (CFL) framework named FlexCFL.
We show that FlexCFL can significantly improve absolute test accuracy by +10.6% on FEMNIST compared to FedAvg.
We also evaluate FlexCFL on several open datasets and make comparisons with related CFL frameworks.
arXiv Detail & Related papers (2021-08-22T15:11:39Z)
- FedSAE: A Novel Self-Adaptive Federated Learning Framework in Heterogeneous Systems [14.242716751043533]
Federated Learning (FL) is a novel distributed machine learning paradigm that allows thousands of edge devices to train models locally without uploading their data centrally to the server.
We introduce FedSAE, a novel self-adaptive federated framework that automatically adjusts the training task of each device and actively selects participants to alleviate performance degradation.
In our framework, the server evaluates each device's value of training based on its training loss, then selects the clients of higher value to the global model in order to reduce communication overhead.
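That selection rule reduces to ranking clients by their latest training loss; a minimal sketch, assuming higher loss means higher training value (the interface is hypothetical, and FedSAE additionally adapts each device's local workload):

```python
def select_clients(latest_loss, k):
    # Treat a client's most recent training loss as its training value
    # and pick the k highest-value clients for the next round.
    ranked = sorted(latest_loss, key=latest_loss.get, reverse=True)
    return ranked[:k]

# e.g. select_clients({"a": 0.9, "b": 0.2, "c": 0.5}, k=2) -> ["a", "c"]
```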
arXiv Detail & Related papers (2021-04-15T15:14:11Z)