Heterogeneous Federated Learning via Grouped Sequential-to-Parallel Training
- URL: http://arxiv.org/abs/2201.12976v1
- Date: Mon, 31 Jan 2022 03:15:28 GMT
- Title: Heterogeneous Federated Learning via Grouped Sequential-to-Parallel Training
- Authors: Shenglai Zeng, Zonghang Li, Hongfang Yu, Yihong He, Zenglin Xu, Dusit
Niyato, Han Yu
- Abstract summary: Federated learning (FL) is a rapidly growing privacy-preserving collaborative machine learning paradigm.
We propose a data heterogeneity-robust FL approach, FedGSP, to address this challenge.
We show that FedGSP improves the accuracy by 3.7% on average compared with seven state-of-the-art approaches.
- Score: 60.892342868936865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a rapidly growing privacy-preserving collaborative
machine learning paradigm. In practical FL applications, local data from each
data silo reflect local usage patterns. Therefore, there exists heterogeneity
of data distributions among data owners (a.k.a. FL clients). If not handled
properly, this can lead to model performance degradation. This challenge has
inspired the research field of heterogeneous federated learning, which
currently remains open. In this paper, we propose a data heterogeneity-robust
FL approach, FedGSP, to address this challenge by leveraging a novel concept
of dynamic Sequential-to-Parallel (STP) collaborative training. FedGSP assigns
FL clients to homogeneous groups to minimize the overall distribution
divergence among groups, and increases the degree of parallelism by reassigning
more groups in each round. It also incorporates a novel Inter-Cluster
Grouping (ICG) algorithm to assist in group assignment, which uses the centroid
equivalence theorem to simplify the NP-hard grouping problem and make it
solvable. Extensive experiments have been conducted on the non-i.i.d. FEMNIST
dataset. The results show that FedGSP improves the accuracy by 3.7% on average
compared with seven state-of-the-art approaches, and reduces the training time
and communication overhead by more than 90%.
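To make the two mechanisms in the abstract concrete, the sketch below illustrates one round of Sequential-to-Parallel training together with a centroid-style grouping step over per-client label histograms. It is only a minimal reading of the abstract: the helper names (icg_grouping, stp_round, local_update), the plain k-means clustering, and the round-robin dealing of clients into groups are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def icg_grouping(label_hists, num_groups, iters=10):
    """Hypothetical stand-in for Inter-Cluster Grouping (ICG):
    cluster clients by label histogram with a plain k-means, then
    deal one client from each cluster into every group so that each
    group's distribution stays close to the global centroid."""
    n = len(label_hists)
    k = max(n // num_groups, 1)                      # clusters ~= group size
    centers = label_hists[rng.choice(n, size=k, replace=False)].astype(float)
    for _ in range(iters):
        assign = ((label_hists[:, None] - centers) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if (assign == c).any():
                centers[c] = label_hists[assign == c].mean(0)
    groups = [[] for _ in range(num_groups)]
    for c in range(k):                               # round-robin dealing
        for i, client in enumerate(np.flatnonzero(assign == c)):
            groups[i % num_groups].append(int(client))
    return [g for g in groups if g]

def stp_round(global_model, groups, local_update):
    """One Sequential-to-Parallel round: the model is handed
    client-to-client inside each group (sequential); the groups
    themselves would run concurrently in a real system (parallel),
    and their results are averaged FedAvg-style."""
    group_models = []
    for group in groups:
        model = global_model.copy()
        for client in group:                         # sequential leg
            model = local_update(model, client)
        group_models.append(model)
    return np.mean(group_models, axis=0)
```

Per the abstract, num_groups would be increased from round to round, shifting training from mostly sequential toward mostly parallel.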
Related papers
- FedEP: Tailoring Attention to Heterogeneous Data Distribution with Entropy Pooling for Decentralized Federated Learning [8.576433180938004]
This paper proposes a novel DFL aggregation algorithm, Federated Entropy Pooling (FedEP).
FedEP mitigates the client drift problem by incorporating the statistical characteristics of local distributions instead of any actual data.
Experiments demonstrate that FedEP achieves faster convergence and higher test performance than state-of-the-art approaches.
arXiv Detail & Related papers (2024-10-10T07:39:15Z)
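The FedEP summary above says only that statistical characteristics of local distributions are pooled instead of raw data. One plausible, purely hypothetical reading is sketched below: weight each client's update by the entropy of its local label histogram. The function names and the exact pooling rule are assumptions, not the paper's definition.

```python
import numpy as np

def fedep_weights(label_hists, eps=1e-12):
    """Hypothetical entropy pooling: clients whose local label
    distribution is closer to uniform (higher entropy) receive larger
    aggregation weights, so heavily skewed clients drift the global
    model less. Only label statistics are shared, never raw data."""
    probs = label_hists / label_hists.sum(axis=1, keepdims=True)
    ents = -(probs * np.log(probs + eps)).sum(axis=1)
    return ents / ents.sum()

def aggregate(client_models, weights):
    """Weighted FedAvg-style aggregation of stacked client models."""
    return np.tensordot(weights, np.stack(client_models), axes=1)
```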
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyse a novel aggregation framework that allows for formalizing and tackling computationally heterogeneous data.
The proposed aggregation algorithms are extensively analyzed from both theoretical and experimental perspectives.
arXiv Detail & Related papers (2023-07-12T16:28:21Z)
- Rethinking Data Heterogeneity in Federated Learning: Introducing a New Notion and Standard Benchmarks [65.34113135080105]
We show that data heterogeneity in current setups is not necessarily a problem; in fact, it can be beneficial for the FL participants.
Our observations are intuitive.
Our code is available at https://github.com/MMorafah/FL-SC-NIID.
arXiv Detail & Related papers (2022-09-30T17:15:19Z)
- FedDRL: Deep Reinforcement Learning-based Adaptive Aggregation for Non-IID Data in Federated Learning [4.02923738318937]
Uneven distribution of local data across different edge devices (clients) results in slow model training and reduced accuracy in federated learning.
This work introduces a novel non-IID type encountered in real-world datasets, namely cluster-skew.
We propose FedDRL, a novel FL model that employs deep reinforcement learning to adaptively determine each client's impact factor.
arXiv Detail & Related papers (2022-08-04T04:24:16Z)
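FedDRL, summarized above, trains a deep RL agent to set per-client impact factors. As a greatly simplified stand-in for that agent, the sketch below uses a bandit-style rule: per-client scores are nudged by a reward signal (for example, the change in validation accuracy attributed to a client) and softmax-normalized into aggregation weights. Both the function and the reward definition are assumptions for illustration only.

```python
import numpy as np

def update_impact_factors(scores, rewards, lr=0.5):
    """Bandit-style stand-in for FedDRL's deep-RL agent: raise the
    score of clients whose inclusion helped (reward above average),
    lower the rest, then softmax the scores into aggregation weights."""
    scores = scores + lr * (rewards - rewards.mean())
    z = np.exp(scores - scores.max())
    return scores, z / z.sum()

# e.g. rewards[i] = validation-accuracy change attributed to client i
scores = np.zeros(4)
scores, weights = update_impact_factors(scores, np.array([0.1, -0.2, 0.0, 0.3]))
```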
- Towards Federated Clustering: A Federated Fuzzy $c$-Means Algorithm (FFCM) [0.0]
Federated Learning (FL) is a setting where multiple parties with distributed data collaborate in training a joint Machine Learning (ML) model.
We describe how this area of research can be of interest in its own right, and how it helps address issues such as non-i.i.d. (not independently and identically distributed) data.
We propose two methods to calculate global cluster centers and evaluate their behaviour through challenging numerical experiments.
arXiv Detail & Related papers (2022-01-18T21:22:28Z)
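The FFCM summary mentions two methods for computing global cluster centers but does not detail them. A common federated pattern, sketched below under that assumption, is for each client to send membership-weighted sufficient statistics from a local fuzzy c-means step, which the server pools into global centers; the paper's two specific methods may differ.

```python
import numpy as np

def local_fcm_stats(X, centers, m=2.0):
    """One fuzzy c-means step on a client's private data X: compute
    memberships u, then return the membership-weighted sums the
    server needs. Raw points never leave the client."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
    u = d ** (-2.0 / (m - 1.0))
    u /= u.sum(axis=1, keepdims=True)      # memberships sum to 1 per point
    w = u ** m
    return w.T @ X, w.sum(axis=0)          # numerators, denominators

def global_centers(per_client_stats):
    """Server side: pool the per-client sufficient statistics into
    global cluster centers (one of several possible pooling rules)."""
    num = sum(s[0] for s in per_client_stats)
    den = sum(s[1] for s in per_client_stats)
    return num / den[:, None]
```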
- Robust Convergence in Federated Learning through Label-wise Clustering [6.693651193181458]
Non-IID datasets and the heterogeneous environments of local clients are regarded as major issues in Federated Learning (FL).
We propose a novel Label-wise clustering algorithm that guarantees trainability among geographically heterogeneous local clients.
Our paper shows that the proposed Label-wise clustering achieves prompt and robust convergence compared with other FL algorithms.
arXiv Detail & Related papers (2021-12-28T18:13:09Z)
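The summary above does not spell out the clustering rule. One simple illustrative reading is sketched below: clients sharing the same dominant label are clustered together, so each cluster aggregates over near-IID data. Both the function and the dominant-label rule are assumptions, not the paper's algorithm.

```python
import numpy as np

def labelwise_clusters(label_hists):
    """Illustrative label-wise clustering: clients whose dominant
    label coincides land in the same cluster; FedAvg then runs inside
    each cluster, so every aggregation step sees near-IID data."""
    dominant = label_hists.argmax(axis=1)
    return {int(lab): np.flatnonzero(dominant == lab).tolist()
            for lab in np.unique(dominant)}
```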
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- FedH2L: Federated Learning with Model and Statistical Heterogeneity [75.61234545520611]
Federated learning (FL) enables distributed participants to collectively learn a strong global model without sacrificing their individual data privacy.
We introduce FedH2L, which is both agnostic to the model architecture and robust to different data distributions across participants.
In contrast to approaches sharing parameters or gradients, FedH2L relies on mutual distillation, exchanging only posteriors on a shared seed set between participants in a decentralized manner.
arXiv Detail & Related papers (2021-01-27T10:10:18Z)
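The mutual-distillation exchange described in the FedH2L summary can be sketched as follows: each participant publishes only its class posteriors on a small shared seed set, and adds a KL term pulling its own seed-set posteriors toward each peer's. The function below is a minimal sketch of that loss term under those assumptions, not the paper's exact objective.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mutual_distillation_term(own_seed_logits, peer_seed_posteriors, eps=1e-12):
    """KL(peer || self), averaged over peers and seed examples.
    Because only posteriors on the shared seed set cross the wire,
    participants can run entirely different model architectures."""
    p = softmax(own_seed_logits)                    # (n_seed, n_classes)
    kls = [(q * (np.log(q + eps) - np.log(p + eps))).sum(axis=-1).mean()
           for q in peer_seed_posteriors]           # peers: (n_seed, n_classes)
    return float(np.mean(kls))
```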
- FedGroup: Efficient Clustered Federated Learning via Decomposed Data-Driven Measure [18.083188787905083]
We propose a novel clustered federated learning (CFL) framework, FedGroup.
We show that FedGroup can significantly improve absolute test accuracy by +14.1% on FEMNIST compared to FedAvg.
We also evaluate FedGroup and FedGrouProx (combined with FedProx) on several open datasets.
arXiv Detail & Related papers (2020-10-14T08:15:34Z)
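The FedGroup title names a "decomposed data-driven measure" for clustering clients, which the summary does not define. The sketch below is an assumed reading in that spirit: decompose the stacked client updates with an SVD, keep a few principal directions, and compare clients by cosine similarity of their low-dimensional coefficients. The paper's actual measure may differ in detail.

```python
import numpy as np

def decomposed_similarity(client_updates, num_directions=5):
    """Assumed reading of a decomposed data-driven measure: project
    each client's flattened model update onto the top SVD directions
    of the stacked updates, then compute pairwise cosine similarity
    of the projections. A standard clustering (e.g. k-means) over
    this matrix would then form the client groups."""
    U = np.stack([u.ravel() for u in client_updates])   # (n_clients, dim)
    _, _, Vt = np.linalg.svd(U, full_matrices=False)
    coeff = U @ Vt[:num_directions].T                   # (n_clients, k)
    coeff /= np.linalg.norm(coeff, axis=1, keepdims=True) + 1e-12
    return coeff @ coeff.T                              # pairwise cosine
```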