FLIPS: Federated Learning using Intelligent Participant Selection
- URL: http://arxiv.org/abs/2308.03901v2
- Date: Sat, 30 Sep 2023 04:50:40 GMT
- Title: FLIPS: Federated Learning using Intelligent Participant Selection
- Authors: Rahul Atul Bhope, K. R. Jayaram, Nalini Venkatasubramanian, Ashish
Verma, Gegi Thomas
- Abstract summary: FLIPS clusters parties involved in an FL training job based on the label distribution of their data a priori, and during FL training ensures that each cluster is equitably represented among the selected participants.
We demonstrate that FLIPS significantly improves convergence, achieving 17-20% higher accuracy with 20-60% lower communication costs, and these benefits endure in the presence of straggler participants.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents the design and implementation of FLIPS, a middleware
system to manage data and participant heterogeneity in federated learning (FL)
training workloads. In particular, we examine the benefits of label
distribution clustering on participant selection in federated learning. FLIPS
clusters parties involved in an FL training job based on the label distribution
of their data a priori, and during FL training, ensures that each cluster is
equitably represented in the participants selected. FLIPS can support the most
common FL algorithms, including FedAvg, FedProx, FedDyn, FedOpt and FedYogi. To
manage platform heterogeneity and dynamic resource availability, FLIPS
incorporates a straggler management mechanism to handle changing capacities in
distributed, smart community applications. Privacy of label distributions,
clustering and participant selection is ensured through a trusted execution
environment (TEE). Our comprehensive empirical evaluation compares FLIPS with
random participant selection, as well as three other "smart" selection
mechanisms (Oort, TiFL and gradient clustering) using two real-world datasets,
two benchmark datasets, two different non-IID distributions and three common FL
algorithms (FedYogi, FedProx and FedAvg). We demonstrate that FLIPS
significantly improves convergence, achieving 17-20% higher accuracy with
20-60% lower communication costs, and these benefits endure in the presence of
straggler participants.
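The FLIPS implementation itself is not part of this listing, so the following is a minimal sketch of the two steps the abstract describes: cluster parties by their label distributions (KMeans is a stand-in for whatever clustering FLIPS actually uses), then draw each round's participants equitably across clusters. All function and parameter names are illustrative, and the TEE-based privacy machinery is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans  # stand-in clustering; an assumption


def cluster_by_label_distribution(label_dists, n_clusters):
    """Group parties whose local label histograms look alike.

    label_dists: (n_parties, n_classes) array; row i is party i's
    normalized label histogram, known a priori (FLIPS gathers these
    privately inside a TEE, which this sketch omits).
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return km.fit_predict(label_dists)  # cluster id for each party


def select_participants(cluster_ids, n_select, rng=None):
    """Pick n_select parties so every cluster is equitably represented."""
    if rng is None:
        rng = np.random.default_rng()
    clusters = np.unique(cluster_ids)
    per_cluster = max(1, n_select // len(clusters))
    chosen = []
    for c in clusters:
        members = np.flatnonzero(cluster_ids == c)
        take = min(per_cluster, len(members))
        chosen.extend(rng.choice(members, size=take, replace=False))
    # Top up at random if integer division left a shortfall.
    remaining = [p for p in range(len(cluster_ids)) if p not in chosen]
    shortfall = n_select - len(chosen)
    if shortfall > 0:
        chosen.extend(rng.choice(remaining,
                                 size=min(shortfall, len(remaining)),
                                 replace=False))
    return [int(p) for p in chosen[:n_select]]


# Example: 100 parties with skewed (non-IID) label histograms over 10
# classes; select 10 participants per round across 5 clusters.
dists = np.random.default_rng(0).dirichlet(np.full(10, 0.5), size=100)
ids = cluster_by_label_distribution(dists, n_clusters=5)
print(select_participants(ids, n_select=10, rng=np.random.default_rng(0)))
```

Compared with uniform random sampling, drawing from every label-distribution cluster each round keeps rare label distributions represented among the participants, which is the behavior the abstract credits for faster convergence.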
Related papers
- FedLPS: Heterogeneous Federated Learning for Multiple Tasks with Local
Parameter Sharing [14.938531944702193]
We propose Federated Learning with Local Parameter Sharing (FedLPS).
FedLPS uses transfer learning to facilitate the deployment of multiple tasks on a single device by dividing the local model into a shareable encoder and task-specific encoders.
FedLPS significantly outperforms the state-of-the-art (SOTA) FL frameworks by up to 4.88% and reduces the computational resource consumption by 21.3%.
arXiv Detail & Related papers (2024-02-13T16:30:30Z)
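FedLPS's actual architecture is not reproduced in this listing; the toy model below only sketches the stated split, one shareable encoder feeding separate task-specific modules on a device, so that federated aggregation can act on the encoder alone. All shapes and names are illustrative assumptions.

```python
import numpy as np


class LocalModel:
    """A local model split into a shareable encoder and per-task
    modules, mirroring the FedLPS idea of one encoder shared by all
    tasks on a device."""

    def __init__(self, in_dim, hidden_dim, task_out_dims, rng):
        # Encoder parameters are shared by every task on this device
        # (and are what federated aggregation would operate on).
        self.W_enc = rng.normal(scale=0.1, size=(in_dim, hidden_dim))
        # One lightweight module per task stays local and task-specific.
        self.heads = {t: rng.normal(scale=0.1, size=(hidden_dim, d))
                      for t, d in task_out_dims.items()}

    def forward(self, x, task):
        h = np.maximum(x @ self.W_enc, 0.0)  # shared ReLU encoder
        return h @ self.heads[task]          # task-specific prediction


rng = np.random.default_rng(3)
model = LocalModel(in_dim=16, hidden_dim=8,
                   task_out_dims={"task_a": 3, "task_b": 5}, rng=rng)
x = rng.normal(size=(2, 16))
print(model.forward(x, "task_a").shape)  # (2, 3)
print(model.forward(x, "task_b").shape)  # (2, 5)
```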
- DPP-based Client Selection for Federated Learning with Non-IID Data [97.1195165400568]
This paper proposes a client selection (CS) method to tackle the communication bottleneck of federated learning (FL).
We first analyze the effect of CS in FL and show that FL training can be accelerated by adequately choosing participants to diversify the training dataset in each round of training.
We leverage data profiling and determinantal point process (DPP) sampling techniques to develop an algorithm termed Federated Learning with DPP-based Participant Selection (FL-DP$^3$S).
arXiv Detail & Related papers (2023-03-30T13:14:54Z)
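FL-DP$^3$S is specified in the paper above, not reproduced here; as a rough illustration of its underlying idea (selecting participants that diversify the round's training data), here is a naive greedy MAP approximation of DPP sampling over a similarity kernel built from client label histograms. The kernel choice and all names are assumptions made for this sketch.

```python
import numpy as np


def greedy_dpp_select(features, k):
    """Greedily approximate DPP MAP inference: at each step, add the
    client that most increases the log-determinant of the selected
    kernel submatrix, which rewards mutual dissimilarity (diversity).

    features: (n_clients, d) rows, e.g. local label histograms.
    """
    # Linear kernel over L2-normalized rows; a small jitter keeps the
    # kernel positive definite so log-determinants are well defined.
    X = features / np.linalg.norm(features, axis=1, keepdims=True)
    K = X @ X.T + 1e-6 * np.eye(len(X))
    selected = []
    for _ in range(k):
        best_i, best_logdet = None, -np.inf
        for i in range(len(X)):
            if i in selected:
                continue
            idx = selected + [i]
            _, logdet = np.linalg.slogdet(K[np.ix_(idx, idx)])
            if logdet > best_logdet:
                best_i, best_logdet = i, logdet
        selected.append(best_i)
    return selected


# Example: pick 5 mutually dissimilar clients out of 50, where each
# client is described by its (non-IID) label histogram.
rng = np.random.default_rng(1)
label_hists = rng.dirichlet(np.ones(10), size=50)
print(greedy_dpp_select(label_hists, k=5))
```

Because the log-determinant grows most when the added row is far from the span of the already-selected rows, the greedy loop naturally avoids picking clients with near-duplicate label distributions.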
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- A Survey on Participant Selection for Federated Learning in Mobile Networks [47.88372677863646]
Federated Learning (FL) is an efficient distributed machine learning paradigm that employs private datasets in a privacy-preserving manner.
Due to limited communication bandwidth and unstable availability of such devices in a mobile network, only a fraction of end devices can be selected in each round.
arXiv Detail & Related papers (2022-07-08T04:22:48Z)
- Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183]
Federated learning (FL) provides a distributed framework in which multiple participants learn collaboratively without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z)
- CoFED: Cross-silo Heterogeneous Federated Multi-task Learning via Co-training [11.198612582299813]
Federated Learning (FL) is a machine learning technique that enables participants to train high-quality models collaboratively without exchanging their private data.
We propose CoFED, a communication-efficient FL scheme based on pseudo-labeling unlabeled data, in the style of co-training; a sketch of this step follows the entry below.
Experimental results show that CoFED achieves better performance with a lower communication cost.
arXiv Detail & Related papers (2022-02-17T11:34:20Z)
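CoFED's full co-training protocol is not given in this listing; the fragment below sketches just the pseudo-labeling step it builds on, keeping only unlabeled examples a local model classifies with high confidence. The threshold and names are illustrative assumptions.

```python
import numpy as np


def pseudo_label(probs, threshold=0.9):
    """Keep only the unlabeled examples the local model is confident
    about, in the spirit of co-training-style pseudo-labeling.

    probs: (n_examples, n_classes) predicted class probabilities.
    Returns the indices that pass the threshold and their labels.
    """
    conf = probs.max(axis=1)
    keep = conf >= threshold
    return np.flatnonzero(keep), probs[keep].argmax(axis=1)


# Example: fake predicted probabilities for 5 unlabeled examples.
probs = np.array([[0.97, 0.02, 0.01],
                  [0.40, 0.35, 0.25],
                  [0.05, 0.93, 0.02],
                  [0.33, 0.33, 0.34],
                  [0.10, 0.05, 0.85]])
idx, labels = pseudo_label(probs)
print(idx, labels)  # examples 0 and 2 pass the 0.9 threshold
```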
- Heterogeneous Federated Learning via Grouped Sequential-to-Parallel Training [60.892342868936865]
Federated learning (FL) is a rapidly growing privacy-preserving collaborative machine learning paradigm.
We propose a data-heterogeneity-robust FL approach, FedGSP, to address this challenge.
We show that FedGSP improves the accuracy by 3.7% on average compared with seven state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-31T03:15:28Z)
- Flexible Clustered Federated Learning for Client-Level Data Distribution Shift [13.759582953827229]
We propose a flexible clustered federated learning (CFL) framework named FlexCFL.
We show that FlexCFL can significantly improve absolute test accuracy by +10.6% on FEMNIST compared to FedAvg.
We also evaluate FlexCFL on several open datasets and compare it with related CFL frameworks.
arXiv Detail & Related papers (2021-08-22T15:11:39Z)
- Communication-Efficient Hierarchical Federated Learning for IoT Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
arXiv Detail & Related papers (2021-07-14T08:32:39Z)
- FedH2L: Federated Learning with Model and Statistical Heterogeneity [75.61234545520611]
Federated learning (FL) enables distributed participants to collectively learn a strong global model without sacrificing their individual data privacy.
We introduce FedH2L, which is agnostic to the model architecture and robust to different data distributions across participants.
In contrast to approaches sharing parameters or gradients, FedH2L relies on mutual distillation, exchanging only posteriors on a shared seed set between participants in a decentralized manner.
arXiv Detail & Related papers (2021-01-27T10:10:18Z)
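FedH2L's full decentralized protocol is not reproduced here; the snippet below sketches its central ingredient as stated in the summary, a mutual-distillation loss over posteriors on a shared seed set. The exact loss form and KL direction are assumptions made for this sketch.

```python
import numpy as np


def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)


def mutual_distillation_loss(own_logits, peer_posteriors, eps=1e-12):
    """Average KL(peer || own) over peers and shared seed examples.

    own_logits:      (n_seed, n_classes) local model outputs on the
                     shared seed set.
    peer_posteriors: list of (n_seed, n_classes) softmax outputs
                     received from other participants. Only these
                     posteriors cross the network, never parameters
                     or gradients.
    """
    p_own = softmax(own_logits)
    per_peer = []
    for q in peer_posteriors:
        kl = np.sum(q * (np.log(q + eps) - np.log(p_own + eps)), axis=1)
        per_peer.append(kl.mean())
    return float(np.mean(per_peer))


# Example: two peers' posteriors on a 4-example, 3-class seed set.
rng = np.random.default_rng(2)
own = rng.normal(size=(4, 3))
peers = [softmax(rng.normal(size=(4, 3))) for _ in range(2)]
print(mutual_distillation_loss(own, peers))
```

Since only (n_seed x n_classes) posterior matrices cross the network, the exchange cost is independent of model size, which is what allows participants to run different architectures.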
- FedGroup: Efficient Clustered Federated Learning via Decomposed Data-Driven Measure [18.083188787905083]
We propose a novel clustered federated learning (CFL) framework, FedGroup.
We show that FedGroup can significantly improve absolute test accuracy by +14.1% on FEMNIST compared to FedAvg.
We also evaluate FedGroup and FedGrouProx (combined with FedProx) on several open datasets.
arXiv Detail & Related papers (2020-10-14T08:15:34Z)