FedSplitX: Federated Split Learning for Computationally-Constrained Heterogeneous Clients
- URL: http://arxiv.org/abs/2310.14579v1
- Date: Mon, 23 Oct 2023 05:34:31 GMT
- Title: FedSplitX: Federated Split Learning for Computationally-Constrained Heterogeneous Clients
- Authors: Jiyun Shin, Jinhyun Ahn, Honggu Kang, Joonhyuk Kang
- Abstract summary: FedSplitX splits a large model into client-side and server-side components at multiple partition points to accommodate diverse client capabilities.
Our experiments demonstrate that FedSplitX effectively utilizes server capabilities to train large models, outperforming baseline approaches.
- Score: 6.21295508577576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Foundation models (FMs) have demonstrated remarkable performance in machine
learning but demand extensive training data and computational resources.
Federated learning (FL) addresses the challenges posed by FMs, especially
those related to data privacy and computational burden. However, FL on FMs faces
challenges in situations with heterogeneous clients possessing varying
computing capabilities, as clients with limited capabilities may struggle to
train the computationally intensive FMs. To address these challenges, we
propose FedSplitX, a novel FL framework that tackles system heterogeneity.
FedSplitX splits a large model into client-side and server-side components at
multiple partition points to accommodate diverse client capabilities. This
approach enables clients to collaborate while leveraging the server's
computational power, leading to improved model performance compared to
baselines that limit the model size to what the least capable client can support.
Furthermore, FedSplitX incorporates auxiliary networks at each partition point
to reduce communication costs and delays while enhancing model performance. Our
experiments demonstrate that FedSplitX effectively utilizes server capabilities
to train large models, outperforming baseline approaches.
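The split-training pattern the abstract describes can be illustrated in a few lines. The sketch below is a toy example under assumed names (SplitBackbone, client_step, server_step), not the authors' implementation: the backbone is cut at a client-specific partition point, the client trains its front layers through an auxiliary head so it does not have to wait for server-side gradients, and only detached activations and labels travel to the server, which trains the remaining layers.

```python
# Minimal, hedged sketch of split training with an auxiliary head at the cut.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitBackbone(nn.Module):
    """Toy backbone exposing several candidate partition points (block gaps)."""
    def __init__(self, width=64, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Linear(32 if i == 0 else width, width), nn.ReLU())
            for i in range(4)
        ])
        self.head = nn.Linear(width, num_classes)

    def forward_range(self, x, start, end):
        for block in self.blocks[start:end]:
            x = block(x)
        return x

def client_step(model, aux_head, x, y, cut, opt):
    """Client side: run blocks [0, cut) and train them through the auxiliary head."""
    smashed = model.forward_range(x, 0, cut)
    loss = F.cross_entropy(aux_head(smashed), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return smashed.detach(), y  # only these cross the network

def server_step(model, smashed, y, cut, opt):
    """Server side: run blocks [cut, end) plus the main head on received activations."""
    out = model.head(model.forward_range(smashed, cut, len(model.blocks)))
    loss = F.cross_entropy(out, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# One toy exchange for a weak client assigned the shallowest cut (cut=1).
model, aux_head, cut = SplitBackbone(), nn.Linear(64, 10), 1
c_opt = torch.optim.SGD(list(model.blocks[:cut].parameters()) + list(aux_head.parameters()), lr=0.1)
s_opt = torch.optim.SGD(list(model.blocks[cut:].parameters()) + list(model.head.parameters()), lr=0.1)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
smashed, labels = client_step(model, aux_head, x, y, cut, c_opt)
print(server_step(model, smashed, labels, cut, s_opt))
```

In the full framework described above, an auxiliary head would sit at every candidate partition point so that clients of any capability can train their portion locally; a single cut is shown here for brevity.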
Related papers
- FedMoE-DA: Federated Mixture of Experts via Domain Aware Fine-grained Aggregation [22.281467168796645]
Federated learning (FL) is a collaborative machine learning approach that enables multiple clients to train models without sharing their private data.
We propose FedMoE-DA, a new FL model training framework that incorporates a novel domain-aware, fine-grained aggregation strategy to simultaneously enhance robustness, personalization, and communication efficiency.
arXiv Detail & Related papers (2024-11-04T14:29:04Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a pruned global part, shared with all devices to learn common data representations, and a personalized part that is fine-tuned for each specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Effectively Heterogeneous Federated Learning: A Pairing and Split Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is proposed that recasts the optimization of training latency as a graph edge-selection problem.
Simulation results show that the proposed method significantly improves FL training speed while achieving high performance.
arXiv Detail & Related papers (2023-08-26T11:10:54Z)
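As a rough sketch of the pairing idea in the entry above (not the paper's algorithm), the snippet below treats every candidate client pair as a weighted graph edge, with an assumed placeholder latency estimate as the weight, and greedily selects the cheapest edges so that each client is matched at most once.

```python
# Generic greedy edge selection over client pairs; the cost model is assumed.
from itertools import combinations

def pair_clients(compute_speed, data_size):
    """compute_speed and data_size are dicts keyed by client id."""
    def latency(a, b):
        # Placeholder cost: the pair's combined data processed at combined speed.
        return (data_size[a] + data_size[b]) / (compute_speed[a] + compute_speed[b])

    edges = sorted(combinations(compute_speed, 2), key=lambda e: latency(*e))
    matched, pairs = set(), []
    for a, b in edges:                      # greedy edge selection
        if a not in matched and b not in matched:
            pairs.append((a, b))
            matched.update((a, b))
    return pairs

print(pair_clients(compute_speed={"c1": 4.0, "c2": 1.0, "c3": 3.0, "c4": 1.5},
                   data_size={"c1": 120, "c2": 80, "c3": 100, "c4": 90}))
```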
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- FedCliP: Federated Learning with Client Pruning [3.796320380104124]
Federated learning (FL) is a newly emerging distributed learning paradigm.
One fundamental bottleneck in FL is the heavy communication overheads between the distributed clients and the central server.
We propose FedCliP, the first communication-efficient FL training framework from a macro perspective.
arXiv Detail & Related papers (2023-01-17T09:15:37Z)
- FedFM: Anchor-based Feature Matching for Data Heterogeneity in Federated Learning [91.74206675452888]
We propose a novel method FedFM, which guides each client's features to match shared category-wise anchors.
To achieve higher efficiency and flexibility, we propose a FedFM variant, called FedFM-Lite, in which clients communicate with the server using fewer synchronization rounds and less communication bandwidth.
arXiv Detail & Related papers (2022-10-14T08:11:34Z)
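A minimal sketch of the anchor-based feature matching described in the FedFM entry above, under assumed names and an assumed squared-distance penalty rather than FedFM's exact objective: each sample's feature is pulled toward a server-shared anchor for its class, on top of the usual classification loss.

```python
# Toy anchor-matching loss; the penalty form and weight `lam` are assumptions.
import torch
import torch.nn.functional as F

def anchor_matching_loss(features, logits, labels, anchors, lam=0.1):
    """features: (B, D) client features; anchors: (C, D) shared class-wise anchors."""
    cls_loss = F.cross_entropy(logits, labels)
    match_loss = ((features - anchors[labels]) ** 2).sum(dim=1).mean()
    return cls_loss + lam * match_loss

# Toy usage with random tensors standing in for one client minibatch.
B, D, C = 16, 32, 10
features, logits = torch.randn(B, D), torch.randn(B, C)
labels, anchors = torch.randint(0, C, (B,)), torch.randn(C, D)
print(anchor_matching_loss(features, logits, labels, anchors))
```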
- No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices [79.16481453598266]
We propose InclusiveFL, a client-inclusive federated learning method to handle this problem.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities.
We also propose an effective method to share the knowledge among multiple local models with different sizes.
arXiv Detail & Related papers (2022-02-16T13:03:27Z)
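A rough sketch of the capability-dependent model sizing in the InclusiveFL entry above, with assumed names and a simple per-block averaging rule; the paper's knowledge-sharing method between model sizes is not reproduced here.

```python
# Sketch: weaker clients get fewer backbone blocks of a shared architecture,
# and the server averages each block only over the clients that hold it.
import torch
import torch.nn as nn

def build_client_model(num_blocks, width=64, in_dim=32, num_classes=10):
    blocks = nn.ModuleList(
        [nn.Sequential(nn.Linear(in_dim if i == 0 else width, width), nn.ReLU())
         for i in range(num_blocks)])
    return nn.ModuleDict({"blocks": blocks, "head": nn.Linear(width, num_classes)})

def aggregate(models):
    """Average each backbone block over the clients that actually hold it."""
    max_blocks = max(len(m["blocks"]) for m in models)
    for i in range(max_blocks):
        holders = [m for m in models if len(m["blocks"]) > i]
        avg = {k: torch.stack([h["blocks"][i].state_dict()[k] for h in holders]).mean(0)
               for k in holders[0]["blocks"][i].state_dict()}
        for h in holders:
            h["blocks"][i].load_state_dict(avg)
    head_avg = {k: torch.stack([m["head"].state_dict()[k] for m in models]).mean(0)
                for k in models[0]["head"].state_dict()}
    for m in models:
        m["head"].load_state_dict(head_avg)

# Three clients with capability-dependent depths: 1, 2, and 3 blocks.
clients = [build_client_model(d) for d in (1, 2, 3)]
aggregate(clients)
```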
- AdaSplit: Adaptive Trade-offs for Resource-constrained Distributed Deep Learning [18.3841463794885]
Split learning (SL) reduces client compute load by splitting the model training between client and server.
AdaSplit enables efficiently scaling SL to low resource scenarios by reducing bandwidth consumption and improving performance across heterogeneous clients.
arXiv Detail & Related papers (2021-12-02T23:33:15Z)
- Personalized Federated Learning via Maximizing Correlation with Sparse and Hierarchical Extensions [14.862798952297105]
Federated Learning (FL) is a collaborative machine learning technique to train a global model without obtaining clients' private data.
We propose pFedMac, a novel personalized federated learning method via maximizing correlation.
We show that pFedMac performs better than L2-norm distance-based personalization methods.
arXiv Detail & Related papers (2021-07-12T11:43:40Z)
- Low-Latency Federated Learning over Wireless Channels with Differential Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
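The BLADE-FL round structure summarized in the last entry can be mimicked with a toy simulation. In the snippet below, a random winner stands in for the block-generation competition and plain weight vectors stand in for models, so this is an assumption-laden illustration rather than the paper's protocol.

```python
# Toy simulation of one broadcast / block-generation / aggregation round.
import random

def local_train(weights, client_id):
    # Placeholder for local SGD: nudge the weights by a client-specific amount.
    return [w + 0.01 * (client_id + 1) for w in weights]

def blade_fl_round(client_weights):
    # 1) Broadcast: every client sees every locally trained model.
    broadcast = {cid: local_train(w, cid) for cid, w in client_weights.items()}
    # 2) Block generation: a random winner packs the received models into a block.
    winner = random.choice(list(broadcast))
    block = {"generator": winner, "models": broadcast}
    # 3) Aggregation: each client averages the models stored in the block.
    models = list(block["models"].values())
    averaged = [sum(ws) / len(models) for ws in zip(*models)]
    return {cid: list(averaged) for cid in client_weights}, block

clients = {cid: [0.0, 0.0, 0.0] for cid in range(4)}
clients, block = blade_fl_round(clients)
print(block["generator"], clients[0])
```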