Effectively Heterogeneous Federated Learning: A Pairing and Split
Learning Based Approach
- URL: http://arxiv.org/abs/2308.13849v1
- Date: Sat, 26 Aug 2023 11:10:54 GMT
- Title: Effectively Heterogeneous Federated Learning: A Pairing and Split
Learning Based Approach
- Authors: Jinglong Shen, Xiucheng Wang, Nan Cheng, Longfei Ma, Conghao Zhou,
Yuan Zhang
- Abstract summary: This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A heuristic greedy algorithm is proposed that recasts the training-latency optimization as a graph edge-selection problem.
Simulation results show that the proposed method significantly improves FL training speed and achieves high performance.
- Score: 16.093068118849246
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a promising paradigm, federated learning (FL) is widely used in
privacy-preserving machine learning: it allows distributed devices to
collaboratively train a model without transmitting raw data among clients.
Despite its immense potential, FL suffers from training-speed bottlenecks
caused by client heterogeneity, which escalates training latency and stalls
server aggregation on stragglers. To address this challenge, a novel split
federated learning (SFL) framework is proposed that pairs clients with
different computational resources, where pairing is decided by the clients'
computing resources and the communication rates between them. The neural
network model is split into two parts at the logical level, and each client
computes only the part assigned to it, using split learning (SL) for forward
inference and backward training. Moreover, to handle the client pairing
problem effectively, a heuristic greedy algorithm is proposed that recasts the
training-latency optimization as a graph edge-selection problem. Simulation
results show that the proposed method significantly improves FL training speed
and achieves high performance under both independent and identically
distributed (IID) and non-IID data distributions.
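To make the pairing idea concrete, below is a minimal, illustrative Python sketch of greedy client pairing viewed as graph edge selection: every candidate client pair is an edge weighted by an estimated per-round split-learning latency, and the lowest-latency edges are picked greedily until all clients are matched. The latency model, the field names (compute, rate_to, activation_bits), and the head/tail workload split are assumptions made for illustration, not the paper's exact formulation.

```python
# Illustrative sketch: greedy client pairing as graph edge selection.
# Each candidate pair (i, j) is an edge whose weight estimates the latency
# of jointly training one round via split learning. The latency model and
# the selection rule below are assumptions, not the paper's exact design.
from itertools import combinations

def pair_latency(client_i, client_j, flops_head, flops_tail):
    """Rough per-round latency of a pair: the weaker client runs the smaller
    (head) part, the stronger client runs the tail, and the intermediate
    activations/gradients cross the link between them once in each direction."""
    weak, strong = sorted((client_i, client_j), key=lambda c: c["compute"])
    compute_time = flops_head / weak["compute"] + flops_tail / strong["compute"]
    comm_time = 2 * weak["activation_bits"] / weak["rate_to"][strong["id"]]
    return compute_time + comm_time

def greedy_pairing(clients, flops_head, flops_tail):
    """Greedily pick the lowest-latency edge among still-unpaired clients,
    repeating until every client is matched (an even client count is assumed)."""
    edges = sorted(
        ((pair_latency(a, b, flops_head, flops_tail), a["id"], b["id"])
         for a, b in combinations(clients, 2)),
        key=lambda e: e[0],
    )
    paired, pairs = set(), []
    for latency, i, j in edges:
        if i not in paired and j not in paired:
            pairs.append((i, j, latency))
            paired.update((i, j))
    return pairs

# Toy usage: four clients with heterogeneous compute and link rates.
clients = [
    {"id": 0, "compute": 1e9, "activation_bits": 8e6, "rate_to": {1: 5e7, 2: 2e7, 3: 4e7}},
    {"id": 1, "compute": 4e9, "activation_bits": 8e6, "rate_to": {0: 5e7, 2: 3e7, 3: 1e7}},
    {"id": 2, "compute": 2e9, "activation_bits": 8e6, "rate_to": {0: 2e7, 1: 3e7, 3: 6e7}},
    {"id": 3, "compute": 3e9, "activation_bits": 8e6, "rate_to": {0: 4e7, 1: 1e7, 2: 6e7}},
]
print(greedy_pairing(clients, flops_head=2e9, flops_tail=6e9))
```

In the paper, pairing is driven by both the clients' computing resources and the inter-client communication rates; the sketch mirrors that by charging the weaker client with the head of the split model and routing activations and gradients over the pair's link.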
Related papers
- Federated Learning based on Pruning and Recovery [0.0]
This framework integrates asynchronous learning algorithms and pruning techniques.
It addresses the inefficiencies of traditional federated learning algorithms in scenarios involving heterogeneous devices.
It also tackles the staleness issue and inadequate training of certain clients in asynchronous algorithms.
arXiv Detail & Related papers (2024-03-16T14:35:03Z)
- Achieving Linear Speedup in Asynchronous Federated Learning with Heterogeneous Clients [30.135431295658343]
Federated learning (FL) aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
In this paper, we propose an efficient asynchronous federated learning (AFL) framework called DeFedAvg.
DeFedAvg is the first AFL algorithm that achieves the desirable linear speedup property, which indicates its high scalability.
arXiv Detail & Related papers (2024-02-17T05:22:46Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
- Joint Client Scheduling and Resource Allocation under Channel Uncertainty in Federated Learning [47.97586668316476]
Federated learning (FL) over wireless networks depends on the reliability of the client-server connectivity and clients' local computation capabilities.
In this article, we investigate the problem of client scheduling and resource block (RB) allocation to enhance the performance of model training using FL.
The proposed method reduces the training accuracy loss gap by up to 40.7% compared to state-of-the-art client scheduling and RB allocation methods.
arXiv Detail & Related papers (2021-06-12T15:18:48Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
- Coded Computing for Federated Learning at the Edge [3.385874614913973]
Federated Learning (FL) enables training a global model from data generated locally at the client nodes, without moving client data to a centralized server.
Recent work proposes to mitigate stragglers and speed up training for linear regression tasks by assigning redundant computations at the MEC server.
We develop CodedFedL, which addresses the difficult task of extending coded federated learning (CFL) to distributed non-linear regression and classification problems with multi-output labels.
arXiv Detail & Related papers (2020-07-07T08:20:47Z)