Coded Computing for Federated Learning at the Edge
- URL: http://arxiv.org/abs/2007.03273v3
- Date: Sun, 9 May 2021 20:09:36 GMT
- Title: Coded Computing for Federated Learning at the Edge
- Authors: Saurav Prakash, Sagar Dhakal, Mustafa Akdeniz, A. Salman Avestimehr,
Nageen Himayat
- Abstract summary: Federated Learning (FL) enables training a global model from data generated locally at the client nodes, without moving client data to a centralized server.
Recent work proposes to mitigate stragglers and speed up training for linear regression tasks by assigning redundant computations at the MEC server.
We develop CodedFedL that addresses the difficult task of extending CFL to distributed non-linear regression and classification problems with multioutput labels.
- Score: 3.385874614913973
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is an exciting new paradigm that enables training a
global model from data generated locally at the client nodes, without moving
client data to a centralized server. Performance of FL in a multi-access edge
computing (MEC) network suffers from slow convergence due to heterogeneity and
stochastic fluctuations in compute power and communication link qualities
across clients. A recent work, Coded Federated Learning (CFL), proposes to
mitigate stragglers and speed up training for linear regression tasks by
assigning redundant computations at the MEC server. Coding redundancy in CFL is
computed by exploiting statistical properties of compute and communication
delays. We develop CodedFedL that addresses the difficult task of extending CFL
to distributed non-linear regression and classification problems with
multioutput labels. The key innovation of our work is to exploit distributed
kernel embedding using random Fourier features that transforms the training
task into distributed linear regression. We provide an analytical solution for
load allocation, and demonstrate significant performance gains for CodedFedL
through experiments over benchmark datasets using practical network parameters.
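To make the kernel-embedding step concrete, here is a minimal sketch of random Fourier features for a Gaussian kernel, with every client applying the same shared random map so that training reduces to one global linear regression. The dimensions, kernel choice, and variable names are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

def rff_embed(X, W, b):
    """Map features X (n x d) to a D-dimensional random Fourier feature
    space; z(x) @ z(y) then approximates a Gaussian kernel k(x, y)."""
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
d, D, sigma = 20, 512, 1.0
# Shared randomness: every client draws the same (W, b), e.g. from a
# common seed, so all embedded datasets live in one feature space.
W = rng.normal(0.0, 1.0 / sigma, size=(d, D))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

# A client embeds its local data; the nonlinear task becomes linear
# regression Z @ theta ~= Y, which straggler coding for linear models
# can then protect.
X_client = rng.normal(size=(100, d))   # toy local features
Y_client = rng.normal(size=(100, 5))   # toy multi-output labels
Z_client = rff_embed(X_client, W, b)
theta, *_ = np.linalg.lstsq(Z_client, Y_client, rcond=None)
```

Because (W, b) are shared, the clients' embedded datasets stack into a single linear system, which is what lets straggler-coding machinery for linear regression carry over to nonlinear tasks.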
Related papers
- TurboSVM-FL: Boosting Federated Learning through SVM Aggregation for Lazy Clients [44.44776028287441]
TurboSVM-FL is a novel federated aggregation strategy that poses no additional computation burden on the client side.
We evaluate TurboSVM-FL on multiple datasets including FEMNIST, CelebA, and Shakespeare.
arXiv Detail & Related papers (2024-01-22T14:59:11Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling converges and achieves linear speedup with respect to the number of clients; a hedged local-update sketch follows this entry.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
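As a rough illustration of the FedLALR entry above, here is one local AMSGrad step in which the step size `lr` is the client's own tuned value; the paper's actual auto-tuning rule is not reproduced here, so treat the function below as a hedged sketch.

```python
import numpy as np

def local_amsgrad_step(w, g, m, v, vhat, lr, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AMSGrad update with a client-specific step size lr.

    Each client keeps its own optimizer state (m, v, vhat) across local
    steps; heterogeneity enters only through the per-client lr."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    vhat = np.maximum(vhat, v)                 # AMSGrad: running max of v
    w = w - lr * m / (np.sqrt(vhat) + eps)
    return w, m, v, vhat
```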
- Effectively Heterogeneous Federated Learning: A Pairing and Split Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is proposed by recasting the minimization of training latency as a graph edge-selection problem, as sketched after this entry.
Simulation results show that the proposed method significantly improves FL training speed while achieving high model performance.
arXiv Detail & Related papers (2023-08-26T11:10:54Z)
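A hedged sketch of the pairing idea above: treat pairing as picking edges in a client graph, and greedily match the fastest remaining client with the slowest so that no pair is dominated by a single straggler. The speed-based matching rule here is an assumption; the paper derives its own latency-based edge weights.

```python
def greedy_pairs(compute_speeds):
    """Greedily select edges (pairs) from the client graph: sort clients
    by compute speed, then repeatedly pair the fastest with the slowest."""
    order = sorted(range(len(compute_speeds)), key=lambda i: compute_speeds[i])
    pairs, lo, hi = [], 0, len(order) - 1
    while lo < hi:
        pairs.append((order[hi], order[lo]))   # (fast client, slow client)
        lo, hi = lo + 1, hi - 1
    return pairs

print(greedy_pairs([1.0, 8.0, 3.0, 5.0]))      # -> [(1, 0), (3, 2)]
```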
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate a loss function minimization problem under long-term energy consumption and latency constraints, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distributions.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- Fast-Convergent Federated Learning via Cyclic Aggregation [10.658882342481542]
Federated learning (FL) aims at optimizing a shared global model over multiple edge devices without transmitting (private) data to the central server.
This paper applies a cyclic learning rate at the server side to reduce the number of training iterations while improving performance; a schedule sketch follows this entry.
Numerical results validate that simply plugging the proposed cyclic aggregation into existing FL algorithms effectively reduces the number of training iterations and improves performance.
arXiv Detail & Related papers (2022-10-29T07:20:59Z)
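To illustrate the cyclic-aggregation entry above: a server-side learning rate that cycles with the communication round, applied when merging client updates. The cosine cycle shape and constants are assumptions; the paper's exact schedule may differ.

```python
import math

def cyclic_server_lr(round_idx, base_lr=0.1, min_lr=0.01, cycle_len=20):
    """Cyclic (cosine-shaped) server learning rate for round round_idx."""
    phase = (round_idx % cycle_len) / cycle_len
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(2.0 * math.pi * phase))

# Server update per round t: global += cyclic_server_lr(t) * avg(client_deltas)
```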
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Joint Client Scheduling and Resource Allocation under Channel Uncertainty in Federated Learning [47.97586668316476]
Federated learning (FL) over wireless networks depends on the reliability of the client-server connectivity and clients' local computation capabilities.
In this article, we investigate the problem of client scheduling and resource block (RB) allocation to enhance the performance of model training using FL.
The proposed method reduces the training accuracy loss gap by up to 40.7% compared to state-of-the-art client scheduling and RB allocation methods.
arXiv Detail & Related papers (2021-06-12T15:18:48Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
- Coded Computing for Low-Latency Federated Learning over Wireless Edge Networks [10.395838711844892]
Federated learning enables training a global model from data located at the client nodes, without sharing or moving client data to a centralized server.
We propose a novel coded computing framework, CodedFedL, that injects structured coding redundancy into federated learning for mitigating stragglers and speeding up the training procedure (see the sketch after this entry).
arXiv Detail & Related papers (2020-11-12T06:21:59Z)
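For intuition on the coding redundancy the two CodedFedL papers inject: once training is (post-embedding) linear regression, the MEC server can hold random linear combinations of the clients' data and compute a gradient on them that stands in for missing straggler updates. The plain Gaussian code, sizes, and names below are simplifying assumptions; CodedFedL designs its actual encoding and load allocation from the compute and communication delay statistics.

```python
import numpy as np

rng = np.random.default_rng(1)
n, D, k, c = 200, 64, 3, 40     # data points, features, outputs, coded rows
Z = rng.normal(size=(n, D))     # client features after kernel embedding (toy)
Y = rng.normal(size=(n, k))     # multi-output labels (toy)

# Encoding matrix held at the MEC server; the 1/sqrt(c) scaling makes
# G.T @ G equal the identity in expectation, so the coded gradient is
# an unbiased estimate of the full gradient.
G = rng.normal(size=(c, n)) / np.sqrt(c)
Zc, Yc = G @ Z, G @ Y           # coded data, uploaded once before training

theta = np.zeros((D, k))
full_grad = Z.T @ (Z @ theta - Y)        # what all clients jointly compute
coded_grad = Zc.T @ (Zc @ theta - Yc)    # server-side substitute for stragglers
```

Roughly, in each round the server combines whichever client gradients arrive in time with the coded gradient, so training need not wait on the slowest compute nodes or links.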