Fast-Convergent Federated Learning
- URL: http://arxiv.org/abs/2007.13137v2
- Date: Sat, 31 Oct 2020 11:21:13 GMT
- Title: Fast-Convergent Federated Learning
- Authors: Hung T. Nguyen, Vikash Sehwag, Seyyedali Hosseinalipour, Christopher
G. Brinton, Mung Chiang, H. Vincent Poor
- Abstract summary: Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
- Score: 82.32029953209542
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated learning has emerged recently as a promising solution for
distributing machine learning tasks through modern networks of mobile devices.
Recent studies have obtained lower bounds on the expected decrease in model
loss that is achieved through each round of federated learning. However,
convergence generally requires a large number of communication rounds, which
induces delay in model training and is costly in terms of network resources. In
this paper, we propose a fast-convergent federated learning algorithm, called
FOLB, which performs intelligent sampling of devices in each round of model
training to optimize the expected convergence speed. We first theoretically
characterize a lower bound on improvement that can be obtained in each round if
devices are selected according to the expected improvement their local models
will provide to the current global model. Then, we show that FOLB obtains this
bound through uniform sampling by weighting device updates according to their
gradient information. FOLB is able to handle both communication and computation
heterogeneity of devices by adapting its aggregations according to estimates of each
device's capability to contribute to the updates. We evaluate FOLB in
comparison with existing federated learning algorithms and experimentally show
its improvement in trained model accuracy, convergence speed, and/or model
stability across various machine learning tasks and datasets.
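The abstract states that FOLB weights device updates according to their gradient information and adapts its aggregation to device capabilities. Below is a minimal Python sketch of one way such gradient-alignment weighting could look; the specific weighting rule (inner product of each device's local gradient with the sample-average gradient, clipped and normalized), the function name folb_style_aggregate, and the flattened-parameter representation are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def folb_style_aggregate(global_model, local_updates, local_grads):
    """Aggregate sampled device updates, weighting each device by how well
    its local gradient aligns with the average gradient of the sample.

    global_model  : np.ndarray        -- current global parameters (flattened)
    local_updates : list[np.ndarray]  -- per-device model deltas
    local_grads   : list[np.ndarray]  -- per-device local gradients

    NOTE: the alignment-based weighting below is an illustrative assumption;
    the paper derives its exact weights from its convergence bound.
    """
    avg_grad = np.mean(local_grads, axis=0)
    # Weight each device by the inner product of its gradient with the
    # average gradient, clipped at zero so conflicting updates are ignored.
    raw_weights = np.array([max(float(np.dot(g, avg_grad)), 0.0) for g in local_grads])
    if raw_weights.sum() == 0.0:
        # Fall back to plain uniform averaging if no update aligns.
        weights = np.full(len(local_updates), 1.0 / len(local_updates))
    else:
        weights = raw_weights / raw_weights.sum()
    # Apply the weighted average of device updates to the global model.
    return global_model + sum(w * u for w, u in zip(weights, local_updates))

# Example usage with three devices holding 4-parameter flattened models.
rng = np.random.default_rng(0)
w_global = np.zeros(4)
updates = [rng.normal(size=4) for _ in range(3)]
grads = [rng.normal(size=4) for _ in range(3)]
new_global = folb_style_aggregate(w_global, updates, grads)
```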
Related papers
- Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
arXiv Detail & Related papers (2024-03-27T09:14:36Z) - Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., the model, the gradient, or the model difference), we show that over-the-air transmission in AirFedAvg may introduce aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z) - Adaptive Model Pruning and Personalization for Federated Learning over
Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part, shared with all devices and pruned to learn data representations, and a personalized part fine-tuned for each specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z) - Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distributions.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z) - Stochastic Coded Federated Learning: Theoretical Analysis and Incentive
Mechanism Design [18.675244280002428]
We propose a novel FL framework named stochastic coded federated learning (SCFL) that leverages coded computing techniques.
In SCFL, each edge device uploads to the server a privacy-preserving coded dataset, generated by adding noise to the projected local dataset.
We show that SCFL learns a better model within the given time and achieves a better privacy-performance tradeoff than the baseline methods.
arXiv Detail & Related papers (2022-11-08T09:58:36Z) - Parallel Successive Learning for Dynamic Distributed Model Training over
Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed up models, and model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z) - Resource-Efficient and Delay-Aware Federated Learning Design under Edge
Heterogeneity [10.702853653891902]
Federated learning (FL) has emerged as a popular methodology for distributing machine learning across wireless edge devices.
In this work, we consider optimizing the tradeoff between model performance and resource utilization in FL.
Our proposed StoFedDelAv incorporates a local-global model combiner into the FL computation step.
arXiv Detail & Related papers (2021-12-27T22:30:15Z) - Federated Learning with Communication Delay in Edge Networks [5.500965885412937]
Federated learning has received significant attention as a potential solution for distributing machine learning (ML) model training through edge networks.
This work addresses an important consideration of federated learning at the network edge: communication delays between the edge nodes and the aggregator.
A technique called FedDelAvg (federated delayed averaging) is developed, which generalizes the standard federated averaging algorithm by weighting the current local model against the delayed global model received at each device during the synchronization step (a minimal sketch of this weighting appears after this list).
arXiv Detail & Related papers (2020-08-21T06:21:35Z)
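As referenced in the FedDelAvg entry above, here is a minimal Python sketch of a synchronization step that blends each device's current local model with the delayed global model it has received. The function name feddelavg_sync and the fixed blending weight gamma are placeholders; how the weighting is actually chosen is not specified in this extract and is not reproduced here.

```python
import numpy as np

def feddelavg_sync(local_model: np.ndarray,
                   delayed_global_model: np.ndarray,
                   gamma: float = 0.5) -> np.ndarray:
    """Blend the device's current local model with the (possibly stale)
    global model received from the aggregator.  gamma is a placeholder
    blending weight, not the value used by the paper."""
    return gamma * local_model + (1.0 - gamma) * delayed_global_model
```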