Resource-Aware Asynchronous Online Federated Learning for Nonlinear
Regression
- URL: http://arxiv.org/abs/2111.13931v1
- Date: Sat, 27 Nov 2021 16:41:30 GMT
- Title: Resource-Aware Asynchronous Online Federated Learning for Nonlinear
Regression
- Authors: Francois Gauthier, Vinay Chakravarthi Gogineni, Stefan Werner,
Yih-Fang Huang, Anthony Kuh
- Abstract summary: We use the principles of partial-sharing-based communication to reduce the communication overhead associated with asynchronous online federated learning (ASO-Fed).
In the asynchronous setting, it is possible to achieve the same convergence as the federated stochastic gradient method (Online-FedSGD) while reducing the communication tenfold.
- Score: 5.194557636096977
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many assumptions in the federated learning literature present a best-case
scenario that cannot be satisfied in most real-world applications. An
asynchronous setting reflects the realistic environment in which federated
learning methods must be able to operate reliably. Besides varying amounts of
non-IID data at participants, the asynchronous setting models heterogeneous
client participation due to available computational power and battery
constraints and also accounts for delayed communications between clients and
the server. To reduce the communication overhead associated with asynchronous
online federated learning (ASO-Fed), we use the principles of
partial-sharing-based communication. In this manner, we reduce the
communication load of the participants and, therefore, render participation in
the learning task more accessible. We prove the convergence of the proposed
ASO-Fed and provide simulations to analyze its behavior further. The
simulations reveal that, in the asynchronous setting, it is possible to achieve
the same convergence as the federated stochastic gradient (Online-FedSGD) while
reducing the communication tenfold.
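As a rough illustration of partial-sharing-based communication, the sketch below has each client upload only a small, rotating block of its model coordinates per round while the server aggregates whatever happens to arrive. The block size, the round-robin coordinate selection, the participation probability, and the toy nonlinear-regression loss are all illustrative assumptions; this is not the exact ASO-Fed update analyzed in the paper.

```python
import numpy as np

# Minimal sketch of partial-sharing-based communication (round-robin block
# selection is an assumption); not the exact ASO-Fed / Online-FedSGD recursion.
DIM, NUM_CLIENTS, SHARE = 20, 5, 4        # model size, clients, coords shared per round
rng = np.random.default_rng(0)

w_global = np.zeros(DIM)                               # server model
w_local = [np.zeros(DIM) for _ in range(NUM_CLIENTS)]  # client models

def local_step(w, lr=0.1):
    """One online SGD step on a toy nonlinear-regression sample (hypothetical loss)."""
    x = rng.normal(size=DIM)
    y = np.tanh(x.sum())                  # synthetic nonlinear target
    grad = (w @ x - y) * x                # gradient of the squared error
    return w - lr * grad

for rnd in range(200):
    received = []
    for k in range(NUM_CLIENTS):
        if rng.random() < 0.4:            # asynchronous setting: a client may skip the round
            continue
        w_local[k] = local_step(w_local[k])
        # Upload only a rotating block of SHARE coordinates instead of all DIM.
        idx = np.arange(rnd * SHARE, (rnd + 1) * SHARE) % DIM
        received.append((idx, w_local[k][idx]))
    if received:
        idx = received[0][0]
        w_global[idx] = np.mean([vals for _, vals in received], axis=0)
        for k in range(NUM_CLIENTS):      # clients resynchronize the same block
            w_local[k][idx] = w_global[idx]
```

Even this crude version conveys why the per-round payload shrinks by roughly DIM/SHARE while every coordinate is still refreshed over time.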
Related papers
- Federated Learning based on Pruning and Recovery [0.0]
This framework integrates asynchronous learning algorithms and pruning techniques.
It addresses the inefficiencies of traditional federated learning algorithms in scenarios involving heterogeneous devices.
It also tackles the staleness issue and inadequate training of certain clients in asynchronous algorithms.
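As a loose sketch of the pruning side of such a framework, the helper below keeps only the largest-magnitude weights so a resource-constrained client can train a smaller model; the keep ratio and the recovery mask are illustrative assumptions, and the paper's pruning-and-recovery procedure is considerably more involved.

```python
import numpy as np

# A generic magnitude-pruning helper (assumed form); the paper's
# pruning-and-recovery framework is more elaborate than this.
def magnitude_prune(weights: np.ndarray, keep_ratio: float = 0.3):
    """Zero out the smallest-magnitude weights so a weak client trains a smaller model."""
    k = max(1, int(keep_ratio * weights.size))
    threshold = np.partition(np.abs(weights).ravel(), -k)[-k]
    mask = np.abs(weights) >= threshold            # True for weights that are kept
    return weights * mask, mask                    # pruned weights + mask for later recovery

w = np.random.default_rng(1).normal(size=(64, 32))
w_pruned, mask = magnitude_prune(w, keep_ratio=0.3)
print(mask.mean())                                 # fraction of weights kept (~0.3)
```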
arXiv Detail & Related papers (2024-03-16T14:35:03Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
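For intuition, a client-specific adaptive step can be as simple as each client keeping its own AMSGrad state, so its effective learning rate adapts to its own gradient history. The sketch below is a textbook AMSGrad update held per client, not FedLALR's auto-tuned scheduling or its synchronization rule.

```python
import numpy as np

# Per-client AMSGrad-style step (standard AMSGrad update); FedLALR's exact
# auto-tuned scheduling and periodic synchronization are not reproduced here.
class ClientAMSGrad:
    def __init__(self, dim, lr=0.01, b1=0.9, b2=0.99, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = np.zeros(dim)       # first moment
        self.v = np.zeros(dim)       # second moment
        self.v_hat = np.zeros(dim)   # running max of second moments (AMSGrad correction)

    def step(self, w, grad):
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        # Effective learning rate adapts to this client's own gradient history.
        return w - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)

opt = ClientAMSGrad(dim=3)
w = opt.step(np.zeros(3), grad=np.array([0.1, -2.0, 0.5]))
```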
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Effectively Heterogeneous Federated Learning: A Pairing and Split
Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is proposed by reconstructing the optimization of training latency as a graph edge selection problem.
Simulation results show the proposed method can significantly improve the FL training speed and achieve high performance.
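A minimal version of such pairing, assuming compute speed is the only latency factor, is to greedily match the slowest remaining client with the fastest remaining one; the paper instead solves an edge-selection problem over estimated training latencies.

```python
# Hypothetical greedy pairing of slow and fast clients for split learning;
# the paper formulates this as graph edge selection over estimated latencies.
def greedy_pairing(compute_speeds: dict[str, float]) -> list[tuple[str, str]]:
    """Pair the slowest remaining client with the fastest remaining one."""
    order = sorted(compute_speeds, key=compute_speeds.get)
    pairs = []
    while len(order) >= 2:
        slow, fast = order.pop(0), order.pop(-1)
        pairs.append((slow, fast))   # the fast client hosts the heavier model split
    return pairs

print(greedy_pairing({"a": 1.0, "b": 4.0, "c": 2.5, "d": 8.0}))  # [('a', 'd'), ('c', 'b')]
```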
arXiv Detail & Related papers (2023-08-26T11:10:54Z)
- Asynchronous Online Federated Learning with Reduced Communication
Requirements [6.282767337715445]
We propose a communication-efficient asynchronous online federated learning (PAO-Fed) strategy.
By reducing the communication overhead of the participants, the proposed method renders participation in the learning task more accessible and efficient.
We conduct comprehensive simulations to study the performance of the proposed method on both synthetic and real-life datasets.
arXiv Detail & Related papers (2023-03-27T14:06:05Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated
Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT)
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Scheduling and Aggregation Design for Asynchronous Federated Learning
over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting.
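One hedged way to picture age-aware weighting is to discount each received update by its staleness before averaging, as in the sketch below; the 1/(1 + staleness) decay is purely illustrative and not the weighting derived in the paper.

```python
import numpy as np

# Staleness-discounted aggregation in the spirit of "age-aware" weighting;
# the 1/(1 + staleness) decay is an illustrative choice, not the paper's design.
def aggregate(updates, current_round):
    """updates: list of (model_vector, round_the_update_was_computed_in)."""
    weights = np.array([1.0 / (1 + current_round - r) for _, r in updates])
    weights /= weights.sum()
    return sum(w * u for w, (u, _) in zip(weights, updates))

u_fresh = (np.ones(3), 10)    # update computed this round
u_stale = (np.zeros(3), 5)    # update computed five rounds ago
print(aggregate([u_fresh, u_stale], current_round=10))   # pulled toward the fresh update
```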
arXiv Detail & Related papers (2022-12-14T17:33:01Z)
- Communication-Efficient Adaptive Federated Learning [17.721884358895686]
Federated learning is a machine learning paradigm that enables clients to jointly train models without sharing their own localized data.
The implementation of federated learning in practice still faces numerous challenges, such as the large communication overhead.
We propose a novel communication-efficient adaptive learning method (FedCAMS) with theoretical convergence guarantees.
arXiv Detail & Related papers (2022-05-05T15:47:04Z)
- Federated Stochastic Gradient Descent Begets Self-Induced Momentum [151.4322255230084]
Federated learning (FL) is an emerging machine learning method that can be applied in mobile edge systems.
We show that running stochastic gradient descent (SGD) in such a setting can be viewed as adding a momentum-like term to the global aggregation process.
arXiv Detail & Related papers (2022-02-17T02:01:37Z)
- Finite-Time Consensus Learning for Decentralized Optimization with
Nonlinear Gossiping [77.53019031244908]
We present a novel decentralized learning framework based on nonlinear gossiping (NGO) that enjoys an appealing finite-time consensus property to achieve better synchronization.
Our analysis on how communication delay and randomized chats affect learning further enables the derivation of practical variants.
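For contrast with the nonlinear scheme, the snippet below shows a standard linear gossip step in which two randomly chatting nodes simply average their states; NGO replaces this linear mixing with a nonlinear rule to obtain its finite-time consensus property.

```python
import numpy as np

# Standard (linear) gossip averaging step shown only for reference; the paper's
# NGO scheme uses a nonlinear mixing rule instead of a plain average.
def gossip_step(states: np.ndarray, i: int, j: int) -> np.ndarray:
    """Nodes i and j chat and both move to the average of their states."""
    states = states.copy()
    states[[i, j]] = states[[i, j]].mean(axis=0)
    return states

x = np.array([[0.0], [4.0], [8.0]])       # three nodes holding scalar states
x = gossip_step(x, 0, 1)                  # nodes 0 and 1 now both hold 2.0
print(x.ravel())                          # [2. 2. 8.]
```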
arXiv Detail & Related papers (2021-11-04T15:36:25Z)
- CosSGD: Nonlinear Quantization for Communication-efficient Federated
Learning [62.65937719264881]
Federated learning facilitates learning across clients without transferring local data on these clients to a central server.
We propose a nonlinear quantization for compressed gradient descent, which can be easily utilized in federated learning.
Our system significantly reduces the communication cost by up to three orders of magnitude, while maintaining convergence and accuracy of the training process.
arXiv Detail & Related papers (2020-12-15T12:20:28Z)
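As a generic example of nonlinear quantization for gradient compression, the sketch below spends its quantization levels logarithmically so that small magnitudes are resolved more finely; the bit width and the log mapping are illustrative choices and differ from CosSGD's actual cosine-based scheme.

```python
import numpy as np

# Generic logarithmic quantizer for gradient compression; the bit width and the
# log mapping are illustrative, not CosSGD's cosine-based scheme.
def quantize(grad: np.ndarray, bits: int = 4):
    levels = 2 ** bits - 1
    scale = np.abs(grad).max() + 1e-12
    x = np.abs(grad) / scale                                        # magnitudes in [0, 1]
    codes = np.round(np.log1p(levels * x) / np.log1p(levels) * levels)
    return np.sign(grad), codes.astype(np.uint8), scale

def dequantize(signs, codes, scale, bits: int = 4):
    levels = 2 ** bits - 1
    x = np.expm1(codes / levels * np.log1p(levels)) / levels        # invert the log mapping
    return signs * x * scale

g = np.random.default_rng(2).normal(size=8)
signs, codes, scale = quantize(g)
print(np.abs(dequantize(signs, codes, scale) - g).max())            # small reconstruction error
```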