Efficient and Light-Weight Federated Learning via Asynchronous
Distributed Dropout
- URL: http://arxiv.org/abs/2210.16105v1
- Date: Fri, 28 Oct 2022 13:00:29 GMT
- Title: Efficient and Light-Weight Federated Learning via Asynchronous
Distributed Dropout
- Authors: Chen Dun, Mirian Hipolito, Chris Jermaine, Dimitrios Dimitriadis,
Anastasios Kyrillidis
- Abstract summary: Asynchronous learning protocols have regained attention lately, especially in the Federated Learning (FL) setup.
We propose \texttt{AsyncDrop}, a novel asynchronous FL framework that utilizes dropout regularization to handle device heterogeneity in distributed settings.
Overall, \texttt{AsyncDrop} achieves better performance than state-of-the-art asynchronous methodologies.
- Score: 22.584080337157168
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Asynchronous learning protocols have regained attention lately, especially in
the Federated Learning (FL) setup, where slower clients can severely impede the
learning process. Herein, we propose \texttt{AsyncDrop}, a novel asynchronous
FL framework that utilizes dropout regularization to handle device
heterogeneity in distributed settings. Overall, \texttt{AsyncDrop} achieves
better performance than state-of-the-art asynchronous methodologies,
while incurring lower communication and training-time overheads. The key idea
revolves around creating ``submodels'' out of the global model, and
distributing their training to workers, based on device heterogeneity. We
rigorously justify that such an approach can be theoretically characterized. We
implement our approach and compare it against other asynchronous baselines,
both by design and by adapting existing synchronous FL algorithms to
asynchronous scenarios. Empirically, \texttt{AsyncDrop} reduces the
communication cost and training time, while matching or improving the final
test accuracy in diverse non-i.i.d. FL scenarios.
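To make the key idea concrete, below is a minimal Python/NumPy sketch of how a server could carve structured-dropout "submodels" out of a global model and merge asynchronously arriving submodel updates; the two-layer model, the per-unit masking, and the merge rule are illustrative assumptions, not the authors' implementation.

import numpy as np

# Hypothetical two-layer global model, stored as named weight arrays.
rng = np.random.default_rng(0)
global_model = {"w1": rng.normal(size=(32, 16)), "w2": rng.normal(size=(16, 10))}

def make_submodel(model, keep_prob, rng):
    # Structured dropout: keep a random subset of hidden units, so that a
    # slower device can be handed a smaller submodel to train.
    mask = rng.random(model["w1"].shape[1]) < keep_prob
    sub = {"w1": model["w1"][:, mask].copy(), "w2": model["w2"][mask, :].copy()}
    return sub, mask

def merge_submodel(model, sub_update, mask, mix=1.0):
    # Merge an asynchronously received submodel back into the global model,
    # touching only the coordinates the worker actually trained.
    model["w1"][:, mask] = (1 - mix) * model["w1"][:, mask] + mix * sub_update["w1"]
    model["w2"][mask, :] = (1 - mix) * model["w2"][mask, :] + mix * sub_update["w2"]

# Slower device -> smaller submodel; faster device -> larger one.
sub_slow, m_slow = make_submodel(global_model, keep_prob=0.25, rng=rng)
sub_fast, m_fast = make_submodel(global_model, keep_prob=0.75, rng=rng)
# ... each worker trains its submodel locally and returns it whenever ready ...
merge_submodel(global_model, sub_slow, m_slow)  # merged on arrival, no synchronization barrier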
Related papers
- FADAS: Towards Federated Adaptive Asynchronous Optimization [56.09666452175333]
Federated learning (FL) has emerged as a widely adopted training paradigm for privacy-preserving machine learning.
This paper introduces federated adaptive asynchronous optimization, named FADAS, a novel method that incorporates asynchronous updates into adaptive federated optimization with provable guarantees.
We rigorously establish the convergence rate of the proposed algorithms, and empirical results demonstrate the superior performance of FADAS over other asynchronous FL baselines.
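A minimal sketch of the general pattern FADAS builds on, an adaptive (Adam-style) server step applied to asynchronously arriving client deltas, is given below; the staleness discount, hyperparameters, and class name are assumptions rather than the paper's actual algorithm.

import numpy as np

class AsyncAdaptiveServer:
    # Adam-style server optimizer that consumes client deltas as they arrive,
    # instead of waiting for a synchronous round.
    def __init__(self, model, lr=0.01, b1=0.9, b2=0.99, eps=1e-8):
        self.x, self.lr, self.b1, self.b2, self.eps = model, lr, b1, b2, eps
        self.m, self.v = np.zeros_like(model), np.zeros_like(model)

    def apply(self, client_delta, staleness):
        # Treat the (possibly stale) client delta as a pseudo-gradient; older
        # deltas are discounted before entering the adaptive moment estimates.
        g = client_delta / (1.0 + staleness)
        self.m = self.b1 * self.m + (1 - self.b1) * g
        self.v = self.b2 * self.v + (1 - self.b2) * g * g
        self.x = self.x + self.lr * self.m / (np.sqrt(self.v) + self.eps)
        return self.x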
arXiv Detail & Related papers (2024-07-25T20:02:57Z) - FedAST: Federated Asynchronous Simultaneous Training [27.492821176616815]
Federated Learning (FL) enables devices or clients to collaboratively train machine learning (ML) models without sharing their private data.
Much of the existing work in FL focuses on efficiently learning a model for a single task.
In this paper, we propose simultaneous training of multiple FL models using a common set of datasets.
arXiv Detail & Related papers (2024-06-01T05:14:20Z) - Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
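As a rough illustration of layer-wise aggregation under stragglers, the sketch below averages, per layer, only the clients that delivered a gradient for that layer; the plain averaging rule and the assumption that stragglers report whichever layers their backward pass reached are simplifications, not the SALF update itself.

import numpy as np

def layerwise_aggregate(global_layers, client_grads, lr=0.1):
    # global_layers: list of weight arrays (layer 0 = input side).
    # client_grads:  one dict per client, {layer_index: gradient array};
    #                stragglers that finished only part of backpropagation
    #                report gradients for fewer layers.
    for i, w in enumerate(global_layers):
        contribs = [g[i] for g in client_grads if i in g]
        if contribs:  # layers no client reached keep their previous weights
            w -= lr * np.mean(contribs, axis=0)
    return global_layers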
arXiv Detail & Related papers (2024-03-27T09:14:36Z) - Federated Learning based on Pruning and Recovery [0.0]
This framework integrates asynchronous learning algorithms and pruning techniques.
It addresses the inefficiencies of traditional federated learning algorithms in scenarios involving heterogeneous devices.
It also tackles the staleness issue and inadequate training of certain clients in asynchronous algorithms.
arXiv Detail & Related papers (2024-03-16T14:35:03Z) - AEDFL: Efficient Asynchronous Decentralized Federated Learning with
Heterogeneous Devices [61.66943750584406]
We propose AEDFL, an Asynchronous Efficient Decentralized FL framework for heterogeneous environments.
First, we propose an asynchronous FL system model with an efficient model aggregation method for improving the FL convergence.
Second, we propose a dynamic staleness-aware model update approach to achieve superior accuracy.
Third, we propose an adaptive sparse training method to reduce communication and computation costs without significant accuracy degradation.
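The sparse-training ingredient can be pictured with the toy top-k sparsifier below, which keeps only the largest-magnitude entries of a model delta before it is transmitted; the fixed keep ratio and function names are assumptions, whereas AEDFL adapts the sparsity itself.

import numpy as np

def sparsify_topk(delta, keep_ratio=0.1):
    # Keep only the largest-magnitude entries of a model delta so that a
    # device transmits a sparse update (indices + values) instead of the
    # full dense tensor.
    flat = delta.ravel()
    k = max(1, int(keep_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def apply_sparse(model, idx, values, weight=1.0):
    # Merge a sparse update into the model (assumes a contiguous array,
    # so ravel() returns a writable view).
    model.ravel()[idx] += weight * values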
arXiv Detail & Related papers (2023-12-18T05:18:17Z) - Scheduling and Aggregation Design for Asynchronous Federated Learning
over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an ``age-aware'' aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting.
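A minimal sketch of one possible age-aware weighting, exponential decay in the age of each buffered update, is shown below; the decay rule and parameter names are assumptions, not the weighting design derived in the paper.

import numpy as np

def age_aware_aggregate(global_model, buffered_updates, decay=0.5, server_lr=1.0):
    # buffered_updates: list of (client_model, age) pairs collected since the
    # last periodic aggregation; `age` counts rounds since that client last
    # synchronized with the server, so fresher updates receive larger weight.
    weights = np.array([decay ** age for _, age in buffered_updates])
    weights /= weights.sum()
    mixed = sum(w * m for (m, _), w in zip(buffered_updates, weights))
    return (1 - server_lr) * global_model + server_lr * mixed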
arXiv Detail & Related papers (2022-12-14T17:33:01Z) - Time-triggered Federated Learning over Wireless Networks [48.389824560183776]
We present a time-triggered FL algorithm (TT-Fed) over wireless networks.
Our proposed TT-Fed algorithm improves the converged test accuracy by up to 12.5% and 5%, respectively, over the baselines considered.
arXiv Detail & Related papers (2022-04-26T16:37:29Z) - Blockchain-enabled Server-less Federated Learning [5.065631761462706]
We focus on an asynchronous server-less Federated Learning solution empowered by blockchain (BC) technology.
In contrast to the most commonly adopted FL approaches, we advocate an asynchronous method whereby model aggregation is done as clients submit their local updates.
arXiv Detail & Related papers (2021-12-15T07:41:23Z) - Device Scheduling and Update Aggregation Policies for Asynchronous
Federated Learning [72.78668894576515]
Federated Learning (FL) is a recently emerged decentralized machine learning (ML) framework.
We propose an asynchronous FL framework with periodic aggregation to eliminate the straggler issue in FL systems.
arXiv Detail & Related papers (2021-07-23T18:57:08Z) - Stragglers Are Not Disaster: A Hybrid Federated Learning Algorithm with
Delayed Gradients [21.63719641718363]
Federated learning (FL) is a new machine learning framework that trains a joint model across a large number of decentralized computing devices.
This paper presents a novel FL algorithm, namely Hybrid Federated Learning (HFL), to achieve a learning balance in efficiency and effectiveness.
arXiv Detail & Related papers (2021-02-12T02:27:44Z)