FedLesScan: Mitigating Stragglers in Serverless Federated Learning
- URL: http://arxiv.org/abs/2211.05739v1
- Date: Thu, 10 Nov 2022 18:17:41 GMT
- Title: FedLesScan: Mitigating Stragglers in Serverless Federated Learning
- Authors: Mohamed Elzohairy, Mohak Chadha, Anshul Jindal, Andreas Grafberger,
Jianfeng Gu, Michael Gerndt, Osama Abboud
- Abstract summary: Federated Learning (FL) is a machine learning paradigm that enables the training of a shared global model across distributed clients.
We propose FedLesScan, a novel clustering-based semi-asynchronous training strategy specifically tailored for serverless FL.
We show that FedLesScan reduces training time and cost by an average of 8% and 20%, respectively, while utilizing clients better, with an average increase in the effective update ratio of 17.75%.
- Score: 0.7388859384645262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a machine learning paradigm that enables the
training of a shared global model across distributed clients while keeping the
training data local. While most prior work on designing systems for FL has
focused on using stateful, always-running components, recent work has shown that
components in an FL system can greatly benefit from the usage of serverless
computing and Function-as-a-Service technologies. To this end, distributed
training of models with serverless FL systems can be more resource-efficient and
cheaper than conventional FL systems. However, serverless FL systems still
suffer from the presence of stragglers, i.e., slow clients due to their
resource and statistical heterogeneity. While several strategies have been
proposed for mitigating stragglers in FL, most methodologies do not account for
the particular characteristics of serverless environments, i.e., cold-starts,
performance variations, and the ephemeral stateless nature of the function
instances. Towards this, we propose FedLesScan, a novel clustering-based
semi-asynchronous training strategy specifically tailored for serverless FL.
FedLesScan dynamically adapts to the behaviour of clients and minimizes the
effect of stragglers on the overall system. We implement our strategy by
extending an open-source serverless FL system called FedLess. Moreover, we
comprehensively evaluate our strategy using the 2nd generation Google Cloud
Functions with four datasets and varying percentages of stragglers. Results
from our experiments show that, compared to other approaches, FedLesScan reduces
training time and cost by an average of 8% and 20%, respectively, while utilizing
clients better, with an average increase in the effective update ratio of 17.75%.
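As a rough sketch of the clustering idea (an illustration only, not FedLesScan's implementation in FedLess; the use of round durations as features, the DBSCAN parameters, and all names below are assumptions), a scheduler can group clients by their recent round durations and fill each round from the faster clusters first, deferring likely stragglers:

```python
# Illustrative sketch: cluster clients by recent training-round durations
# and sample from the faster clusters first, so likely stragglers are
# scheduled last. Not the FedLess API; all names here are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

def select_clients(durations, n, eps=5.0, min_samples=2):
    """durations maps client id -> list of recent round durations (seconds)."""
    ids = list(durations)
    # One feature per client: its mean duration over recent rounds.
    feats = np.array([[np.mean(durations[c])] for c in ids])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)

    clusters = {}
    for cid, lab in zip(ids, labels):
        clusters.setdefault(int(lab), []).append(cid)
    # Rank clusters by mean duration; DBSCAN noise (label -1) goes last.
    ranked = sorted(
        clusters.items(),
        key=lambda kv: (kv[0] == -1,
                        np.mean([np.mean(durations[c]) for c in kv[1]])))

    selected = []
    for _, members in ranked:
        selected.extend(members)
        if len(selected) >= n:
            break
    return selected[:n]

# Example: the consistently slow client "c3" ends up deprioritized.
history = {"c1": [12.0, 11.5], "c2": [13.1, 12.9],
           "c3": [61.0, 70.2], "c4": [12.4, 12.0]}
print(select_clients(history, n=3))  # -> ['c1', 'c2', 'c4']
```

One reading of "semi-asynchronous" is that updates arriving after a round's deadline are not discarded but folded into a later aggregation; the paper specifies the exact rule.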
Related papers
- Embracing Federated Learning: Enabling Weak Client Participation via Partial Model Training [21.89214794178211]
In Federated Learning (FL), clients may have weak devices that cannot train the full model or even hold it in their memory space.
We propose EmbracingFL, a general FL framework that allows all available clients to join the distributed training.
Our empirical study shows that EmbracingFL consistently achieves accuracy as high as if all clients were strong, outperforming the state-of-the-art width-reduction methods.
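As a rough illustration of partial model training (the layer-freezing rule below is an assumption for illustration, not EmbracingFL's exact partitioning), a weak client might update only the first k layers of the shared model and leave the rest frozen:

```python
# Illustrative partial model training: a weak client fits its compute and
# memory budget by training only the first `k` modules of the shared model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 10))

def freeze_upper_layers(net: nn.Sequential, k: int) -> None:
    """Leave only the first k modules trainable; freeze the rest."""
    for i, layer in enumerate(net):
        for p in layer.parameters():
            p.requires_grad = i < k

freeze_upper_layers(model, k=2)  # weak client trains the first Linear only
trainable = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.SGD(trainable, lr=0.01)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()  # gradients are populated only for the unfrozen parameters
opt.step()
```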
arXiv Detail & Related papers (2024-06-21T13:19:29Z)
- Apodotiko: Enabling Efficient Serverless Federated Learning in Heterogeneous Environments [40.06788591656159]
Federated Learning (FL) is an emerging machine learning paradigm that enables the collaborative training of a shared global model across distributed clients.
We present Apodotiko, a novel asynchronous training strategy designed for serverless FL.
Our strategy incorporates a scoring mechanism that evaluates each client's hardware capacity and dataset size to intelligently prioritize and select clients for each training round.
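A hypothetical sketch of such a scoring mechanism (the weighting scheme, field names, and values are our assumptions, not Apodotiko's actual formula):

```python
# Hypothetical client-scoring rule: faster hardware and more data both
# raise a client's priority for the next training round.
from dataclasses import dataclass

@dataclass
class ClientInfo:
    cid: str
    hardware_capacity: float  # e.g. normalized from past round throughput
    dataset_size: int

def select_clients(clients, n, alpha=0.5):
    max_data = max(c.dataset_size for c in clients)
    def score(c):
        # Weighted mix of capacity and (normalized) dataset size.
        return alpha * c.hardware_capacity + (1 - alpha) * c.dataset_size / max_data
    return sorted(clients, key=score, reverse=True)[:n]

pool = [ClientInfo("a", 0.9, 500), ClientInfo("b", 0.4, 2000),
        ClientInfo("c", 0.7, 800)]
print([c.cid for c in select_clients(pool, n=2)])  # -> ['b', 'a']
```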
arXiv Detail & Related papers (2024-04-22T09:50:11Z)
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
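The summary does not detail the title's event-triggered aspect; as a generic illustration (threshold and names below are assumptions, and this is not the paper's SAGA-based algorithm), a client under an event-triggered rule uploads its local model only once it has drifted sufficiently from the last transmitted copy:

```python
# Generic event-triggered communication rule: transmit only when the local
# model has moved far enough from the last copy the server received.
import numpy as np

def should_transmit(current: np.ndarray, last_sent: np.ndarray,
                    threshold: float) -> bool:
    return np.linalg.norm(current - last_sent) > threshold

last_sent = np.zeros(4)
local = np.array([0.02, -0.01, 0.03, 0.0])
if should_transmit(local, last_sent, threshold=0.1):
    # Upload to the assigned server, then remember what was sent.
    last_sent = local.copy()
```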
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- Training Heterogeneous Client Models using Knowledge Distillation in Serverless Federated Learning [0.5510212613486574]
Federated Learning (FL) is an emerging machine learning paradigm that enables the collaborative training of a shared global model across distributed clients.
Recent works on designing systems for efficient FL have shown that utilizing serverless computing technologies can enhance resource efficiency, reduce training costs, and alleviate the complex infrastructure management burden on data holders.
arXiv Detail & Related papers (2024-02-11T20:15:52Z)
- FLrce: Resource-Efficient Federated Learning with Early-Stopping Strategy [7.963276533979389]
Federated Learning (FL) has achieved great popularity in the Internet of Things (IoT).
We present FLrce, an efficient FL framework with a relationship-based client selection and early-stopping strategy.
Experiment results show that, compared with existing efficient FL frameworks, FLrce improves computation and communication efficiency by at least 30% and 43%, respectively.
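A generic early-stopping check in this spirit (illustrative only; FLrce's actual criterion, and its relationship-based client selection, are defined in the paper):

```python
# Generic early stopping for federated training: stop once global
# validation accuracy has not improved meaningfully for `patience` rounds.
def should_stop(acc_history, patience=5, min_delta=1e-3):
    if len(acc_history) <= patience:
        return False
    best_before = max(acc_history[:-patience])
    recent_best = max(acc_history[-patience:])
    return recent_best - best_before < min_delta

# Accuracy has plateaued over the last three rounds -> True.
print(should_stop([0.60, 0.70, 0.74, 0.741, 0.742, 0.742, 0.741], patience=3))
```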
arXiv Detail & Related papers (2023-10-15T10:13:44Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating the results of local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
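The summary does not spell out FedReg's regularizer, so the sketch below uses a FedProx-style proximal term as a stand-in, not FedReg itself: a generic way to limit forgetting during local training by keeping local weights close to the received global model.

```python
# FedProx-style proximal penalty (a stand-in illustration): the local loss
# adds a term that discourages drifting away from the global model.
import torch
import torch.nn as nn

model = nn.Linear(16, 4)
global_params = [p.detach().clone() for p in model.parameters()]
opt = torch.optim.SGD(model.parameters(), lr=0.05)
mu = 0.1  # penalty strength (illustrative value)

x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
task_loss = nn.functional.cross_entropy(model(x), y)
prox = sum(((p - g) ** 2).sum()
           for p, g in zip(model.parameters(), global_params))
(task_loss + 0.5 * mu * prox).backward()
opt.step()  # weights move toward lower loss but stay near the global model
```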
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better [88.28293442298015]
Federated learning (FL) enables distribution of machine learning workloads from the cloud to resource-limited edge devices.
We develop, implement, and experimentally validate a novel FL framework termed Federated Dynamic Sparse Training (FedDST).
FedDST is a dynamic process that extracts and trains sparse sub-networks from the target full network.
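A minimal sketch of extracting a sparse sub-network by weight magnitude (illustrative; FedDST's full procedure also readjusts masks dynamically across rounds, and the density value and names here are assumptions):

```python
# Magnitude-based mask extraction: keep the largest `density` fraction of
# a layer's weights and train only that sparse sub-network.
import torch
import torch.nn as nn

def magnitude_mask(weight: torch.Tensor, density: float) -> torch.Tensor:
    k = max(1, int(density * weight.numel()))
    threshold = weight.abs().flatten().topk(k).values.min()
    return (weight.abs() >= threshold).float()

layer = nn.Linear(32, 16)
mask = magnitude_mask(layer.weight.data, density=0.2)
layer.weight.data *= mask  # zero out the pruned ~80% of weights
# During local training, gradients would be masked too, so pruned
# weights stay zero: layer.weight.grad *= mask
print(f"kept {int(mask.sum())} of {mask.numel()} weights")
```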
arXiv Detail & Related papers (2021-12-18T02:26:38Z)
- FedFog: Network-Aware Optimization of Federated Learning over Wireless Fog-Cloud Systems [40.421253127588244]
Federated learning (FL) is capable of performing large distributed machine learning tasks across multiple edge users by periodically aggregating trained local parameters.
We first propose an efficient FL algorithm (called FedFog) to perform the local aggregation of gradient parameters at fog servers and the global training update at the cloud.
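A toy sketch of this two-level aggregation (the sample-count weighting and all names are assumptions; FedFog's actual algorithm is network-aware and more involved):

```python
# Two-level aggregation: each fog server averages its clients' updates
# first, then the cloud averages the fog-level results.
import numpy as np

def weighted_average(updates, weights):
    w = np.asarray(weights, dtype=float)
    return np.tensordot(w / w.sum(), np.stack(updates), axes=1)

# Fog level: average client updates per fog server (weights = sample counts).
fog_a = weighted_average([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [100, 300])
fog_b = weighted_average([np.array([0.0, 1.0])], [200])

# Cloud level: combine fog aggregates into the global update.
global_update = weighted_average([fog_a, fog_b], [400, 200])
print(global_update)
```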
arXiv Detail & Related papers (2021-07-04T08:03:15Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, the integration gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behaviors.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)