Sample Selection with Deadline Control for Efficient Federated Learning on Heterogeneous Clients
- URL: http://arxiv.org/abs/2201.01601v1
- Date: Wed, 5 Jan 2022 13:35:35 GMT
- Title: Sample Selection with Deadline Control for Efficient Federated Learning on Heterogeneous Clients
- Authors: Jaemin Shin, Yuanchun Li, Yunxin Liu, Sung-Ju Lee
- Abstract summary: Federated Learning (FL) trains a machine learning model on distributed clients without exposing individual data.
We propose FedBalancer, a systematic FL framework that actively selects clients' training samples.
We show that FedBalancer improves the time-to-accuracy performance by 1.22~4.62x while improving the model accuracy by 1.0~3.3%.
- Score: 8.350621280672891
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) trains a machine learning model on distributed
clients without exposing individual data. Unlike centralized training that is
usually based on carefully-organized data, FL deals with on-device data that
are often unfiltered and imbalanced. As a result, conventional FL training
protocol that treats all data equally leads to a waste of local computational
resources and slows down the global learning process. To this end, we propose
FedBalancer, a systematic FL framework that actively selects clients' training
samples. Our sample selection strategy prioritizes more "informative" data
while respecting privacy and computational capabilities of clients. To better
utilize the sample selection to speed up global training, we further introduce
an adaptive deadline control scheme that predicts the optimal deadline for each
round with varying client train data. Compared with existing FL algorithms with
deadline configuration methods, our evaluation on five datasets from three
different domains shows that FedBalancer improves the time-to-accuracy
performance by 1.22~4.62x while improving the model accuracy by 1.0~3.3%. We
also show that FedBalancer is readily applicable to other FL approaches by
demonstrating that FedBalancer improves the convergence speed and accuracy when
operating jointly with three different FL algorithms.
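The selection strategy above lends itself to a short sketch. Below is a minimal, illustrative reading of loss-based sample selection (the ranking rule, the explore_ratio knob, and all names are assumptions, not FedBalancer's exact criterion): each client ranks its samples by current loss and trains mostly on the highest-loss, most "informative" ones.

```python
import numpy as np

def select_informative_samples(losses, keep_ratio=0.5, explore_ratio=0.1, seed=0):
    """Pick the sample indices a client trains on this round.

    Illustrative rule (an assumption, not FedBalancer's exact criterion):
    keep the highest-loss `keep_ratio` fraction as the "informative" set,
    plus a small random slice of the rest so low-loss samples are still
    occasionally revisited.
    """
    rng = np.random.default_rng(seed)
    ranked = np.argsort(losses)[::-1]                 # highest loss first
    n_keep = max(1, int(len(losses) * keep_ratio))
    selected, leftovers = ranked[:n_keep], ranked[n_keep:]
    n_extra = int(len(leftovers) * explore_ratio)
    extra = rng.choice(leftovers, size=n_extra, replace=False)
    return np.concatenate([selected, extra])

# With 10 samples and keep_ratio=0.5, the 5 highest-loss samples are chosen.
losses = np.array([0.1, 2.3, 0.05, 1.1, 0.9, 3.0, 0.2, 0.4, 1.8, 0.07])
print(select_informative_samples(losses))
```

The companion deadline-control component, which would predict a per-round deadline from observed client speeds and the selected sample counts, lives on the server side and is omitted from this sketch.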
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
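For contrast with FedAF's aggregation-free design, here is a minimal sketch of the aggregate-then-adapt loop the entry describes, written as plain FedAvg on synthetic linear-regression data (the toy model and all names are illustrative):

```python
import numpy as np

def local_adapt(w, X, y, lr=0.1, steps=5):
    """Client-side adaptation: a few SGD steps on a linear regression model."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def aggregate_then_adapt_round(w_global, client_data):
    """One FedAvg-style round: clients adapt the aggregated global model
    locally, then the server averages the results weighted by data size."""
    updates = [local_adapt(w_global.copy(), X, y) for X, y in client_data]
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    weights = sizes / sizes.sum()
    return sum(wk * u for wk, u in zip(weights, updates))

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=20)))

w = np.zeros(2)
for _ in range(30):
    w = aggregate_then_adapt_round(w, clients)
print(w)   # converges near w_true
```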
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
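A short sketch of the client-specific adaptive learning-rate idea: the AMSGrad update below is the standard one; keeping one optimizer state per client, so that each client's effective step size tracks its own gradient statistics, is an illustrative reading of FedLALR rather than the paper's exact algorithm.

```python
import numpy as np

class ClientAMSGrad:
    """Per-client AMSGrad state (the update is standard AMSGrad; one
    instance per client is the illustrative FedLALR-style twist)."""
    def __init__(self, dim, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = np.zeros(dim)
        self.v = np.zeros(dim)
        self.v_hat = np.zeros(dim)   # running max of v -> non-increasing steps

    def step(self, w, grad):
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        # Effective learning rate lr / sqrt(v_hat) is client-specific
        # because each client's v_hat tracks its own gradient magnitudes.
        return w - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)

# Two clients with different gradient scales take different effective steps.
opt_a, opt_b = ClientAMSGrad(2), ClientAMSGrad(2)
print(opt_a.step(np.zeros(2), np.array([10.0, 10.0])))   # large gradients
print(opt_b.step(np.zeros(2), np.array([0.1, 0.1])))     # small gradients
```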
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - FedFNN: Faster Training Convergence Through Update Predictions in Federated Recommender Systems [4.4273123155989715]
Federated Learning (FL) has emerged as a key approach for distributed machine learning.
This paper introduces FedFNN, an algorithm that accelerates decentralized model training.
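The entry gives only the headline, so the sketch below shows one hypothetical way update prediction could be used: the server extrapolates a slow client's next update from its past updates instead of waiting. The extrapolation rule and all names are assumptions; FedFNN's actual predictor is not described above.

```python
import numpy as np

def predict_update(history, decay=0.5):
    """Extrapolate a client's next update as an exponentially weighted
    average of its past updates, newest weighted highest (hypothetical
    stand-in for FedFNN's learned predictor)."""
    weights = np.array([decay ** i for i in range(len(history))])[::-1]
    weights = weights / weights.sum()
    return sum(w * u for w, u in zip(weights, history))

# Past updates from one client, oldest to newest; the prediction is used
# in place of the real update when the client misses the round deadline.
past = [np.array([0.10, -0.20]), np.array([0.08, -0.15]), np.array([0.07, -0.12])]
print(predict_update(past))
```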
arXiv Detail & Related papers (2023-09-14T13:18:43Z) - Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
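A minimal sketch of that split (the magnitude-pruning rule and all names are illustrative assumptions): the shared body is pruned before broadcast, while each device keeps its own head to fine-tune.

```python
import numpy as np

def prune_by_magnitude(w, sparsity):
    """Zero out the smallest-magnitude `sparsity` fraction of weights
    (a common pruning heuristic; the paper's exact rule may differ)."""
    k = int(w.size * sparsity)
    if k == 0:
        return w
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

# Global part: shared representation weights, pruned before broadcast.
global_body = np.random.default_rng(0).normal(size=(4, 4))
shared = prune_by_magnitude(global_body, sparsity=0.5)

# Personalized part: each device keeps and fine-tunes its own head.
device_heads = {f"device_{i}": np.zeros(4) for i in range(3)}
print(shared)
print(list(device_heads))
```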
arXiv Detail & Related papers (2023-09-04T21:10:45Z) - FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel data-uniform sampling strategy for federated learning (FedSampling).
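A minimal sketch of the intuition behind data-uniform sampling (the mechanism below is an assumption; FedSampling additionally estimates client data sizes in a privacy-preserving way): sampling clients in proportion to their data sizes gives every individual sample the same chance of contributing to a round.

```python
import numpy as np

def sample_clients_data_uniform(data_sizes, n_pick, seed=0):
    """Pick clients with probability proportional to local data size,
    so each *sample* (rather than each client) is drawn uniformly."""
    rng = np.random.default_rng(seed)
    sizes = np.asarray(data_sizes, dtype=float)
    probs = sizes / sizes.sum()
    return rng.choice(len(sizes), size=n_pick, replace=False, p=probs)

# A client with 1000 samples is picked far more often than one with 10,
# equalizing each sample's chance of being used in a round.
sizes = [10, 50, 1000, 200, 40]
print(sample_clients_data_uniform(sizes, n_pick=2))
```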
arXiv Detail & Related papers (2023-06-25T13:38:51Z) - FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
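The entry does not spell out FedReg's mechanism, so the sketch below illustrates the generic idea of alleviating forgetting by anchoring local training to the global model with a proximal penalty (a FedProx-style device used purely for illustration; FedReg's own method differs):

```python
import numpy as np

def local_step_with_anchor(w, w_global, grad, lr=0.1, mu=0.5):
    """One local SGD step with a proximal penalty (mu/2)*||w - w_global||^2
    that discourages forgetting the global model. Generic anti-forgetting
    device for illustration only; FedReg's actual mechanism differs."""
    return w - lr * (grad + mu * (w - w_global))

w_global = np.array([1.0, 1.0])
w_local = w_global.copy()
for _ in range(200):
    grad = np.array([5.0, -5.0])      # local gradient pulling away from w_global
    w_local = local_step_with_anchor(w_local, w_global, grad)
print(w_local)   # settles at w_global - grad/mu = [-9, 11] instead of diverging
```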
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Accelerating Federated Learning with a Global Biased Optimiser [16.69005478209394]
Federated Learning (FL) is a recent development in the field of machine learning that collaboratively trains models without the training data leaving client devices.
We propose a novel, generalised approach for applying adaptive optimisation techniques to FL with the Federated Global Biased Optimiser (FedGBO) algorithm.
FedGBO accelerates FL by applying a set of global biased optimiser values during the local training phase of FL, which helps to reduce 'client-drift' from non-IID data.
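A short sketch of the idea as stated: the server maintains one global momentum-like buffer, and clients apply it, held fixed, during their local steps, biasing every client's trajectory in the same direction. Names and details are illustrative assumptions.

```python
import numpy as np

def local_train_with_global_bias(w, local_grads, global_m, lr=0.1, beta=0.9):
    """Local SGD in which the momentum term is the *global* buffer, kept
    fixed for the whole local phase (illustrative reading of FedGBO)."""
    for g in local_grads:
        w = w - lr * (beta * global_m + g)   # same bias term on every client
    return w

rng = np.random.default_rng(0)
global_m = np.array([0.3, -0.3])             # maintained/updated by the server
local_grads = [rng.normal(size=2) for _ in range(5)]
print(local_train_with_global_bias(np.zeros(2), local_grads, global_m))
```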
arXiv Detail & Related papers (2021-08-20T12:08:44Z) - Unifying Distillation with Personalization in Federated Learning [1.8262547855491458]
Federated learning (FL) is a decentralized privacy-preserving learning technique in which clients learn a joint collaborative model through a central aggregator without sharing their data.
In this setting, all clients learn a single common predictor (FedAvg), which does not generalize well on each client's local data due to the statistical data heterogeneity among clients.
In this paper, we address this problem with PersFL, a two-stage personalized learning algorithm.
In the first stage, PersFL finds the optimal teacher model of each client during the FL training phase. In the second stage, PersFL distills the useful knowledge from these optimal teachers into each client's local model, as sketched below.
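The second stage amounts to knowledge distillation. The sketch below uses the standard temperature-scaled distillation objective (generic distillation, not PersFL-specific; alpha and T are the usual knobs):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """alpha * cross-entropy(hard labels) + (1 - alpha) * T^2 * KL(teacher || student),
    the standard distillation objective (generic, not PersFL-specific)."""
    p_hard = softmax(student_logits)
    ce = -np.log(p_hard[np.arange(len(labels)), labels] + 1e-12).mean()
    p_t, p_s = softmax(teacher_logits, T), softmax(student_logits, T)
    kl = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=-1).mean()
    return alpha * ce + (1 - alpha) * T ** 2 * kl

student = np.array([[2.0, 0.5, 0.1]])   # local model's logits for one example
teacher = np.array([[1.5, 1.0, 0.2]])   # optimal teacher's logits
print(distillation_loss(student, teacher, labels=np.array([0])))
```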
arXiv Detail & Related papers (2021-05-31T17:54:29Z) - Improving Accuracy of Federated Learning in Non-IID Settings [11.908715869667445]
Federated Learning (FL) is a decentralized machine learning protocol that allows a set of participating agents to collaboratively train a model without sharing their data.
It has been observed that the performance of FL is closely tied with the local data distributions of agents.
In this work, we identify four simple techniques that can improve the performance of trained models without incurring any additional communication overhead to FL.
arXiv Detail & Related papers (2020-10-14T21:02:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.