Oort: Efficient Federated Learning via Guided Participant Selection
- URL: http://arxiv.org/abs/2010.06081v3
- Date: Fri, 28 May 2021 00:31:41 GMT
- Title: Oort: Efficient Federated Learning via Guided Participant Selection
- Authors: Fan Lai, Xiangfeng Zhu, Harsha V. Madhyastha, Mosharaf Chowdhury
- Abstract summary: Federated Learning (FL) enables in-situ model training and testing on edge data.
Existing efforts randomly select FL participants, which leads to poor model and system efficiency.
Oort improves time-to-accuracy performance by 1.2x-14.1x and final model accuracy by 1.3%-9.8%.
- Score: 5.01181273401802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is an emerging direction in distributed machine
learning (ML) that enables in-situ model training and testing on edge data.
Despite having the same end goals as traditional ML, FL executions differ
significantly in scale, spanning thousands to millions of participating
devices. As a result, data characteristics and device capabilities vary widely
across clients. Yet, existing efforts randomly select FL participants, which
leads to poor model and system efficiency.
In this paper, we propose Oort to improve the performance of federated
training and testing with guided participant selection. With an aim to improve
time-to-accuracy performance in model training, Oort prioritizes the use of
those clients who have both data that offers the greatest utility in improving
model accuracy and the capability to run training quickly. To enable FL
developers to interpret their results in model testing, Oort enforces their
requirements on the distribution of participant data while improving the
duration of federated testing by cherry-picking clients. Our evaluation shows
that, compared to existing participant selection mechanisms, Oort improves
time-to-accuracy performance by 1.2x-14.1x and final model accuracy by
1.3%-9.8%, while efficiently enforcing developer-specified model testing
criteria at the scale of millions of clients.
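The guided selection described above can be illustrated with a minimal sketch. This is not Oort's actual implementation: the utility below (per-sample training losses aggregated as a root mean square, scaled by client data size, with a multiplicative penalty for clients slower than a round deadline) is a simplified reading of the paper's idea, and the names `client_utility` and `select_participants` are invented for illustration.

```python
import math

def client_utility(sample_losses, train_time, deadline, alpha=2.0):
    """Simplified Oort-style client utility.

    Statistical utility: clients whose samples still incur high training
    loss contribute more to improving the model. System utility: clients
    slower than the developer-set round deadline are penalized.
    """
    n = len(sample_losses)
    stat_utility = n * math.sqrt(sum(l * l for l in sample_losses) / n)
    # Penalize clients whose training time exceeds the round deadline.
    sys_penalty = (deadline / train_time) ** alpha if train_time > deadline else 1.0
    return stat_utility * sys_penalty

def select_participants(clients, k, deadline):
    """Pick the top-k clients by utility (the greedy exploitation step)."""
    ranked = sorted(
        clients,
        key=lambda c: client_utility(c["losses"], c["time"], deadline),
        reverse=True,
    )
    return ranked[:k]
```

A fast client with high-loss data ranks above a slow one with well-fit data, which is the trade-off the abstract describes: data utility and system speed considered jointly.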
Related papers
- Ranking-based Client Selection with Imitation Learning for Efficient Federated Learning [20.412469498888292]
Federated Learning (FL) enables multiple devices to collaboratively train a shared model.
The selection of participating devices in each training round critically affects both the model performance and training efficiency.
We introduce a novel device selection solution called FedRank, which is an end-to-end, ranking-based approach.
arXiv Detail & Related papers (2024-05-07T08:44:29Z)
- GPFL: A Gradient Projection-Based Client Selection Framework for Efficient Federated Learning [6.717563725609496]
Federated learning client selection is crucial for determining participant clients.
We propose GPFL, which measures client value by comparing local and global descent directions.
GPFL exhibits shorter computation times through pre-selection and parameter reuse in federated learning.
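As a rough illustration of "comparing local and global descent directions," the sketch below scores a client by the cosine similarity between its local gradient and the global one. GPFL's actual projection-based value metric may differ; the name `gradient_alignment` and the plain-list gradients are assumptions made for illustration.

```python
import math

def gradient_alignment(local_grad, global_grad):
    """Score a client by how well its local gradient aligns with the
    global descent direction (cosine similarity as a proxy metric)."""
    dot = sum(a * b for a, b in zip(local_grad, global_grad))
    norm_l = math.sqrt(sum(a * a for a in local_grad))
    norm_g = math.sqrt(sum(b * b for b in global_grad))
    if norm_l == 0.0 or norm_g == 0.0:
        return 0.0
    return dot / (norm_l * norm_g)
```

A score near 1 suggests the client's update pushes the model in the same direction as the global objective; a negative score suggests a conflicting update.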
arXiv Detail & Related papers (2024-03-26T16:14:43Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- FedJETs: Efficient Just-In-Time Personalization with Federated Mixture of Experts [48.78037006856208]
FedJETs is a novel solution by using a Mixture-of-Experts (MoE) framework within a Federated Learning (FL) setup.
Our method leverages the diversity of the clients to train specialized experts on different subsets of classes, and a gating function to route the input to the most relevant expert(s).
Our approach can improve accuracy up to 18% in state of the art FL settings, while maintaining competitive zero-shot performance.
arXiv Detail & Related papers (2023-06-14T15:47:52Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogenous communication and computational resource distribution.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
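A measure of class-imbalance could look like the sketch below: the sum of squared class proportions of the grouped data, which is minimized at 1/C for a perfectly balanced group over C classes. Fed-CBS's actual measure and its homomorphic-encryption evaluation are described in the paper; this plaintext `class_imbalance` helper is only an illustrative assumption.

```python
def class_imbalance(counts):
    """Quadratic class-imbalance sketch: sum of squared class proportions.

    Returns 1/C for a perfectly balanced group of C classes and 1.0 when
    all samples belong to a single class.
    """
    total = sum(counts)
    if total == 0:
        return 0.0
    return sum((c / total) ** 2 for c in counts)
```

A client-sampling mechanism in this spirit would prefer groups of clients whose combined label counts minimize this score.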
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- Test-Time Robust Personalization for Federated Learning [5.553167334488855]
Federated Learning (FL) is a machine learning paradigm where many clients collaboratively learn a shared global model with decentralized training data.
Personalized FL additionally adapts the global model to different clients, achieving promising results on consistent local training and test distributions.
We propose Federated Test-time Head Ensemble plus tuning (FedTHE+), which personalizes FL models with robustness to various test-time distribution shifts.
arXiv Detail & Related papers (2022-05-22T20:08:14Z)
- Sample Selection with Deadline Control for Efficient Federated Learning on Heterogeneous Clients [8.350621280672891]
Federated Learning (FL) trains a machine learning model on distributed clients without exposing individual data.
We propose FedBalancer, a systematic FL framework that actively selects clients' training samples.
We show that FedBalancer improves the time-to-accuracy performance by 1.22-4.62x while improving the model accuracy by 1.0-3.3%.
arXiv Detail & Related papers (2022-01-05T13:35:35Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Stochastic Client Selection for Federated Learning with Volatile Clients [41.591655430723186]
Federated Learning (FL) is a privacy-preserving machine learning paradigm.
In each round of synchronous FL training, only a fraction of available clients are chosen to participate.
We propose E3CS, a client selection scheme to solve the problem.
arXiv Detail & Related papers (2020-11-17T16:35:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.