LotteryFL: Personalized and Communication-Efficient Federated Learning
with Lottery Ticket Hypothesis on Non-IID Datasets
- URL: http://arxiv.org/abs/2008.03371v1
- Date: Fri, 7 Aug 2020 20:45:12 GMT
- Authors: Ang Li, Jingwei Sun, Binghui Wang, Lin Duan, Sicheng Li, Yiran Chen,
Hai Li
- Abstract summary: Federated learning is a popular distributed machine learning paradigm with enhanced privacy.
We propose LotteryFL -- a personalized and communication-efficient federated learning framework.
We show that LotteryFL significantly outperforms existing solutions in terms of personalization and communication cost.
- Score: 52.60094373289771
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a popular distributed machine learning paradigm with
enhanced privacy. Its primary goal is to learn a global model that offers good
performance for as many participants as possible. The technology is advancing
rapidly, but many challenges remain unsolved; among them, statistical
heterogeneity (i.e., non-IID data) and communication efficiency are two
critical obstacles that hinder the development of federated learning. In this
work, we propose LotteryFL -- a personalized and communication-efficient
federated learning framework that exploits the Lottery Ticket Hypothesis. In
LotteryFL, each client learns a lottery ticket network (i.e., a subnetwork of
the base model) by applying the Lottery Ticket Hypothesis, and only these
lottery ticket networks are communicated between the server and clients.
Rather than learning a shared global model as in classic federated learning,
each client learns a personalized model via LotteryFL; the communication cost
is significantly reduced thanks to the compact size of the lottery ticket
networks. To support the training and evaluation of our
framework, we construct non-IID datasets based on MNIST, CIFAR-10 and EMNIST by
taking feature distribution skew, label distribution skew and quantity skew
into consideration. Experiments on these non-IID datasets demonstrate that
LotteryFL significantly outperforms existing solutions in terms of
personalization and communication cost.
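As an illustration of the mechanism described above, here is a loose numpy sketch of one LotteryFL-style client round. It is not the authors' implementation: the helper `local_train` (standing in for the client's local SGD epochs), the accuracy threshold, the pruning rate, and `target_sparsity` are all assumed placeholders, and the whole network is collapsed into a single flat weight vector.
```python
import numpy as np

# Loose sketch of one LotteryFL-style client round. `local_train` stands in
# for the client's usual local training and must return
# (trained_weights, validation_accuracy).

def update_mask(weights, mask, prune_rate):
    """Magnitude-prune a further `prune_rate` fraction of the surviving weights."""
    alive = np.flatnonzero(mask)
    k = int(len(alive) * prune_rate)
    if k == 0:
        return mask
    # Drop the k surviving weights with the smallest magnitudes.
    drop = alive[np.argsort(np.abs(weights[alive]))[:k]]
    new_mask = mask.copy()
    new_mask[drop] = 0.0
    return new_mask

def client_round(global_weights, init_weights, mask, local_train,
                 acc_threshold=0.8, prune_rate=0.2, target_sparsity=0.9):
    # 1. Download: take the server's values only at the unpruned positions.
    weights = global_weights * mask
    # 2. Train locally on the client's own (non-IID) data.
    weights, val_acc = local_train(weights, mask)
    # 3. Once the subnetwork performs well enough, prune it further and rewind
    #    the surviving weights to their initial values, Lottery Ticket style.
    if val_acc > acc_threshold and (1.0 - mask.mean()) < target_sparsity:
        mask = update_mask(weights, mask, prune_rate)
        weights = init_weights * mask
    # 4. Upload only the sparse lottery ticket network to the server.
    return weights * mask, mask
```
The communication saving comes from step 4: at 90% sparsity, only about a tenth of the parameters travel in either direction, and the per-client mask is what personalizes each model.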
Related papers
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data scattered over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
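The event-triggered idea can be sketched in a few lines: a client contacts its server only when its local update has drifted enough since the last transmission. This is a hedged illustration; the norm-threshold rule below is our assumption, and the paper's actual trigger condition is tied to its SAGA-based variance-reduced algorithm.
```python
import numpy as np

# Hedged sketch of an event-triggered upload rule: transmit only when the
# update has drifted enough from the last transmitted one; otherwise the
# server keeps reusing its stored copy.

class EventTriggeredClient:
    def __init__(self, dim, threshold=1e-3):
        self.last_sent = np.zeros(dim)
        self.threshold = threshold

    def maybe_upload(self, update):
        if np.linalg.norm(update - self.last_sent) > self.threshold:
            self.last_sent = update.copy()
            return update   # event triggered: transmit the fresh update
        return None         # stay silent; no communication this round
```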
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
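A minimal sketch of what a client-side AMSGrad step with a client-specific learning rate might look like, in the spirit of FedLALR; the 1/sqrt(t) decay schedule is an illustrative assumption, not the schedule derived in the paper.
```python
import numpy as np

# Minimal sketch of a client-side AMSGrad step with a client-specific,
# automatically decayed learning rate.

class AMSGradClient:
    def __init__(self, dim, lr=1e-2, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = np.zeros(dim)       # first-moment estimate
        self.v = np.zeros(dim)       # second-moment estimate
        self.v_hat = np.zeros(dim)   # running max of v (the AMSGrad fix)
        self.t = 0

    def step(self, weights, grad):
        self.t += 1
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        lr_t = self.lr / np.sqrt(self.t)   # client-local auto-tuned rate
        return weights - lr_t * self.m / (np.sqrt(self.v_hat) + self.eps)
```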
- SemiSFL: Split Federated Learning on Unlabeled and Non-IID Data [34.49090830845118]
Federated Learning (FL) has emerged to allow multiple clients to collaboratively train machine learning models on their private data at the network edge.
We propose a novel Semi-supervised SFL system, termed SemiSFL, which incorporates clustering regularization to perform SFL with unlabeled and non-IID client data.
Our system provides a 3.8x speed-up in training time, reduces the communication cost by about 70.3% while reaching the target accuracy, and achieves up to 5.8% improvement in accuracy under non-IID scenarios.
arXiv Detail & Related papers (2023-07-29T02:35:37Z)
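The clustering-regularization idea can be sketched as an extra loss term on unlabeled client features. The penalty below (mean squared distance to the nearest shared centroid) is our loose approximation; the paper defines its own regularizer.
```python
import numpy as np

# Rough sketch of a clustering-regularization term on unlabeled client
# features: penalize each feature's squared distance to its nearest shared
# centroid, encouraging cluster-consistent representations across clients.

def clustering_regularizer(features, centroids):
    """features: (n, d) client activations; centroids: (k, d) shared centers."""
    d2 = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()   # mean distance to the nearest centroid
```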
- Knowledge-Enhanced Semi-Supervised Federated Learning for Aggregating Heterogeneous Lightweight Clients in IoT [34.128674870180596]
Federated learning (FL) enables multiple clients to train models collaboratively without sharing local data.
We propose pFedKnow, which generates lightweight personalized client models via neural network pruning techniques to reduce communication cost.
Experiment results on both image and text datasets show that the proposed pFedKnow outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2023-03-05T13:19:10Z)
- Splitfed learning without client-side synchronization: Analyzing client-side split network portion size to overall performance [4.689140226545214]
Federated Learning (FL), Split Learning (SL), and SplitFed Learning (SFL) are three recent developments in distributed machine learning.
This paper studies SFL without client-side model synchronization.
It provides only 1%-2% better accuracy than Multi-head Split Learning on the MNIST test set.
arXiv Detail & Related papers (2021-09-19T22:57:23Z)
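The split underlying SL/SFL can be sketched as follows: the client computes activations up to a cut layer and ships only those "smashed" activations to the server, which runs the remaining layers. The single dense layers here are placeholders for each portion.
```python
import numpy as np

# Bare-bones sketch of the split in SL/SFL: the client runs the layers up to
# the cut layer; the server runs the rest on the "smashed" activations.

def client_forward(x, client_weights):
    return np.maximum(0.0, x @ client_weights)   # ReLU cut-layer activations

def server_forward(smashed, server_weights):
    return smashed @ server_weights              # server-side logits
```
Varying how many layers sit on the client side is exactly the portion-size question the paper analyzes.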
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Communication-Efficient and Personalized Federated Lottery Ticket Learning [44.593986790651805]
The lottery ticket hypothesis claims that a deep neural network (i.e., a ground network) contains a number of subnetworks (i.e., winning tickets).
We propose a personalized and communication-efficient federated lottery ticket learning algorithm, coined CELL, which exploits downlink broadcast for communication efficiency.
arXiv Detail & Related papers (2021-04-26T12:01:41Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
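WAFFLe's combination of an Indian Buffet Process with a shared dictionary of weight factors can be loosely sketched as below. The stick-breaking construction of the IBP feature probabilities is standard; everything else (the toy dictionary combination, the hyperparameter `alpha`) is our illustrative assumption, and the paper's actual inference is considerably more involved.
```python
import numpy as np

# Very loose sketch: a shared dictionary of weight factors, with each client
# activating a sparse subset of them via an Indian Buffet Process prior
# (stick-breaking construction).

rng = np.random.default_rng(0)

def ibp_feature_probs(num_factors, alpha=2.0):
    v = rng.beta(alpha, 1.0, size=num_factors)
    return np.cumprod(v)   # pi_k decreases with k: later factors used rarely

def sample_client_weights(dictionary, alpha=2.0):
    """dictionary: (K, d) shared weight factors -> one client's weight vector."""
    pi = ibp_feature_probs(len(dictionary), alpha)
    z = rng.random(len(dictionary)) < pi          # per-client binary selection
    return z.astype(float) @ dictionary           # sum of the active factors
```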
- Ternary Compression for Communication-Efficient Federated Learning [17.97683428517896]
Federated learning provides a potential solution to privacy-preserving and secure machine learning.
We propose a ternary federated averaging protocol (T-FedAvg) to reduce the upstream and downstream communication of federated learning systems.
Our results show that the proposed T-FedAvg is effective in reducing communication costs and can even achieve slightly better performance on non-IID data.
arXiv Detail & Related papers (2020-03-07T11:55:34Z)
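A hedged sketch of ternary compression of a weight tensor before transmission, in the spirit of T-FedAvg. The threshold rule (0.7 x mean |w|) follows the common ternary-weight-network heuristic and is an assumption here; the paper specifies its own quantization scheme.
```python
import numpy as np

# Hedged sketch of ternary compression before transmission.

def ternarize(w):
    delta = 0.7 * np.abs(w).mean()          # magnitude threshold
    t = np.sign(w) * (np.abs(w) > delta)    # values in {-1, 0, +1}
    nonzero = t != 0
    scale = np.abs(w[nonzero]).mean() if nonzero.any() else 0.0
    # The int8 payload is already 4x smaller than float32; packing the
    # ternary values into 2 bits would push that towards ~16x.
    return t.astype(np.int8), scale

def dequantize(t, scale):
    return t.astype(np.float32) * scale
```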