Communication-Efficient and Personalized Federated Lottery Ticket
Learning
- URL: http://arxiv.org/abs/2104.12501v1
- Date: Mon, 26 Apr 2021 12:01:41 GMT
- Title: Communication-Efficient and Personalized Federated Lottery Ticket
Learning
- Authors: Sejin Seo, Seung-Woo Ko, Jihong Park, Seong-Lyun Kim, and Mehdi Bennis
- Abstract summary: The lottery ticket hypothesis claims that a deep neural network (i.e., ground network) contains a number of subnetworks (i.e., winning tickets).
We propose a personalized and communication-efficient federated lottery ticket learning algorithm, coined CELL, which exploits downlink broadcast for communication efficiency.
- Score: 44.593986790651805
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The lottery ticket hypothesis (LTH) claims that a deep neural
network (i.e., ground network) contains a number of subnetworks (i.e., winning
tickets), each of which can match the inference accuracy of the ground
network. Federated learning (FL) has recently been applied in LotteryFL to
discover such winning tickets in a distributed way, achieving higher
multi-task learning accuracy than vanilla FL. Nonetheless, LotteryFL relies on
unicast downlink transmission and does not mitigate stragglers, which calls
its scalability into question. Motivated by this, in this article we propose a
personalized and communication-efficient federated lottery ticket learning
algorithm, coined CELL, which exploits downlink broadcast for communication
efficiency. Furthermore, it utilizes a novel user grouping method that
alternates between FL and lottery learning to mitigate stragglers. Numerical
simulations validate that CELL achieves up to 3.6% higher personalized task
classification accuracy with a 4.3x smaller total communication cost until
convergence on the CIFAR-10 dataset.
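To make the winning-ticket mechanics concrete, below is a minimal Python sketch of the idea shared by LotteryFL and CELL as described in the abstract: each client magnitude-prunes the broadcast model into a personalized binary mask and trains only the surviving weights. The keep ratio, the toy gradient, and all function names are illustrative assumptions; CELL's broadcast scheduling and user grouping are not modeled here.
```python
# Minimal sketch (not the authors' code) of personalized winning tickets:
# a client derives a binary mask by magnitude pruning the broadcast model,
# then trains and uploads only the masked (surviving) parameters.
import numpy as np

def magnitude_mask(weights: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Keep the largest-magnitude `keep_ratio` fraction of weights (binary mask)."""
    k = max(1, int(keep_ratio * weights.size))
    threshold = np.sort(np.abs(weights), axis=None)[-k]
    return (np.abs(weights) >= threshold).astype(weights.dtype)

def client_update(global_w, mask, grad_fn, lr=0.1, steps=5):
    """Fine-tune only the unpruned coordinates of the broadcast model."""
    w = global_w * mask                      # personalize via the binary mask
    for _ in range(steps):
        w = w - lr * grad_fn(w) * mask       # pruned weights stay exactly zero
    return w

# Toy demo: one client prunes the broadcast model down to a 30% "ticket".
rng = np.random.default_rng(0)
global_w = rng.normal(size=(8, 8))           # model broadcast on the downlink
mask = magnitude_mask(global_w, keep_ratio=0.3)
local_w = client_update(global_w, mask, grad_fn=lambda w: 2 * w)  # toy loss: ||w||^2
print(f"sparsity: {1 - mask.mean():.2f}, trained weights: {int(mask.sum())}")
```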
Related papers
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA
Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as ConFederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- FedDec: Peer-to-peer Aided Federated Learning [15.952956981784219]
Federated learning (FL) has enabled training machine learning models exploiting the data of multiple agents without compromising privacy.
FL is known to be vulnerable to data heterogeneity, partial device participation, and infrequent communication with the server.
We present FedDec, an algorithm that interleaves peer-to-peer communication and parameter averaging between the local gradient updates of FL.
arXiv Detail & Related papers (2023-06-11T16:30:57Z)
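The FedDec summary above describes interleaving peer-to-peer averaging with the local gradient updates of FL. A hedged sketch of that interleaving, using a uniform mixing rule and a toy quadratic loss as stand-ins for the algorithm's actual specification:
```python
# Hedged sketch of the FedDec idea: alternate a local gradient step with
# parameter averaging over a peer-to-peer communication graph. The mixing
# weights and schedule here are illustrative assumptions.
import numpy as np

def p2p_round(params, neighbors, grad_fns, lr=0.05):
    """One round: local gradient step per agent, then neighbor averaging."""
    # 1) local gradient updates (the standard FL local step)
    updated = [w - lr * g(w) for w, g in zip(params, grad_fns)]
    # 2) peer-to-peer averaging over the communication graph
    mixed = []
    for i, w in enumerate(updated):
        peers = [updated[j] for j in neighbors[i]] + [w]
        mixed.append(sum(peers) / len(peers))    # uniform mixing (assumption)
    return mixed

# Toy demo: 3 agents on a line graph, agent i minimizing (w - target_i)^2.
targets = [np.array([1.0]), np.array([2.0]), np.array([3.0])]
grad_fns = [lambda w, t=t: 2 * (w - t) for t in targets]
params = [np.zeros(1) for _ in targets]
neighbors = {0: [1], 1: [0, 2], 2: [1]}
for _ in range(50):
    params = p2p_round(params, neighbors, grad_fns)
print([round(w.item(), 3) for w in params])      # values cluster around 2.0
```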
- On the Design of Communication-Efficient Federated Learning for Health Monitoring [21.433739206682404]
We propose a communication-efficient federated learning (CEFL) framework that involves client clustering and transfer learning.
CEFL can save up to 98.45% in communication cost while incurring less than 3% accuracy loss compared to conventional FL.
arXiv Detail & Related papers (2022-11-30T12:52:23Z)
- OFedQIT: Communication-Efficient Online Federated Learning via Quantization and Intermittent Transmission [7.6058140480517356]
Online federated learning (OFL) is a promising framework to collaboratively learn a sequence of non-linear functions (or models) from distributed streaming data.
We propose a communication-efficient OFL algorithm (named OFedQIT) by means of quantization and intermittent transmission.
Our analysis reveals that OFedQIT successfully addresses the drawbacks of OFedAvg while maintaining superior learning accuracy.
arXiv Detail & Related papers (2022-05-13T07:46:43Z)
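The OFedQIT entry above names two communication-saving levers: quantization and intermittent transmission. The sketch below illustrates generic versions of both (an unbiased stochastic quantizer and a fixed transmit period); the paper's exact quantizer and schedule may differ.
```python
# Generic illustrations of the two levers summarized for OFedQIT; both the
# quantizer and the transmit schedule are assumptions, not the paper's scheme.
import numpy as np

def stochastic_quantize(x: np.ndarray, levels: int = 16) -> np.ndarray:
    """Unbiased stochastic rounding of x onto a uniform grid with `levels` bins."""
    scale = float(np.max(np.abs(x))) or 1.0      # guard against an all-zero vector
    y = np.abs(x) / scale * (levels - 1)         # map magnitudes to [0, levels-1]
    low = np.floor(y)
    q = low + (np.random.random(x.shape) < y - low)   # round up w.p. (y - low)
    return np.sign(x) * q / (levels - 1) * scale

def should_transmit(round_idx: int, period: int = 4) -> bool:
    """Intermittent transmission: upload only every `period`-th round."""
    return round_idx % period == 0

# Toy demo: only quantized payloads leave the device, and only intermittently.
update = np.random.randn(6)
for t in range(8):
    if should_transmit(t):
        payload = stochastic_quantize(update)    # what actually goes uplink
        print(t, np.round(payload, 3))
```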
- Dual Lottery Ticket Hypothesis [71.95937879869334]
Lottery Ticket Hypothesis (LTH) provides a novel view to investigate sparse network training and maintain its capacity.
In this work, we regard the winning ticket from LTH as a subnetwork that is already in a trainable condition, and take its performance as our benchmark.
We propose a simple sparse network training strategy, Random Sparse Network Transformation (RST), to substantiate our Dual Lottery Ticket Hypothesis (DLTH).
arXiv Detail & Related papers (2022-03-08T18:06:26Z)
- The Elastic Lottery Ticket Hypothesis [106.79387235014379]
The Lottery Ticket Hypothesis has drawn keen attention to identifying sparse trainable subnetworks, or winning tickets.
The most effective method to identify such winning tickets is still Iterative Magnitude-based Pruning (IMP); a minimal sketch of the IMP loop appears after this entry.
We propose a variety of strategies to tweak the winning tickets found from different networks of the same model family.
arXiv Detail & Related papers (2021-03-30T17:53:45Z)
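As noted in the Elastic LTH summary, Iterative Magnitude-based Pruning (IMP) remains the standard way to find winning tickets. Below is a minimal sketch of the classic IMP loop (train, prune the smallest surviving weights, rewind to the original initialization, repeat); the separable quadratic objective is an illustrative stand-in for real training.
```python
# Minimal IMP sketch: iteratively prune the smallest-magnitude surviving
# weights and rewind to the original initialization before each retrain.
import numpy as np

def train(w, mask, grad_fn, lr=0.1, steps=100):
    """Gradient descent restricted to the unpruned coordinates."""
    for _ in range(steps):
        w = (w - lr * grad_fn(w)) * mask
    return w

def imp(init_w, grad_fn, prune_per_round=0.2, rounds=3):
    """Train, prune the smallest surviving weights, rewind to init, repeat."""
    mask = np.ones_like(init_w)
    for _ in range(rounds):
        w = train(init_w * mask, mask, grad_fn)      # rewind: restart from init
        alive = np.abs(w[mask == 1])                 # magnitudes of survivors
        cutoff = np.quantile(alive, prune_per_round)
        mask = mask * (np.abs(w) > cutoff)           # drop the smallest fraction
    return mask

# Toy demo on a separable quadratic whose optimum has varied magnitudes.
rng = np.random.default_rng(1)
w0 = rng.normal(size=20)
targets = rng.normal(size=20)
mask = imp(w0, grad_fn=lambda w: 2 * (w - targets))  # toy loss: ||w - targets||^2
print(f"final sparsity: {1 - mask.mean():.2f}")
```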
- LotteryFL: Personalized and Communication-Efficient Federated Learning with Lottery Ticket Hypothesis on Non-IID Datasets [52.60094373289771]
Federated learning is a popular distributed machine learning paradigm with enhanced privacy.
We propose LotteryFL -- a personalized and communication-efficient federated learning framework.
We show that LotteryFL significantly outperforms existing solutions in terms of personalization and communication cost.
arXiv Detail & Related papers (2020-08-07T20:45:12Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
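The delay-minimization entry above proposes a bisection search for the optimal solution. Below is a generic sketch of that idea: bisect over a candidate delay T and test a monotone feasibility predicate. The per-user computation-plus-transmission constraint used here is a simplified assumption, not the paper's actual system model.
```python
# Generic bisection-search sketch for a minimum-delay problem: feasibility is
# monotone in T, so binary search over T converges to the smallest feasible
# delay. The predicate below is an illustrative stand-in.

def feasible(T: float, comp_times, tx_times) -> bool:
    """Can every user finish local computation and upload within delay T?"""
    return all(c + x <= T for c, x in zip(comp_times, tx_times))

def min_delay(comp_times, tx_times, lo=0.0, hi=1e3, tol=1e-6) -> float:
    """Bisection: smallest T for which feasible(T) holds."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(mid, comp_times, tx_times):
            hi = mid     # feasible: try a smaller delay
        else:
            lo = mid     # infeasible: allow more time
    return hi

print(min_delay(comp_times=[0.8, 1.2, 0.5], tx_times=[0.3, 0.1, 0.6]))  # ~1.3
```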