FedLP: Layer-wise Pruning Mechanism for Communication-Computation
Efficient Federated Learning
- URL: http://arxiv.org/abs/2303.06360v1
- Date: Sat, 11 Mar 2023 09:57:00 GMT
- Title: FedLP: Layer-wise Pruning Mechanism for Communication-Computation
Efficient Federated Learning
- Authors: Zheqi Zhu, Yuchen Shi, Jiajun Luo, Fei Wang, Chenghui Peng, Pingyi
Fan, Khaled B. Letaief
- Abstract summary: Federated learning (FL) has prevailed as an efficient and privacy-preserving scheme for distributed learning.
We formulate an explicit FL pruning framework, FedLP (Federated Layer-wise Pruning), which is model-agnostic and universal for different types of deep learning models.
- Score: 15.665720478360557
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has prevailed as an efficient and
privacy-preserving scheme for distributed learning. In this work, we mainly
focus on optimizing computation and communication in FL from the perspective
of pruning. By adopting layer-wise pruning in local training and federated
updating, we formulate an explicit FL pruning framework, FedLP (Federated
Layer-wise Pruning), which is model-agnostic and universal for different
types of deep learning models. Two specific schemes of FedLP are designed
for scenarios with homogeneous local models and with heterogeneous ones.
Both theoretical and experimental evaluations verify that FedLP relieves the
communication and computation bottlenecks of the system with only marginal
performance decay. To the best of our knowledge, FedLP is the first
framework that formally introduces layer-wise pruning into FL. Within the
scope of federated learning, further variants and combinations can be
designed based on FedLP.
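The abstract does not spell out the pruning rule, so the following is only a
minimal sketch of the idea of layer-wise pruning in federated updating. It
assumes the homogeneous-model scheme keeps or drops whole layers at random
when clients upload, and that the server averages each layer over the clients
that uploaded it; the layer names, keep probability, and stand-in local
training step are hypothetical and not taken from the paper.

```python
# Minimal sketch of layer-wise pruned federated aggregation (assumptions
# noted above; this is not the paper's actual algorithm).
import numpy as np

rng = np.random.default_rng(0)
LAYERS = {"conv1": (8, 3, 3, 3), "conv2": (16, 8, 3, 3), "fc": (10, 16)}

def local_update(global_params, client_data_scale=0.01):
    """Stand-in for local training: perturb the global parameters."""
    return {name: w + client_data_scale * rng.standard_normal(w.shape)
            for name, w in global_params.items()}

def prune_layers(params, keep_prob=0.7):
    """Layer-wise pruning of the upload: drop whole layers at random."""
    return {name: w for name, w in params.items() if rng.random() < keep_prob}

def aggregate(uploads, global_params):
    """Average each layer over the clients that actually uploaded it."""
    new_params = {}
    for name, w_global in global_params.items():
        received = [u[name] for u in uploads if name in u]
        new_params[name] = np.mean(received, axis=0) if received else w_global
    return new_params

global_params = {name: rng.standard_normal(shape) for name, shape in LAYERS.items()}
for rnd in range(3):  # a few federated rounds with 5 clients each
    uploads = [prune_layers(local_update(global_params)) for _ in range(5)]
    global_params = aggregate(uploads, global_params)
    kept = sum(len(u) for u in uploads)
    print(f"round {rnd}: {kept}/{5 * len(LAYERS)} layer tensors communicated")
```

The printed count of communicated layer tensors per round gives a rough sense
of the communication saving the abstract refers to.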
Related papers
- Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning? [50.03434441234569]
Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing.
While various algorithms have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention.
arXiv Detail & Related papers (2024-09-05T19:00:18Z)
- Communication Efficient and Privacy-Preserving Federated Learning Based on Evolution Strategies [0.0]
Federated learning (FL) is an emerging paradigm for training deep neural networks (DNNs) in a distributed manner.
In this work, we present a federated learning algorithm based on evolution strategies (FedES), a zeroth-order training method.
arXiv Detail & Related papers (2023-11-05T21:40:46Z)
- Deep Equilibrium Models Meet Federated Learning [71.57324258813675]
This study explores the problem of Federated Learning (FL) by utilizing Deep Equilibrium (DEQ) models instead of conventional deep learning networks.
We claim that incorporating DEQ models into the federated learning framework naturally addresses several open problems in FL.
To the best of our knowledge, this study is the first to establish a connection between DEQ models and federated learning.
arXiv Detail & Related papers (2023-05-29T22:51:40Z)
- FedIN: Federated Intermediate Layers Learning for Model Heterogeneity [7.781409257429762]
Federated learning (FL) enables edge devices to cooperatively train a globally shared model while keeping the training data local and private.
In this study, we propose an FL method called Federated Intermediate Layers Learning (FedIN), supporting heterogeneous models without relying on any public dataset.
Experimental results demonstrate the superior performance of FedIN in heterogeneous model environments compared to state-of-the-art algorithms.
arXiv Detail & Related papers (2023-04-03T07:20:43Z)
- When Federated Learning Meets Pre-trained Language Models' Parameter-Efficient Tuning Methods [22.16636947999123]
We introduce various parameter-efficient tuning (PETuning) methods into federated learning.
Specifically, we provide a holistic empirical study of representative tuning methods for pre-trained language models (PLMs) in FL.
Overall communication overhead can be significantly reduced by locally tuning and globally aggregating only lightweight model parameters (see the sketch after this list).
arXiv Detail & Related papers (2022-12-20T06:44:32Z)
- Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework for multiple participants to collaboratively learn without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z)
- Federated Active Learning (F-AL): an Efficient Annotation Strategy for Federated Learning [8.060606972572451]
Federated learning (FL) has been intensively investigated in terms of communication efficiency, privacy, and fairness.
We propose to apply active learning (AL) and a sampling strategy within the FL framework to reduce the annotation workload.
We empirically demonstrate that F-AL outperforms baseline methods in image classification tasks.
arXiv Detail & Related papers (2022-02-01T03:17:29Z)
- Communication-Efficient Consensus Mechanism for Federated Reinforcement Learning [20.891460617583302]
We show that FL can improve the policy performance of IRL in terms of training efficiency and stability.
To reach a good balance between improving the model's convergence performance and reducing the required communication and computation overheads, this paper proposes a system utility function.
arXiv Detail & Related papers (2022-01-30T04:04:24Z)
- FedComm: Federated Learning as a Medium for Covert Communication [56.376997104843355]
Federated Learning (FL) is a solution to mitigate the privacy implications related to the adoption of deep learning.
This paper thoroughly investigates the communication capabilities of an FL scheme.
We introduce FedComm, a novel multi-system covert-communication technique.
arXiv Detail & Related papers (2022-01-21T17:05:56Z)
- FedNLP: A Research Platform for Federated Learning in Natural Language Processing [55.01246123092445]
We present FedNLP, a research platform for federated learning in NLP.
FedNLP supports various popular task formulations in NLP such as text classification, sequence tagging, question answering, seq2seq generation, and language modeling.
Preliminary experiments with FedNLP reveal that there exists a large performance gap between learning on decentralized and centralized datasets.
arXiv Detail & Related papers (2021-04-18T11:04:49Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
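Several of the entries above pursue the same communication-saving goal as
FedLP. As a purely illustrative companion to the parameter-efficient tuning
entry, the sketch below shows clients tuning and uploading only lightweight
parameters while a large frozen backbone stays local; the adapter names,
shapes, and sizes are hypothetical and not taken from that paper.

```python
# Illustrative sketch (not the PETuning paper's method) of federated averaging
# over lightweight, locally tuned parameters only: the large frozen backbone
# never leaves the clients, so per-round communication is limited to the small
# adapter tensors. All names and sizes here are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
BACKBONE_SIZE = 1_000_000          # frozen parameters, kept local
ADAPTER_SHAPES = {"adapter_down": (768, 16), "adapter_up": (16, 768)}

def local_tune(global_adapters, noise=0.01):
    """Stand-in for local fine-tuning of the lightweight parameters only."""
    return {name: w + noise * rng.standard_normal(w.shape)
            for name, w in global_adapters.items()}

def aggregate(adapter_uploads):
    """Federated averaging over the lightweight parameters alone."""
    return {name: np.mean([u[name] for u in adapter_uploads], axis=0)
            for name in adapter_uploads[0]}

global_adapters = {name: 0.02 * rng.standard_normal(shape)
                   for name, shape in ADAPTER_SHAPES.items()}
for _ in range(2):  # two federated rounds with 4 clients each
    uploads = [local_tune(global_adapters) for _ in range(4)]
    global_adapters = aggregate(uploads)

sent = sum(w.size for w in global_adapters.values())
print(f"communicated {sent} adapter parameters per client per round; "
      f"the {BACKBONE_SIZE} frozen backbone parameters never leave the clients")
```

Only the adapter tensors cross the network each round, which is where the
communication reduction described in that entry comes from.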
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.