FedCliP: Federated Learning with Client Pruning
- URL: http://arxiv.org/abs/2301.06768v1
- Date: Tue, 17 Jan 2023 09:15:37 GMT
- Title: FedCliP: Federated Learning with Client Pruning
- Authors: Beibei Li, Zerui Shao, Ao Liu, Peiran Wang
- Abstract summary: Federated learning (FL) is a newly emerging distributed learning paradigm.
One fundamental bottleneck in FL is the heavy communication overheads between the distributed clients and the central server.
We propose FedCliP, the first communication-efficient FL training framework from a macro perspective.
- Score: 3.796320380104124
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a newly emerging distributed learning paradigm
that allows numerous participating clients to train machine learning models
collaboratively, each with its own data distribution and without sharing their
data. One fundamental bottleneck in FL is the heavy communication overheads of
high-dimensional models between the distributed clients and the central server.
Previous works often condense models into compact formats by gradient
compression or distillation to overcome communication limitations. In contrast,
we propose FedCliP in this work, the first communication-efficient FL training
framework from a macro perspective, which can quickly identify valid clients
participating in FL and constantly prune redundant clients.
Specifically, we first calculate a reliability score based on the training
loss and model divergence as an indicator for client pruning. We then propose a
valid-client determination approximation framework based on the reliability
score, with Gaussian Scale Mixture (GSM) modeling, for pruning federated
participating clients. Besides, we develop a communication-efficient client
pruning training method for the FL scenario. Experimental results on the MNIST
dataset show that FedCliP can save 10%~70% of communication costs for converged
models with only a 0.2% loss in accuracy.
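As a rough sketch of the client-pruning idea described in the abstract (not the authors' implementation: the score weighting, the top-k rule standing in for the GSM-based determination, and all names are assumptions), the per-round selection might look like this:

```python
def reliability_score(train_loss, model_divergence, alpha=0.5):
    """Hypothetical reliability score: clients with lower training loss and
    lower divergence from the global model score higher. The weighting
    `alpha` and the inverse form are assumptions, not from the paper."""
    return 1.0 / (alpha * train_loss + (1.0 - alpha) * model_divergence + 1e-8)

def prune_clients(client_stats, keep_ratio=0.7):
    """Keep the most reliable clients for the next round. A simple top-k
    rule stands in for the paper's GSM-based valid-client determination."""
    scores = {c["id"]: reliability_score(c["loss"], c["divergence"]) for c in client_stats}
    k = max(1, int(keep_ratio * len(client_stats)))
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Example: three clients reporting (loss, divergence) after local training.
client_stats = [
    {"id": 0, "loss": 0.42, "divergence": 0.10},
    {"id": 1, "loss": 1.35, "divergence": 0.90},  # likely pruned
    {"id": 2, "loss": 0.50, "divergence": 0.15},
]
print(prune_clients(client_stats))  # e.g. [0, 2]
```

Only the clients kept by such a rule would keep communicating with the server in later rounds, which is where the communication savings would come from.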
Related papers
- Towards Client Driven Federated Learning [7.528642177161784]
We introduce Client-Driven Federated Learning (CDFL), a novel FL framework that puts clients in the driving role.
In CDFL, each client independently and asynchronously updates its model by uploading the locally trained model to the server and receiving a customized model tailored to its local task.
arXiv Detail & Related papers (2024-05-24T10:17:49Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- FedClust: Optimizing Federated Learning on Non-IID Data through Weight-Driven Client Clustering [28.057411252785176]
Federated learning (FL) is an emerging distributed machine learning paradigm enabling collaborative model training on decentralized devices without exposing their local data.
This paper proposes FedClust, a novel CFL approach leveraging correlations between local model weights and client data distributions.
arXiv Detail & Related papers (2024-03-07T01:50:36Z)
- Fed-CVLC: Compressing Federated Learning Communications with Variable-Length Codes [54.18186259484828]
In Federated Learning (FL) paradigm, a parameter server (PS) concurrently communicates with distributed participating clients for model collection, update aggregation, and model distribution over multiple rounds.
We show strong evidence that variable-length coding is beneficial for compression in FL.
We present Fed-CVLC (Federated Learning Compression with Variable-Length Codes), which fine-tunes the code length in response to the dynamics of model updates.
arXiv Detail & Related papers (2024-02-06T07:25:21Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Federated Learning from Pre-Trained Models: A Contrastive Learning Approach [43.893267526525904]
Federated Learning (FL) is a machine learning paradigm that allows decentralized clients to learn collaboratively without sharing their private data.
Excessive computation and communication demands pose challenges to current FL frameworks.
We propose a lightweight framework where clients jointly learn to fuse the representations generated by multiple fixed pre-trained models.
arXiv Detail & Related papers (2022-09-21T03:16:57Z)
- Unifying Distillation with Personalization in Federated Learning [1.8262547855491458]
Federated learning (FL) is a decentralized privacy-preserving learning technique in which clients learn a joint collaborative model through a central aggregator without sharing their data.
In this setting, all clients learn a single common predictor (FedAvg), which does not generalize well on each client's local data due to the statistical data heterogeneity among clients.
In this paper, we address this problem with PersFL, a two-stage personalized learning algorithm.
In the first stage, PersFL finds the optimal teacher model of each client during the FL training phase. In the second stage, PersFL distills the useful knowledge from
arXiv Detail & Related papers (2021-05-31T17:54:29Z)
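For context on the FedAvg baseline referenced in the PersFL entry above, the server step is simply a data-size-weighted average of the clients' parameters; a minimal sketch (the function name and plain-list parameter representation are illustrative, not from either paper):

```python
def fedavg_aggregate(client_weights, client_sizes):
    """Weighted average of client parameters (FedAvg-style aggregation).

    client_weights: list of dicts mapping parameter name -> list of floats
    client_sizes:   local training-set size of each client
    """
    total = float(sum(client_sizes))
    aggregated = {}
    for name in client_weights[0]:
        dim = len(client_weights[0][name])
        aggregated[name] = [
            sum(w[name][i] * n / total for w, n in zip(client_weights, client_sizes))
            for i in range(dim)
        ]
    return aggregated

# Example: two clients holding 100 and 300 samples respectively.
print(fedavg_aggregate([{"w": [1.0]}, {"w": [3.0]}], [100, 300]))  # {'w': [2.5]}
```

Because every client receives this single averaged predictor, heterogeneous local distributions are all served by one shared model, which is the generalization gap PersFL's two-stage distillation targets.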
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Personalized Federated Learning with Moreau Envelopes [16.25105865597947]
Federated learning (FL) is a decentralized and privacy-preserving machine learning technique.
One challenge associated with FL is statistical diversity among clients.
We propose an algorithm for personalized FL (pFedMe) using Moreau envelopes as clients' regularized loss functions.
arXiv Detail & Related papers (2020-06-16T00:55:23Z)
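For reference, the Moreau-envelope formulation behind the last entry (pFedMe) is commonly stated as below; the notation (f_i for client i's local loss, theta_i for its personalized model, w for the shared global model, lambda for the regularization strength) follows the usual convention rather than quoting that paper verbatim:

```latex
% Per-client personalized objective: the Moreau envelope of the local loss f_i
F_i(w) = \min_{\theta_i \in \mathbb{R}^d} \Big\{ f_i(\theta_i) + \tfrac{\lambda}{2} \lVert \theta_i - w \rVert^2 \Big\}

% Global problem solved by the federation
\min_{w \in \mathbb{R}^d} \; F(w) = \frac{1}{N} \sum_{i=1}^{N} F_i(w)
```

The personalized model theta_i stays close to w (with the strength controlled by lambda) while fitting the client's own data, which is how this line of work addresses the statistical diversity among clients mentioned in the summary.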
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.