SplitFedZip: Learned Compression for Data Transfer Reduction in Split-Federated Learning
- URL: http://arxiv.org/abs/2412.17150v1
- Date: Wed, 18 Dec 2024 19:04:19 GMT
- Title: SplitFedZip: Learned Compression for Data Transfer Reduction in Split-Federated Learning
- Authors: Chamani Shiranthika, Hadi Hadizadeh, Parvaneh Saeedi, Ivan V. Bajić
- Abstract summary: Split-Federated (SplitFed) learning combines Federated and Split Learning, making it an attractive learning framework across various domains, especially privacy-sensitive ones such as healthcare.
SplitFedZip is a novel method that employs learned compression to reduce data transfer in SplitFed learning.
- Score: 5.437298646956505
- License:
- Abstract: Federated Learning (FL) enables multiple clients to train a collaborative model without sharing their local data. Split Learning (SL) allows a model to be trained in a split manner across different locations. Split-Federated (SplitFed) learning is a more recent approach that combines the strengths of FL and SL. SplitFed minimizes the computational burden of FL by balancing computation across clients and servers, while still preserving data privacy. This makes it an ideal learning framework across various domains, especially in healthcare, where data privacy is of utmost importance. However, SplitFed networks encounter numerous communication challenges, such as latency, bandwidth constraints, synchronization overhead, and a large amount of data that needs to be transferred during the learning process. In this paper, we propose SplitFedZip -- a novel method that employs learned compression to reduce data transfer in SplitFed learning. Through experiments on medical image segmentation, we show that learned compression can provide a significant data communication reduction in SplitFed learning, while maintaining the accuracy of the final trained model. The implementation is available at: https://github.com/ChamaniS/SplitFedZip.
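The abstract does not spell out the codec design, but as a rough illustration of learned compression at the split point, the sketch below compresses the cut-layer activations ("smashed data") with a small autoencoder trained under a rate-distortion-style objective. It is a minimal sketch, assuming a convolutional split point; all module names, the bottleneck size, and the L1 rate proxy are illustrative assumptions, not taken from the SplitFedZip codebase.

```python
# Hypothetical sketch of learned compression for SplitFed communication.
# Names and hyperparameters are illustrative, not from SplitFedZip.
import torch
import torch.nn as nn

class ActivationCodec(nn.Module):
    """Small autoencoder that compresses cut-layer activations ("smashed data")
    before they are transmitted from a client to the server."""
    def __init__(self, channels: int, bottleneck: int):
        super().__init__()
        self.encoder = nn.Conv2d(channels, bottleneck, kernel_size=1)
        self.decoder = nn.Conv2d(bottleneck, channels, kernel_size=1)

    def forward(self, smashed: torch.Tensor):
        code = self.encoder(smashed)   # low-dimensional tensor to transmit
        recon = self.decoder(code)     # server-side reconstruction
        return code, recon

def rd_loss(smashed, recon, code, lam=0.01):
    """Rate-distortion-style objective: keep the reconstruction close to the
    original activations while penalizing the size of the transmitted code
    (here an L1 magnitude proxy stands in for an entropy/rate term)."""
    distortion = nn.functional.mse_loss(recon, smashed)
    rate_proxy = code.abs().mean()
    return distortion + lam * rate_proxy

if __name__ == "__main__":
    codec = ActivationCodec(channels=64, bottleneck=8)
    smashed = torch.randn(2, 64, 32, 32)   # example cut-layer activations
    code, recon = codec(smashed)
    print(rd_loss(smashed, recon, code).item())
```

In an actual SplitFed round, only the code tensor would cross the network, and an analogous codec could be applied to the gradients flowing back to the client; the lambda weight trades off transfer size against segmentation accuracy.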
Related papers
- Prune at the Clients, Not the Server: Accelerated Sparse Training in Federated Learning [56.21666819468249]
Resource constraints of clients and communication costs pose major problems for training large models in Federated Learning.
We introduce Sparse-ProxSkip, which combines training and acceleration in a sparse setting.
We demonstrate the good performance of Sparse-ProxSkip in extensive experiments.
arXiv Detail & Related papers (2024-05-31T05:21:12Z) - Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - Visual Transformer Meets CutMix for Improved Accuracy, Communication Efficiency, and Data Privacy in Split Learning [47.266470238551314]
This article seeks a distributed learning solution for visual transformer (ViT) architectures.
ViTs often have larger model sizes, and are computationally expensive, making federated learning (FL) ill-suited.
We propose a new form of CutSmashed data by randomly punching and compressing the original smashed data.
We develop a novel SL framework for ViT, coined CutMixSL, communicating CutSmashed data.
arXiv Detail & Related papers (2022-07-01T07:00:30Z) - Mixed Federated Learning: Joint Decentralized and Centralized Learning [10.359026922702142]
Federated learning (FL) enables learning from decentralized privacy-sensitive data.
This paper introduces mixed FL, which incorporates an additional loss term calculated at the coordinating server.
arXiv Detail & Related papers (2022-05-26T22:22:15Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Differentially Private Label Protection in Split Learning [20.691549091238965]
Split learning is a distributed training framework that allows multiple parties to jointly train a machine learning model over partitioned data.
Recent works showed that split learning suffers from severe privacy risks: a semi-honest adversary can easily reconstruct labels.
We propose TPSL (Transcript Private Split Learning), a generic gradient-based split learning framework that provides a provable differential privacy guarantee.
arXiv Detail & Related papers (2022-03-04T00:35:03Z) - Server-Side Local Gradient Averaging and Learning Rate Acceleration for Scalable Split Learning [82.06357027523262]
Federated learning (FL) and split learning (SL) are two spearheads, each with its own pros and cons, suited respectively to many user clients and to large models.
In this work, we first identify the fundamental bottlenecks of SL, and thereby propose a scalable SL framework, coined SGLR.
arXiv Detail & Related papers (2021-12-11T08:33:25Z) - Vulnerability Due to Training Order in Split Learning [0.0]
In split learning, an additional privacy-preserving algorithm called the no-peek algorithm, which is robust to adversarial attacks, can be incorporated.
We show that a model trained using the data of all clients does not perform well on the data of the client considered earliest in a training round.
We also demonstrate that the SplitFedv3 algorithm mitigates this problem while still leveraging the privacy benefits provided by split learning.
arXiv Detail & Related papers (2021-03-26T06:30:54Z)