Splitfed learning without client-side synchronization: Analyzing
client-side split network portion size to overall performance
- URL: http://arxiv.org/abs/2109.09246v1
- Date: Sun, 19 Sep 2021 22:57:23 GMT
- Title: Splitfed learning without client-side synchronization: Analyzing
client-side split network portion size to overall performance
- Authors: Praveen Joshi, Chandra Thapa, Seyit Camtepe, Mohammed Hasanuzzaman,
Ted Scully and Haithem Afli
- Abstract summary: Federated Learning (FL), Split Learning (SL), and SplitFed Learning (SFL) are three recent developments in distributed machine learning.
This paper studies SFL without client-side model synchronization.
SFL provides only 1%-2% better accuracy than Multi-head Split Learning on the MNIST test set.
- Score: 4.689140226545214
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated Learning (FL), Split Learning (SL), and SplitFed Learning (SFL) are
three recent developments in distributed machine learning that are gaining
attention due to their ability to preserve the privacy of raw data. Thus, they
are widely applicable in various domains where data is sensitive, such as
large-scale medical image classification, internet-of-medical-things, and
cross-organization phishing email detection. SFL sits at the confluence of FL
and SL and brings together the best of both: parallel client-side model
updates, inherited from the FL paradigm, and a higher level of model privacy
during training, obtained by splitting the model between the clients and the
server as in SL. However, SFL incurs communication and computation overhead on
the client side because it requires client-side model synchronization. For
resource-constrained clients, removing this requirement is necessary to make
learning more efficient. In this regard, this paper studies SFL without
client-side model synchronization. The resulting architecture is known as
Multi-head Split Learning. Our empirical studies, using a ResNet18 model on
MNIST data under an IID data distribution among the distributed clients, find
that Multi-head Split Learning is feasible and performs comparably to SFL: SFL
provides only 1%-2% better accuracy than Multi-head Split Learning on the MNIST
test set. To further strengthen our results, we also study Multi-head Split
Learning with client-side model portions of various sizes and their impact on
overall performance. Here, too, we find only a minimal impact on the overall
performance of the model.
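To make the distinction concrete, here is a minimal sketch (not the paper's code) of one training round in each scheme, assuming a toy model and random IID batches in place of the paper's ResNet18/MNIST setup: the only difference between SplitFed Learning and Multi-head Split Learning is whether the client-side sub-networks are synchronized (averaged, FedAvg style) at the end of the round.

```python
# Minimal sketch contrasting SplitFed Learning (SFL) with Multi-head Split Learning.
# The model, data, and hyper-parameters are illustrative stand-ins, not the paper's setup.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

NUM_CLIENTS, CUT = 3, 2                 # layers [0:CUT] run on the clients, the rest on the server
full_model = nn.Sequential(             # tiny stand-in for the ResNet18 used in the paper
    nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10)
)
client_template = nn.Sequential(*list(full_model.children())[:CUT])
server_part = nn.Sequential(*list(full_model.children())[CUT:])

clients = [copy.deepcopy(client_template) for _ in range(NUM_CLIENTS)]
client_opts = [torch.optim.SGD(c.parameters(), lr=0.01) for c in clients]
server_opt = torch.optim.SGD(server_part.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train_round(synchronize_clients):
    """One global round; True -> SplitFed Learning, False -> Multi-head Split Learning."""
    for cid, client in enumerate(clients):
        x = torch.randn(32, 1, 28, 28)             # dummy IID MNIST-like local batch
        y = torch.randint(0, 10, (32,))
        smashed = client(x)                        # client-side forward pass up to the cut layer
        loss = loss_fn(server_part(smashed), y)    # server-side forward pass and loss
        client_opts[cid].zero_grad()
        server_opt.zero_grad()
        loss.backward()                            # gradients flow back through the cut layer
        server_opt.step()
        client_opts[cid].step()
    if synchronize_clients:                        # SFL only: average (FedAvg) the client-side models
        avg = {k: torch.stack([c.state_dict()[k] for c in clients]).mean(0)
               for k in clients[0].state_dict()}
        for c in clients:
            c.load_state_dict(avg)

train_round(synchronize_clients=True)    # SplitFed Learning round
train_round(synchronize_clients=False)   # Multi-head Split Learning round (no synchronization)
```

Dropping the final averaging step is all that separates Multi-head Split Learning from SFL here, which is exactly the client-side communication and computation overhead the paper removes.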
Related papers
- Embracing Federated Learning: Enabling Weak Client Participation via Partial Model Training [21.89214794178211]
In Federated Learning (FL), clients may have weak devices that cannot train the full model or even hold it in their memory space.
We propose EmbracingFL, a general FL framework that allows all available clients to join the distributed training.
Our empirical study shows that EmbracingFL consistently achieves high accuracy, as if all clients were strong, outperforming state-of-the-art width-reduction methods.
arXiv Detail & Related papers (2024-06-21T13:19:29Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
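For the global/personalized split described in the pruning-and-personalization entry above, the sketch below shows one way to picture it, assuming a magnitude-pruned shared feature extractor that is federated and a per-device head that never leaves the device; the pruning rule, ratio, and model are illustrative assumptions rather than the paper's algorithm.

```python
# Rough sketch (assumptions, not the paper's method): a shared "global" part is pruned by a
# magnitude mask and averaged across devices, while a "personalized" head stays on each device.
import copy
import torch
import torch.nn as nn

def magnitude_mask(w, keep_ratio=0.5):
    """Keep the largest-magnitude weights; the pruning ratio is an illustrative choice."""
    k = int(w.numel() * keep_ratio)
    threshold = w.abs().flatten().kthvalue(w.numel() - k + 1).values
    return (w.abs() >= threshold).float()

global_part = nn.Sequential(nn.Linear(32, 64), nn.ReLU())    # shared representation layers
heads = [nn.Linear(64, 10) for _ in range(3)]                # personalized heads, never uploaded
mask = magnitude_mask(global_part[0].weight.detach())

def local_round(device_id, x, y):
    local = copy.deepcopy(global_part)                        # start from the broadcast global part
    local[0].weight.data *= mask                              # apply the shared pruning mask
    params = list(local.parameters()) + list(heads[device_id].parameters())
    opt = torch.optim.SGD(params, lr=0.01)
    loss = nn.functional.cross_entropy(heads[device_id](local(x)), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return local.state_dict()                                 # only the pruned global part is sent

states = [local_round(i, torch.randn(16, 32), torch.randint(0, 10, (16,))) for i in range(3)]
global_part.load_state_dict({k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]})
```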
- PFL-GAN: When Client Heterogeneity Meets Generative Models in Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation.
The empirical results through the rigorous experimentation on several well-known datasets demonstrate the effectiveness of PFL-GAN.
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
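The PFL-GAN summary above only gives the general recipe (estimate similarity among clients, then aggregate with similarity-derived weights), so the sketch below is a deliberately generic illustration of that recipe; the client descriptors, cosine similarity, and shared payloads are placeholders, not the paper's GAN sharing and aggregation strategy.

```python
# Generic "similarity -> weighted aggregation" sketch; everything here is an assumption.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
client_reprs = torch.randn(4, 16)         # stand-in per-client descriptors (e.g., update vectors)
sim = F.cosine_similarity(client_reprs.unsqueeze(1), client_reprs.unsqueeze(0), dim=-1)
weights = torch.softmax(sim, dim=1)       # row i: how much client i borrows from each peer

client_payloads = torch.randn(4, 100)     # stand-in for what is shared (e.g., generator weights)
personalized = weights @ client_payloads  # per-client weighted collaborative aggregation
print(personalized.shape)                 # torch.Size([4, 100])
```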
- When MiniBatch SGD Meets SplitFed Learning: Convergence Analysis and Performance Evaluation [9.815046814597238]
Federated learning (FL) enables collaborative model training across distributed clients without sharing raw data.
SplitFed learning (SFL) is a recent distributed approach that alleviates computation workload at the client device by splitting the model at a cut layer into two parts.
MiniBatch-SFL incorporates MiniBatch SGD into SFL, where the clients train the client-side model in an FL fashion while the server trains the server-side model.
arXiv Detail & Related papers (2023-08-23T06:51:22Z)
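A rough picture of the MiniBatch-SFL entry above, under the assumption that the server forms a single minibatch from all clients' smashed data for its update while the client-side models are aggregated FedAvg style; the cut layer, model, and data are illustrative only.

```python
# Illustrative MiniBatch-SFL step (details are assumptions, not the paper's exact algorithm).
import copy
import torch
import torch.nn as nn

cut_layer = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU())
clients = [copy.deepcopy(cut_layer) for _ in range(3)]
server = nn.Linear(64, 10)
c_opts = [torch.optim.SGD(c.parameters(), lr=0.01) for c in clients]
s_opt = torch.optim.SGD(server.parameters(), lr=0.01)

smashed, labels = [], []
for c in clients:                                   # each client forwards its own local batch
    x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
    smashed.append(c(x))
    labels.append(y)

batch = torch.cat(smashed)                          # server builds one minibatch from all clients
loss = nn.functional.cross_entropy(server(batch), torch.cat(labels))
for opt in c_opts + [s_opt]:
    opt.zero_grad()
loss.backward()                                     # one minibatch-SGD step for the server model;
for opt in c_opts + [s_opt]:                        # gradients also reach every client-side model
    opt.step()

avg = {k: torch.stack([c.state_dict()[k] for c in clients]).mean(0) for k in clients[0].state_dict()}
for c in clients:                                   # FL-style aggregation of the client-side models
    c.load_state_dict(avg)
```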
- Client Selection for Generalization in Accelerated Federated Learning: A Multi-Armed Bandit Approach [20.300740276237523]
Federated learning (FL) is an emerging machine learning (ML) paradigm used to train models across multiple nodes (i.e., clients) holding local data sets.
We develop a novel algorithm to achieve this goal, dubbed Bandit Scheduling for FL (BSFL).
arXiv Detail & Related papers (2023-03-18T09:45:58Z)
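The BSFL algorithm itself is not described in the summary above, so the following is only a generic UCB-style sketch of the underlying idea of treating clients as bandit arms and scheduling the ones whose updates look most useful; the reward signal and selection rule are assumptions, not BSFL.

```python
# Generic upper-confidence-bound (UCB) client scheduling sketch; not the BSFL algorithm.
import math
import random

random.seed(0)
num_clients, per_round = 10, 3
counts = [0] * num_clients          # how often each client has been scheduled
reward_sums = [0.0] * num_clients   # accumulated feedback per client

def observed_reward(client):
    """Hypothetical feedback, e.g. validation improvement after using this client's update."""
    return random.gauss(mu=client / num_clients, sigma=0.1)

for t in range(1, 51):
    ucb = [reward_sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
           if counts[i] else float("inf") for i in range(num_clients)]
    selected = sorted(range(num_clients), key=lambda i: ucb[i], reverse=True)[:per_round]
    for i in selected:              # the scheduled clients participate in this FL round
        counts[i] += 1
        reward_sums[i] += observed_reward(i)

print("most frequently scheduled clients:", sorted(range(num_clients), key=counts.__getitem__)[-3:])
```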
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
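A minimal sketch of the representation-sharing idea from the entry above: a client aligns its embeddings of a shared unlabeled batch with a peer's embeddings through an InfoNCE-style contrastive loss. The encoders, temperature, and exact loss form are assumptions; the paper's objective and exchange protocol may differ.

```python
# Contrastive alignment of one client's representations with a peer's (assumed setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
encoder_a, encoder_b = nn.Linear(32, 16), nn.Linear(32, 16)     # two clients' local encoders
opt = torch.optim.SGD(encoder_a.parameters(), lr=0.1)

public_x = torch.randn(8, 32)                            # shared unlabeled batch both clients can see
z_a = F.normalize(encoder_a(public_x), dim=1)            # client A's representations
z_b = F.normalize(encoder_b(public_x), dim=1).detach()   # peer representations, received as targets

logits = z_a @ z_b.t() / 0.1                             # pairwise similarities, temperature 0.1
targets = torch.arange(8)                                # positive pair: the same sample on both clients
loss = F.cross_entropy(logits, targets)                  # pulls matching representations together
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```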
- No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices [79.16481453598266]
We propose InclusiveFL, a client-inclusive federated learning method that handles clients with heterogeneous device capabilities.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities.
We also propose an effective method to share the knowledge among multiple local models with different sizes.
arXiv Detail & Related papers (2022-02-16T13:03:27Z)
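A small illustration of the core InclusiveFL idea stated above, assuming models of different depths are assigned to clients and each layer is averaged over the clients whose model actually contains it; how InclusiveFL transfers knowledge into the layers that small models lack is the paper's contribution and is not reproduced here.

```python
# Layer-wise aggregation across heterogeneous model sizes (illustrative assumptions only).
from collections import OrderedDict
import torch
import torch.nn as nn

def make_model(depth):
    blocks = [(f"layer{i}", nn.Sequential(nn.Linear(32, 32), nn.ReLU())) for i in range(depth)]
    blocks.append(("head", nn.Linear(32, 10)))        # classifier shared by every model size
    return nn.Sequential(OrderedDict(blocks))

local_models = [make_model(d) for d in (1, 2, 3)]      # weak, medium, and strong client

def aggregate(models):
    largest = models[-1]                               # the largest model defines the layer set
    agg = {}
    for name, _ in largest.named_parameters():
        shared = [dict(m.named_parameters())[name].detach()
                  for m in models if name in dict(m.named_parameters())]
        agg[name] = torch.stack(shared).mean(0)        # layers held by fewer clients average fewer terms
    return agg

print({name: tuple(p.shape) for name, p in aggregate(local_models).items()})
```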
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
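The BLADE-FL round described above can be walked through with the toy simulation below: every client trains and broadcasts its model, clients race to generate a block holding the received models, and everyone aggregates the models recorded in that block. The hash-based mining and random-vector "models" are stand-ins, not the paper's protocol, and the lazy-client analysis is not modeled.

```python
# Toy one-round BLADE-FL walkthrough with placeholder training and a toy proof of work.
import hashlib
import random

random.seed(0)
NUM_CLIENTS, DIM = 5, 8

def local_training(model):
    """Stand-in local update; in the real system this is SGD on the client's own data."""
    return [w + random.gauss(0, 0.01) for w in model]

def try_mine(client_id, payload, difficulty=2):
    """Toy proof of work: find a nonce whose hash starts with `difficulty` zeros."""
    for nonce in range(100_000):
        digest = hashlib.sha256(f"{client_id}|{payload}|{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
    return None

global_model = [0.0] * DIM
# 1) every client trains locally and broadcasts its model to the other clients
broadcast = {cid: local_training(global_model) for cid in range(NUM_CLIENTS)}
# 2) clients compete to generate a block based on the received models
mining = {cid: try_mine(cid, str(broadcast)) for cid in range(NUM_CLIENTS)}
winner = min(mining, key=lambda cid: float("inf") if mining[cid] is None else mining[cid])
block = {"miner": winner, "models": broadcast}
# 3) everyone aggregates the models from the generated block before the next local round
global_model = [sum(m[i] for m in block["models"].values()) / NUM_CLIENTS for i in range(DIM)]
print("block mined by client", winner, "-> new global model:", [round(w, 3) for w in global_model])
```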
- Advancements of federated learning towards privacy preservation: from federated learning to split learning [1.3700362496838854]
In the distributed collaborative machine learning (DCML) paradigm, federated learning (FL) has recently attracted much attention due to its applications in health, finance, and the latest innovations such as Industry 4.0 and smart vehicles.
In practical scenarios, not all clients have sufficient computing resources (e.g., Internet of Things devices), the machine learning model has millions of parameters, and keeping the model private between the server and the clients is a prime concern.
Recently, a hybrid of FL and SL, called splitfed learning, was introduced to combine the benefits of both FL (faster training/testing time) and SL (model splitting and a higher level of model privacy during training).
arXiv Detail & Related papers (2020-11-25T05:01:33Z) - Ensemble Distillation for Robust Model Fusion in Federated Learning [72.61259487233214]
Federated Learning (FL) is a machine learning setting where many devices collaboratively train a machine learning model.
In most of the current training schemes the central model is refined by averaging the parameters of the server model and the updated parameters from the client side.
We propose ensemble distillation for model fusion, i.e., training the central classifier on unlabeled data using the outputs of the client models.
arXiv Detail & Related papers (2020-06-12T14:49:47Z) - SplitFed: When Federated Learning Meets Split Learning [16.212941272007285]
- SplitFed: When Federated Learning Meets Split Learning [16.212941272007285]
Federated learning (FL) and split learning (SL) are two popular distributed machine learning approaches.
This paper presents a novel approach, named splitfed learning (SFL), that amalgamates the two approaches.
SFL provides test accuracy and communication efficiency similar to SL while significantly decreasing the computation time per global epoch compared to SL when there are multiple clients.
arXiv Detail & Related papers (2020-04-25T08:52:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.