SplitFed: When Federated Learning Meets Split Learning
- URL: http://arxiv.org/abs/2004.12088v5
- Date: Wed, 16 Feb 2022 22:02:09 GMT
- Title: SplitFed: When Federated Learning Meets Split Learning
- Authors: Chandra Thapa, M.A.P. Chamikara, Seyit Camtepe, Lichao Sun
- Abstract summary: Federated learning (FL) and split learning (SL) are two popular distributed machine learning approaches.
This paper presents a novel approach, named splitfed learning (SFL), that amalgamates the two approaches.
SFL provides test accuracy and communication efficiency similar to SL while significantly reducing the computation time per global epoch for multiple clients.
- Score: 16.212941272007285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) and split learning (SL) are two popular distributed
machine learning approaches. Both follow a model-to-data scenario; clients
train and test machine learning models without sharing raw data. SL provides
better model privacy than FL due to the machine learning model architecture
split between clients and the server. Moreover, the split model makes SL a
better option for resource-constrained environments. However, SL performs
slower than FL due to the relay-based training across multiple clients. In this
regard, this paper presents a novel approach, named splitfed learning (SFL),
that amalgamates the two approaches, eliminating their inherent drawbacks, along
with a refined architectural configuration incorporating differential privacy
and PixelDP to enhance data privacy and model robustness. Our analysis and
empirical results demonstrate that (pure) SFL provides test accuracy and
communication efficiency similar to SL while significantly reducing the
computation time per global epoch relative to SL for multiple clients. Furthermore,
as in SL, its communication efficiency over FL improves with the number of
clients. In addition, the performance of SFL with privacy and robustness measures
is further evaluated under extended experimental settings.
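To make the split training flow described above concrete, the following is a minimal PyTorch-style sketch of one SFL round: each client runs the forward pass up to the cut layer, the main server completes the forward and backward passes and returns the cut-layer gradients, and a fed server then averages the client-side models. The layer sizes, optimizers, and toy data are illustrative assumptions, not the paper's reference implementation.

```python
import copy
import torch
import torch.nn as nn

client_part = nn.Sequential(nn.Linear(784, 256), nn.ReLU())   # layers kept on each client
server_part = nn.Sequential(nn.Linear(256, 10))               # layers kept on the main server

def local_step(client_model, server_model, x, y):
    """One client's forward/backward pass, split at the cut layer."""
    smashed = client_model(x)                          # client-side forward up to the cut layer
    smashed_sent = smashed.detach().requires_grad_()   # what actually travels to the server
    loss = nn.functional.cross_entropy(server_model(smashed_sent), y)
    loss.backward()                                    # server-side backward pass
    smashed.backward(smashed_sent.grad)                # client resumes backward with returned grads
    return loss.item()

# Unlike plain SL's sequential relay, clients can run in parallel; afterwards
# a fed server synchronizes the client-side parts with FedAvg.
clients = [copy.deepcopy(client_part) for _ in range(3)]
server_opt = torch.optim.SGD(server_part.parameters(), lr=0.1)
for cm in clients:
    x, y = torch.randn(8, 784), torch.randint(0, 10, (8,))    # toy local batch
    client_opt = torch.optim.SGD(cm.parameters(), lr=0.1)
    local_step(cm, server_part, x, y)
    client_opt.step(); client_opt.zero_grad()
    server_opt.step(); server_opt.zero_grad()

with torch.no_grad():                                  # FedAvg over the client-side models
    for params in zip(*(c.parameters() for c in clients)):
        avg = torch.stack(params).mean(dim=0)
        for p in params:
            p.copy_(avg)
```

Because each client's local step depends only on its own data and the returned cut-layer gradients, the clients can run in parallel, which is where SFL recovers the speed that relay-based SL loses.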
Related papers
- HierSFL: Local Differential Privacy-aided Split Federated Learning in Mobile Edge Computing [7.180235086275924]
Federated Learning is a promising approach for learning from user data while preserving data privacy.
In Split Federated Learning, clients upload their intermediate model training outcomes to a cloud server for collaborative server-client model training.
This methodology facilitates the participation of resource-constrained clients in model training, but it also increases training time and communication overhead.
We propose a novel algorithm, called Hierarchical Split Federated Learning (HierSFL), that amalgamates models at the edge and cloud phases.
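As a rough illustration of aggregating models at both the edge and cloud phases, the sketch below first averages client updates at each edge server and then averages the edge-level models at the cloud; the helper name, toy models, and group sizes are assumptions for demonstration, not HierSFL's exact protocol.

```python
import torch
import torch.nn as nn

def average_state_dicts(state_dicts):
    """Element-wise average of a list of model state_dicts."""
    return {k: torch.stack([sd[k] for sd in state_dicts]).mean(dim=0)
            for k in state_dicts[0]}

# Toy setup: 2 edge servers, each serving 3 clients that train copies of one small model.
clients_per_edge = [[nn.Linear(8, 2).state_dict() for _ in range(3)] for _ in range(2)]

edge_models = [average_state_dicts(updates) for updates in clients_per_edge]  # edge-phase aggregation
global_model = average_state_dicts(edge_models)                               # cloud-phase aggregation
```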
arXiv Detail & Related papers (2024-01-16T09:34:10Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
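The global/personalized split can be pictured with the minimal sketch below, in which only a magnitude-pruned global part would be shared for server-side averaging while a personalized head stays on the device; the module names and pruning ratio are illustrative assumptions rather than the paper's algorithm.

```python
import torch
import torch.nn as nn

class PartitionedModel(nn.Module):
    """Global representation layers plus a personalized head kept on the device."""
    def __init__(self):
        super().__init__()
        self.global_part = nn.Linear(32, 16)      # shared with all devices (after pruning)
        self.personal_part = nn.Linear(16, 4)     # fine-tuned locally, never uploaded

    def forward(self, x):
        return self.personal_part(torch.relu(self.global_part(x)))

def magnitude_prune(weight, ratio=0.5):
    """Zero out the smallest-magnitude entries before sharing."""
    k = int(weight.numel() * ratio)
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)

model = PartitionedModel()
with torch.no_grad():
    model.global_part.weight.copy_(magnitude_prune(model.global_part.weight))
# Only model.global_part.state_dict() would be uploaded for server-side averaging;
# model.personal_part stays on the device for personalization.
```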
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- PFSL: Personalized & Fair Split Learning with Data & Label Privacy for thin clients [0.5144809478361603]
PFSL is a new framework of distributed split learning where a large number of thin clients perform transfer learning in parallel.
We implement a lightweight step of personalization of client models to provide high performance for their respective data distributions.
Our accuracy far exceeds that of current SL algorithms and is very close to that of centralized learning on several real-life benchmarks.
arXiv Detail & Related papers (2023-03-19T10:38:29Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distribution.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
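One way such representation-level distillation with a contrastive loss could look is sketched below, using an InfoNCE-style objective that treats the shared representation of the same sample as the positive; the loss form, temperature, and tensor names are assumptions and not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(student_repr, teacher_repr, temperature=0.1):
    """Row i of both tensors encodes the same sample; matching rows are positives."""
    s = F.normalize(student_repr, dim=1)
    t = F.normalize(teacher_repr, dim=1)
    logits = s @ t.T / temperature             # similarity of every student row to every teacher row
    labels = torch.arange(s.size(0))           # index of the positive (same-sample) teacher row
    return F.cross_entropy(logits, labels)

# Toy usage: a client aligns its representations with shared ones instead of sharing raw data.
student = torch.randn(16, 64, requires_grad=True)
teacher = torch.randn(16, 64)
contrastive_distillation_loss(student, teacher).backward()
```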
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Server-Side Local Gradient Averaging and Learning Rate Acceleration for Scalable Split Learning [82.06357027523262]
Federated learning (FL) and split learning (SL) are two leading approaches, each with its own pros and cons, suited respectively to many user clients and to large models.
In this work, we first identify the fundamental bottlenecks of SL, and thereby propose a scalable SL framework, coined SGLR.
arXiv Detail & Related papers (2021-12-11T08:33:25Z)
- Splitfed learning without client-side synchronization: Analyzing client-side split network portion size to overall performance [4.689140226545214]
Federated Learning (FL), Split Learning (SL), and SplitFed Learning (SFL) are three recent developments in distributed machine learning.
This paper studies SFL without client-side model synchronization (multi-head split learning).
On the MNIST test set, standard SFL provides only 1%-2% better accuracy than this multi-head variant.
arXiv Detail & Related papers (2021-09-19T22:57:23Z)
- Evaluation and Optimization of Distributed Machine Learning Techniques for Internet of Things [34.544836653715244]
Federated learning (FL) and split learning (SL) are state-of-the-art distributed machine learning techniques.
Recently, FL and SL have been combined to form splitfed learning (SFL) to leverage the benefits of each.
This work considers FL, SL, and SFL, and mounts them on Raspberry Pi devices to evaluate their performance.
arXiv Detail & Related papers (2021-03-03T23:55:37Z)
- Advancements of federated learning towards privacy preservation: from federated learning to split learning [1.3700362496838854]
In the distributed collaborative machine learning (DCML) paradigm, federated learning (FL) has recently attracted much attention due to its applications in health, finance, and the latest innovations such as Industry 4.0 and smart vehicles.
In practical scenarios, not all clients have sufficient computing resources (e.g., Internet of Things devices), the machine learning model has millions of parameters, and model privacy between the server and the clients is a prime concern.
Recently, a hybrid of FL and SL, called splitfed learning, has been introduced to elevate the benefits of both FL (faster training/testing time) and SL (model split and
arXiv Detail & Related papers (2020-11-25T05:01:33Z)
- Ensemble Distillation for Robust Model Fusion in Federated Learning [72.61259487233214]
Federated Learning (FL) is a machine learning setting where many devices collaboratively train a machine learning model.
In most current training schemes, the central model is refined by averaging the parameters of the server model with the updated parameters from the client side.
We propose ensemble distillation for model fusion, i.e., training the central classifier on unlabeled data using the outputs of the client models.
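A minimal sketch of this kind of server-side ensemble distillation is given below: the central model is trained on unlabeled data to match the averaged predictions of the received client models. Model sizes, the KL objective, and the toy batch are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

client_models = [nn.Linear(20, 5) for _ in range(4)]   # stand-ins for the models received from clients
central_model = nn.Linear(20, 5)
optimizer = torch.optim.SGD(central_model.parameters(), lr=0.1)

def distillation_step(unlabeled_x):
    """Fit the central model to the clients' averaged predictions on unlabeled data."""
    with torch.no_grad():                               # the client ensemble acts as the teacher
        teacher_probs = torch.stack(
            [F.softmax(m(unlabeled_x), dim=1) for m in client_models]).mean(dim=0)
    student_log_probs = F.log_softmax(central_model(unlabeled_x), dim=1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

distillation_step(torch.randn(32, 20))                  # one step on a toy unlabeled batch
```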
arXiv Detail & Related papers (2020-06-12T14:49:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.