Vulnerability Due to Training Order in Split Learning
- URL: http://arxiv.org/abs/2103.14291v1
- Date: Fri, 26 Mar 2021 06:30:54 GMT
- Title: Vulnerability Due to Training Order in Split Learning
- Authors: Harshit Madaan, Manish Gawali, Viraj Kulkarni, Aniruddha Pant
- Abstract summary: In split learning, an additional privacy-preserving algorithm called the no-peek algorithm can be incorporated, which is robust to adversarial attacks.
We show that the model trained using the data of all clients does not perform well on the data of the client considered earliest in a training round.
We also demonstrate that the SplitFedv3 algorithm mitigates this problem while still leveraging the privacy benefits provided by split learning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Split learning (SL) is a privacy-preserving distributed deep learning method used to train a collaborative model without the need to share patients' raw data between clients. In split learning, an additional privacy-preserving algorithm called the no-peek algorithm can be incorporated, which is robust to adversarial attacks. The privacy benefits offered by split learning make it suitable for practice in the healthcare domain. However, the split learning algorithm is flawed because the collaborative model is trained sequentially, i.e., one client trains after the other. We point out that the model trained using the split learning algorithm becomes biased towards the data of the clients that train towards the end of a round. This makes SL algorithms highly susceptible to the order in which clients are considered for training. We demonstrate that the model trained using the data of all clients does not perform well on the data of the client considered earliest in a round. Moreover, we show that this effect becomes more prominent as the number of clients increases. We also demonstrate that the SplitFedv3 algorithm mitigates this problem while still leveraging the privacy benefits provided by split learning.
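To make the order dependence concrete, below is a minimal sketch of one sequential split-learning round in PyTorch. The layer sizes, cut placement, single shared optimizer, and dummy client tensors are illustrative assumptions rather than the paper's actual configuration (in deployed SL, client-side weights are handed from client to client instead of living in one process).

```python
import torch
import torch.nn as nn

# Placeholder architecture and data, not the paper's setup: clients hold the
# layers before the cut, the server holds the layers after it.
client_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
server_net = nn.Sequential(nn.Linear(64, 2))
params = list(client_net.parameters()) + list(server_net.parameters())
opt = torch.optim.SGD(params, lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Dummy private datasets, one (features, labels) pair per client.
clients = [(torch.randn(16, 32), torch.randint(0, 2, (16,))) for _ in range(4)]

# One round of vanilla split learning: clients train strictly one after the
# other on the same shared weights, so each client's updates partially
# overwrite the previous clients' progress. The finished round therefore
# favours the clients visited last -- the order dependence shown in the paper.
for x, y in clients:                        # iteration order == training order
    smashed = client_net(x)                 # client-side forward to the cut layer
    loss = loss_fn(server_net(smashed), y)  # server completes the forward pass
    opt.zero_grad()
    loss.backward()                         # gradients flow back through the cut
    opt.step()
```

SplitFedv3 sidesteps this sensitivity by averaging across clients instead of applying every client's update strictly in sequence; the exact combination of averaged and client-specific model parts is described in the paper.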
Related papers
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z)
- Love or Hate? Share or Split? Privacy-Preserving Training Using Split Learning and Homomorphic Encryption [47.86010265348072]
Split learning (SL) is a new collaborative learning technique that allows participants to train machine learning models without clients sharing raw data.
Previous works demonstrated that reconstructing activation maps could result in privacy leakage of client data.
In this paper, we improve upon previous works by constructing a protocol based on U-shaped SL that can operate on homomorphically encrypted data.
arXiv Detail & Related papers (2023-09-19T10:56:08Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- A More Secure Split: Enhancing the Security of Privacy-Preserving Split Learning [2.853180143237022]
Split learning (SL) is a new collaborative learning technique that allows participants to train machine learning models without clients sharing raw data.
Previous works demonstrated that reconstructing Activation Maps (AMs) could result in privacy leakage of client data.
In this paper, we improve upon previous works by constructing a protocol based on U-shaped SL that can operate on homomorphically encrypted data.
arXiv Detail & Related papers (2023-09-15T18:39:30Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information, such as membership or properties, or even to reconstruct participant data outright.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Split Ways: Privacy-Preserving Training of Encrypted Data Using Split Learning [6.916134299626706]
Split Learning (SL) is a new collaborative learning technique that allows participants to train machine learning models without clients sharing raw data.
Previous works demonstrated that reconstructing activation maps could result in privacy leakage of client data.
In this paper, we improve upon previous works by constructing a protocol based on U-shaped SL that can operate on homomorphically encrypted data.
arXiv Detail & Related papers (2023-01-20T19:26:51Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response during both inference and backpropagation; the two protocols are contrasted in the sketch after this list.
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- SplitGuard: Detecting and Mitigating Training-Hijacking Attacks in Split Learning [0.0]
Split learning involves dividing a neural network between a client and a server so that the client computes the initial set of layers, and the server computes the rest.
Such training-hijacking attacks present a significant risk for the data privacy of split learning clients.
We propose SplitGuard, a method by which a split learning client can detect whether it is being targeted by a training-hijacking attack.
arXiv Detail & Related papers (2021-08-20T08:29:22Z)
- Can We Use Split Learning on 1D CNN Models for Privacy Preserving Training? [31.618237059436346]
A new collaborative learning technique, called split learning, was recently introduced, aiming to protect user data privacy without revealing raw input data to the server.
This paper examines whether split learning can be used to perform privacy-preserving training for 1D convolutional neural network (CNN) models.
arXiv Detail & Related papers (2020-03-16T06:06:14Z)
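The FL-versus-SL contrast referenced in the "Scalable Collaborative Learning via Representation Sharing" entry above can be sketched as follows: unlike the sequential split-learning round shown earlier, each client here trains a private copy of the model from the same starting weights, and the server averages the results (a FedAvg-style step), so the aggregate is invariant to client order. The model, data, and single local step are assumptions for illustration, not any cited paper's implementation.

```python
import copy
import torch
import torch.nn as nn

# Placeholder global model and dummy per-client datasets.
global_model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
clients = [(torch.randn(16, 32), torch.randint(0, 2, (16,))) for _ in range(4)]

# One FL round: every client trains a private copy locally, then the server
# averages the resulting weights with equal client weighting.
local_states = []
for x, y in clients:
    local = copy.deepcopy(global_model)      # client receives the global model
    opt = torch.optim.SGD(local.parameters(), lr=0.1)
    opt.zero_grad()
    loss_fn(local(x), y).backward()          # local training on private data
    opt.step()
    local_states.append(local.state_dict())  # only weights leave the client

# Server-side aggregation: parameter-wise mean of the client models.
avg_state = {
    k: torch.stack([s[k] for s in local_states]).mean(dim=0)
    for k in local_states[0]
}
global_model.load_state_dict(avg_state)
```

In SL, by contrast, only cut-layer activations and their gradients cross the network, as in the earlier sketch; the cited paper replaces both kinds of exchange with online knowledge distillation using a contrastive loss.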