Can We Use Split Learning on 1D CNN Models for Privacy Preserving
Training?
- URL: http://arxiv.org/abs/2003.12365v1
- Date: Mon, 16 Mar 2020 06:06:14 GMT
- Title: Can We Use Split Learning on 1D CNN Models for Privacy Preserving
Training?
- Authors: Sharif Abuadbba, Kyuyeon Kim, Minki Kim, Chandra Thapa, Seyit A.
Camtepe, Yansong Gao, Hyoungshick Kim, Surya Nepal
- Abstract summary: A new collaborative learning technique, called split learning, was recently introduced, aiming to protect user data privacy without revealing raw input data to a server.
This paper examines whether split learning can be used to perform privacy-preserving training for 1D convolutional neural network (CNN) models.
- Score: 31.618237059436346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A new collaborative learning technique, called split learning, was
recently introduced, aiming to protect user data privacy without revealing raw
input data to a server. It collaboratively runs a deep neural network model
that is split into two parts: one for the client and the other for the server.
Therefore, the server has no direct access to the raw data processed at the
client. Until now, split learning has been believed to be a promising approach
to protecting the client's raw data; for example, the client's data was
protected in
healthcare image applications using 2D convolutional neural network (CNN)
models. However, it is still unclear whether the split learning can be applied
to other deep learning models, in particular, 1D CNN.
In this paper, we examine whether split learning can be used to perform
privacy-preserving training for 1D CNN models. To answer this, we first design
and implement a 1D CNN model under split learning and validate its efficacy in
detecting heart abnormalities using medical ECG data. We observed that the 1D
CNN model under split learning can achieve the same accuracy of 98.9% as the
original (non-split) model. However, our evaluation demonstrates that split
learning may fail to protect raw data privacy in 1D CNN models. To address
the observed privacy leakage in split learning, we adopt two privacy leakage
mitigation techniques: 1) adding more hidden layers to the client side and 2)
applying differential privacy. Although those mitigation techniques are helpful
in reducing privacy leakage, they have a significant impact on model accuracy.
Hence, based on those results, we conclude that split learning alone would not
be sufficient to maintain the confidentiality of raw sequential data in 1D CNN
models.
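
To make the setup concrete, below is a minimal PyTorch sketch of a 1D CNN split into a client part and a server part, with an optional Laplace-noise perturbation of the transmitted activations standing in for the differential-privacy mitigation mentioned in the abstract. The layer sizes, the cut point, the noise scale, and the use of a single optimizer over both halves are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ClientNet(nn.Module):
    """Client-side layers: raw ECG segments stay here; only activations leave."""
    def __init__(self, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3),
            nn.LeakyReLU(),
            nn.MaxPool1d(2),
        )

    def forward(self, x):
        return self.features(x)

class ServerNet(nn.Module):
    """Server-side layers: receives the 'smashed data' and finishes the model."""
    def __init__(self, num_classes=5, seq_len=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.LeakyReLU(),
            nn.MaxPool1d(2),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(32 * (seq_len // 4), num_classes)

    def forward(self, smashed):
        return self.classifier(self.features(smashed))

def add_laplace_noise(activations, scale=0.1):
    """One possible DP-style mitigation (an assumption, not the paper's exact
    mechanism): perturb the activations before they are sent to the server.
    The scale trades privacy leakage against accuracy."""
    noise = torch.distributions.Laplace(0.0, scale).sample(activations.shape)
    return activations + noise

# One training step with a single client and toy data.
client, server = ClientNet(), ServerNet(num_classes=5, seq_len=128)
opt = torch.optim.Adam(list(client.parameters()) + list(server.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 128)           # batch of 8 ECG segments, 128 samples each
y = torch.randint(0, 5, (8,))        # 5 heartbeat classes

smashed = client(x)                  # computed on the client
smashed = add_laplace_noise(smashed) # optional mitigation before transmission
logits = server(smashed)             # computed on the server
loss = criterion(logits, y)

opt.zero_grad()
loss.backward()                      # gradients flow back across the cut layer
opt.step()
```

In a deployed split-learning system the two halves would run on separate machines with their own optimizers, exchanging the smashed data and the gradient at the cut layer over the network; the single process above is only to show where the cut sits and where a mitigation such as activation noise could be applied.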
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable, posing a serious social threat.
Traditional forgery detection methods train directly on centralized data.
The paper proposes a novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Love or Hate? Share or Split? Privacy-Preserving Training Using Split Learning and Homomorphic Encryption [47.86010265348072]
Split learning (SL) is a new collaborative learning technique that allows participants to train machine learning models without the client sharing raw data.
Previous works demonstrated that reconstructing activation maps could result in privacy leakage of client data.
In this paper, we improve upon previous works by constructing a protocol based on U-shaped SL that can operate on homomorphically encrypted data.
arXiv Detail & Related papers (2023-09-19T10:56:08Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership, property, or outright reconstruction of participant data.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Split Ways: Privacy-Preserving Training of Encrypted Data Using Split Learning [6.916134299626706]
Split Learning (SL) is a new collaborative learning technique that allows participants to train machine learning models without the client sharing raw data.
Previous works demonstrated that reconstructing activation maps could result in privacy leakage of client data.
In this paper, we improve upon previous works by constructing a protocol based on U-shaped SL that can operate on homomorphically encrypted data.
arXiv Detail & Related papers (2023-01-20T19:26:51Z)
- Self-Ensemble Protection: Training Checkpoints Are Good Data Protectors [41.45649235969172]
Self-ensemble protection (SEP) is proposed to prevent good models from being trained on the data.
SEP is verified to be a new state of the art; e.g., our small perturbations reduce the accuracy of a CIFAR-10 ResNet18 from 94.56% to 14.68%, compared to 41.35% by the best-known method.
arXiv Detail & Related papers (2022-11-22T04:54:20Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets [53.866927712193416]
We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak private details belonging to other parties.
Our attacks are effective across membership inference, attribute inference, and data extraction.
Our results cast doubts on the relevance of cryptographic privacy guarantees in multiparty protocols for machine learning.
arXiv Detail & Related papers (2022-03-31T18:06:28Z)
- When Accuracy Meets Privacy: Two-Stage Federated Transfer Learning Framework in Classification of Medical Images on Limited Data: A COVID-19 Case Study [77.34726150561087]
The COVID-19 pandemic has spread rapidly and caused a shortage of global medical resources.
CNNs have been widely utilized and verified in analyzing medical images.
arXiv Detail & Related papers (2022-03-24T02:09:41Z)
- UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning [0.0]
The split learning framework aims to split the model between the client and the server.
We show that the split learning paradigm can pose serious security risks and provide no more than a false sense of security.
arXiv Detail & Related papers (2021-08-20T07:39:16Z)
- Vulnerability Due to Training Order in Split Learning [0.0]
In split learning, an additional privacy-preserving algorithm called the no-peek algorithm can be incorporated, which is robust to adversarial attacks.
We show that a model trained using the data of all clients does not perform well on the data of the client considered earliest in a training round.
We also demonstrate that the SplitFedv3 algorithm mitigates this problem while still leveraging the privacy benefits provided by split learning.
arXiv Detail & Related papers (2021-03-26T06:30:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.