UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label
Inference Attacks Against Split Learning
- URL: http://arxiv.org/abs/2108.09033v1
- Date: Fri, 20 Aug 2021 07:39:16 GMT
- Title: UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label
Inference Attacks Against Split Learning
- Authors: Ege Erdogan, Alptekin Kupcu, A. Ercument Cicek
- Abstract summary: The split learning framework splits the model between the client and the server.
We show that the split learning paradigm can pose serious security risks and provide no more than a false sense of security.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training deep neural networks requires large-scale data, which often
forces users to work in a distributed or outsourced setting, accompanied by
privacy concerns. The split learning framework aims to address this concern by
splitting the model between the client and the server. The idea is that since
the server does not have access to the client's part of the model, the scheme
supposedly provides privacy. We show that this is not true via two novel
attacks. (1) We
show that an honest-but-curious split learning server, equipped only with the
knowledge of the client neural network architecture, can recover the input
samples and also obtain a functionally similar model to the client model,
without the client being able to detect the attack. (2) Furthermore, we show
that if split learning is used naively to protect the training labels, the
honest-but-curious server can infer the labels with perfect accuracy. We test
our attacks using three benchmark datasets and investigate various properties
of the overall system that affect the attacks' effectiveness. Our results show
that the plaintext split learning paradigm can pose serious security risks and
provide no more than a false sense of security.
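The scheme the abstract describes can be made concrete with a short sketch. The code below is an illustration, not the paper's implementation: the network architecture, the split point, the matching loss, and the optimizer settings are all assumptions. Part (a) shows the split learning data flow (the client runs the first layers and sends only the intermediate "smashed" activations to the server); part (b) shows the core idea behind attack (1), where an honest-but-curious server, knowing only the client architecture, optimizes a dummy input together with a surrogate client model until the surrogate reproduces the observed activations.
```python
# (a) Split learning data flow and (b) a server-side inversion/stealing sketch.
# All architectural and optimization choices here are illustrative assumptions.
import torch
import torch.nn as nn

def make_client_part():
    # First layers of the network; the client holds the weights,
    # but the server is assumed to know this architecture.
    return nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))

client_model = make_client_part()                                        # client side
server_model = nn.Sequential(nn.Flatten(), nn.Linear(8 * 14 * 14, 10))   # server side

# (a) Honest protocol: the client sends only the intermediate "smashed" output.
x_private = torch.rand(1, 1, 28, 28)          # client's private input
smashed = client_model(x_private).detach()    # the only thing the server observes
logits = server_model(smashed)                # server continues the forward pass

# (b) Honest-but-curious server: jointly optimize a dummy input and a surrogate
# copy of the client network so that the surrogate reproduces the smashed data.
surrogate = make_client_part()                # same architecture, fresh weights
x_guess = torch.rand(1, 1, 28, 28, requires_grad=True)
opt = torch.optim.Adam([x_guess, *surrogate.parameters()], lr=1e-2)

for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(x_guess), smashed)
    loss.backward()
    opt.step()
    with torch.no_grad():
        x_guess.clamp_(0.0, 1.0)              # keep the guess in a valid pixel range

# x_guess now approximates x_private, and surrogate behaves like client_model
# on such inputs -- without the client ever seeing anything unusual.
```
Nothing in this exchange lets the client distinguish an attacking server from an honest one, which is the point the abstract makes about a false sense of security.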
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods directly perform centralized training on data.
The paper proposes a novel federated face forgery detection framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective in protecting against client-side training data distribution inference attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information about participants, such as membership or properties, or even to reconstruct their data outright.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
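As a purely illustrative reading of the sentence above (none of the modeling choices below come from the paper): an attacker who observes aggregated model updates and can label some updates with the target property, for example from simulated or shadow training runs, could fit an ordinary linear classifier to them.
```python
# Hypothetical sketch: property inference from aggregated model updates with a
# simple linear model. The data here is synthetic placeholder material; in a
# real attack the labeled updates would come from shadow/simulated runs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
updates = rng.normal(size=(500, 1024))        # flattened aggregated model updates
has_property = rng.integers(0, 2, size=500)   # whether the target client's data has the property

attack = LogisticRegression(max_iter=1000).fit(updates[:400], has_property[:400])
print("attack accuracy:", attack.score(updates[400:], has_property[400:]))
```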
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- SplitOut: Out-of-the-Box Training-Hijacking Detection in Split Learning via Outlier Detection [0.0]
Split learning enables efficient and privacy-aware training of a deep neural network by splitting a neural network so that the clients (data holders) compute the first layers and only share the intermediate output with the central compute-heavy server.
The server has full control over what the client models learn, which has already been exploited to infer the private data of clients and to implement backdoors in the client models.
We show that given modest assumptions regarding the clients' compute capabilities, an out-of-the-box detection method can be used to detect existing training-hijacking attacks with almost-zero false positive rates.
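One plausible way to read this, sketched below with details that are assumptions rather than the paper's procedure: a client with spare compute runs a few honest training steps against a locally simulated server, collects the gradients such honest training produces, fits an off-the-shelf novelty detector on them (scikit-learn's LocalOutlierFactor is used here purely for illustration), and flags server-supplied gradients that the detector marks as outliers.
```python
# Hypothetical sketch of out-of-the-box training-hijacking detection:
# fit a standard novelty detector on gradients from locally simulated honest
# training, then flag the gradients actually returned by the remote server.
# The detector choice, features, and shapes are all illustrative assumptions.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Flattened gradient vectors collected from the client's local simulation
# of honest training (placeholder random data here).
honest_grads = np.random.randn(200, 512)

detector = LocalOutlierFactor(n_neighbors=20, novelty=True)
detector.fit(honest_grads)

# Flattened gradients received from the real server (placeholder data).
incoming_grads = np.random.randn(10, 512)
flags = detector.predict(incoming_grads)      # -1 = outlier (possible hijacking)
print("suspicious batches:", int((flags == -1).sum()))
```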
arXiv Detail & Related papers (2023-02-16T23:02:39Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Network-Level Adversaries in Federated Learning [21.222645649379672]
We study the impact of network-level adversaries on training federated learning models.
We show that attackers dropping the network traffic from carefully selected clients can significantly decrease model accuracy on a target population.
We develop a server-side defense which mitigates the impact of our attacks by identifying and up-sampling clients likely to positively contribute towards target accuracy.
arXiv Detail & Related papers (2022-08-27T02:42:04Z)
- Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets [53.866927712193416]
We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak private details belonging to other parties.
Our attacks are effective across membership inference, attribute inference, and data extraction.
Our results cast doubt on the relevance of cryptographic privacy guarantees in multiparty protocols for machine learning.
arXiv Detail & Related papers (2022-03-31T18:06:28Z)
- SplitGuard: Detecting and Mitigating Training-Hijacking Attacks in Split Learning [0.0]
Split learning involves dividing a neural network between a client and a server so that the client computes the initial set of layers, and the server computes the rest.
Training-hijacking attacks, in which a malicious server steers the client model toward the server's own objectives, present a significant risk for the data privacy of split learning clients.
We propose SplitGuard, a method by which a split learning client can detect whether it is being targeted by a training-hijacking attack or not.
arXiv Detail & Related papers (2021-08-20T08:29:22Z)
- Fidel: Reconstructing Private Training Samples from Weight Updates in Federated Learning [0.0]
We evaluate a novel attack method within regular federated learning which we name the First Dense Layer Attack (Fidel).
We show how to recover on average twenty out of thirty private data samples from a client's model update employing a fully connected neural network.
arXiv Detail & Related papers (2021-01-01T04:00:23Z)