SplitGuard: Detecting and Mitigating Training-Hijacking Attacks in Split
Learning
- URL: http://arxiv.org/abs/2108.09052v2
- Date: Mon, 23 Aug 2021 05:28:50 GMT
- Title: SplitGuard: Detecting and Mitigating Training-Hijacking Attacks in Split
Learning
- Authors: Ege Erdogan, Alptekin Kupcu, A. Ercument Cicek
- Abstract summary: Split learning involves dividing a neural network between a client and a server so that the client computes the initial set of layers, and the server computes the rest.
Training-hijacking attacks, in which a malicious server steers the client model toward a task of its choice, present a significant risk to the data privacy of split learning clients.
We propose SplitGuard, a method by which a split learning client can detect whether it is being targeted by a training-hijacking attack or not.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distributed deep learning frameworks, such as split learning, have recently
been proposed to enable a group of participants to collaboratively train a deep
neural network without sharing their raw data. Split learning in particular
achieves this goal by dividing a neural network between a client and a server
so that the client computes the initial set of layers, and the server computes
the rest. However, this method introduces a unique attack vector for a
malicious server attempting to steal the client's private data: the server can
direct the client model towards learning a task of its choice. With a concrete
example already proposed, such training-hijacking attacks present a significant
risk to the data privacy of split learning clients.
In this paper, we propose SplitGuard, a method by which a split learning
client can detect whether it is being targeted by a training-hijacking attack
or not. We experimentally evaluate its effectiveness, and discuss in detail
various points related to its use. We conclude that SplitGuard can effectively
detect training-hijacking attacks while minimizing the amount of information
recovered by the adversaries.
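The attack surface described in the abstract is easiest to see in the protocol's gradient exchange. The following is a minimal, single-process PyTorch sketch of one split learning training step; the layer split, label placement, and hyperparameters are illustrative assumptions rather than the paper's experimental setup. The only training signal the client receives is the gradient the server returns for the intermediate activations, which is exactly the channel a training-hijacking server can fill with gradients of an objective of its own choosing.

```python
# Minimal single-process simulation of split learning (illustrative only; the
# layer split, model sizes, and optimizer settings are assumptions, not the
# paper's exact setup).
import torch
import torch.nn as nn

# Client holds the first layers; server holds the rest plus the loss.
client_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
server_model = nn.Sequential(nn.Linear(256, 10))

client_opt = torch.optim.SGD(client_model.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def training_step(x, y):
    client_opt.zero_grad()
    server_opt.zero_grad()

    # Client: forward pass through its layers, send the intermediate
    # ("smashed") activations to the server.
    smashed = client_model(x)
    sent = smashed.detach().requires_grad_(True)  # what crosses the network

    # Server: finish the forward pass, compute the loss, backpropagate,
    # and return the gradient w.r.t. the smashed data to the client.
    out = server_model(sent)
    loss = loss_fn(out, y)
    loss.backward()
    server_opt.step()
    grad_for_client = sent.grad  # attack surface: a malicious server can
                                 # return gradients of *any* objective here

    # Client: continue backpropagation using only the server's gradient.
    smashed.backward(grad_for_client)
    client_opt.step()
    return loss.item()

# Example: one step on a random batch standing in for the client's private data.
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print(training_step(x, y))
```

Because the client never sees the server's loss or labels, it cannot tell from the protocol alone whether `grad_for_client` comes from the agreed-upon task or from an attacker's objective; this is the gap SplitGuard aims to detect.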
Related papers
- Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective in protecting against client-side training data distribution inference attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z) - FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
However, it is vulnerable to model poisoning attacks, in which malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z) - SplitOut: Out-of-the-Box Training-Hijacking Detection in Split Learning via Outlier Detection [0.0]
Split learning enables efficient and privacy-aware training of a deep neural network by splitting the network so that the clients (data holders) compute the first layers and share only the intermediate output with the central, compute-heavy server.
The server has full control over what the client models learn, which has already been exploited to infer clients' private data and to implant backdoors in the client models.
We show that, given modest assumptions about the clients' compute capabilities, an out-of-the-box detection method can detect existing training-hijacking attacks with almost-zero false positive rates (a minimal sketch of this idea appears after the related-papers list below).
arXiv Detail & Related papers (2023-02-16T23:02:39Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - FLCert: Provably Secure Federated Learning against Poisoning Attacks [67.8846134295194]
We propose FLCert, an ensemble federated learning framework that is provably secure against poisoning attacks.
Our experiments show that the label predicted by our FLCert for a test input is provably unaffected by a bounded number of malicious clients.
arXiv Detail & Related papers (2022-10-02T17:50:04Z) - Network-Level Adversaries in Federated Learning [21.222645649379672]
We study the impact of network-level adversaries on training federated learning models.
We show that attackers dropping the network traffic from carefully selected clients can significantly decrease model accuracy on a target population.
We develop a server-side defense which mitigates the impact of our attacks by identifying and up-sampling clients likely to positively contribute towards target accuracy.
arXiv Detail & Related papers (2022-08-27T02:42:04Z) - Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets [53.866927712193416]
We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak private details belonging to other parties.
Our attacks are effective across membership inference, attribute inference, and data extraction.
Our results cast doubts on the relevance of cryptographic privacy guarantees in multiparty protocols for machine learning.
arXiv Detail & Related papers (2022-03-31T18:06:28Z) - UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label
Inference Attacks Against Split Learning [0.0]
The split learning framework splits the model between the client and the server.
We show that the split learning paradigm can pose serious security risks and provide no more than a false sense of security.
arXiv Detail & Related papers (2021-08-20T07:39:16Z) - Vulnerability Due to Training Order in Split Learning [0.0]
In split learning, an additional privacy-preserving algorithm called the no-peek algorithm, which is robust to adversarial attacks, can be incorporated.
We show that a model trained using the data of all clients does not perform well on the data of the client considered earliest in a training round.
We also demonstrate that the SplitFedv3 algorithm mitigates this problem while still leveraging the privacy benefits provided by split learning.
arXiv Detail & Related papers (2021-03-26T06:30:54Z) - Unleashing the Tiger: Inference Attacks on Split Learning [2.492607582091531]
We introduce general attack strategies targeting the reconstruction of clients' private training sets.
A malicious server can actively hijack the learning process of the distributed model.
We demonstrate our attack is able to overcome recently proposed defensive techniques.
arXiv Detail & Related papers (2020-12-04T15:41:00Z) - Sampling Attacks: Amplification of Membership Inference Attacks by
Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, can operate under the severe restriction of having no access to the victim model's scores.
We show that a victim model that publishes only labels is still susceptible to sampling attacks, and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
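As a companion to the SplitOut entry above, here is one hypothetical way a client with modest local compute could apply an out-of-the-box outlier detector to the gradients returned by the server. This is a sketch of the general idea, not the exact SplitOut or SplitGuard procedure: the helper names (fit_gradient_detector, is_suspicious) and the synthetic data are assumptions, while LocalOutlierFactor is scikit-learn's standard novelty detector.

```python
# Hypothetical outlier-detection check on server-returned gradients
# (a sketch of the idea behind out-of-the-box training-hijacking detection,
# not the exact SplitOut or SplitGuard procedure).
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def fit_gradient_detector(honest_grads):
    """honest_grads: (n_batches, d) cut-layer gradients collected from a small
    local simulation of honest training on the client's own data."""
    detector = LocalOutlierFactor(n_neighbors=20, novelty=True)
    detector.fit(honest_grads)
    return detector

def is_suspicious(detector, server_grad):
    """Returns True if the gradient the server sent back looks unlike any
    gradient produced by honest training (predict() yields -1 for outliers)."""
    return detector.predict(server_grad.reshape(1, -1))[0] == -1

# Toy example with synthetic gradient vectors standing in for real ones.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 1.0, size=(200, 64))      # honest-looking gradients
detector = fit_gradient_detector(honest)
benign = rng.normal(0.0, 1.0, size=64)
hijacked = rng.normal(5.0, 0.1, size=64)            # drastically different
print(is_suspicious(detector, benign), is_suspicious(detector, hijacked))
```

Fitting the detector on gradients from a small honest local run reflects the "modest compute assumptions" mentioned in the SplitOut entry; no claim is made here about matching the false positive rates reported in the papers.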
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.