Unleashing the Tiger: Inference Attacks on Split Learning
- URL: http://arxiv.org/abs/2012.02670v3
- Date: Fri, 14 May 2021 19:08:20 GMT
- Title: Unleashing the Tiger: Inference Attacks on Split Learning
- Authors: Dario Pasquini, Giuseppe Ateniese and Massimo Bernaschi
- Abstract summary: We introduce general attack strategies targeting the reconstruction of clients' private training sets.
A malicious server can actively hijack the learning process of the distributed model.
We demonstrate our attack is able to overcome recently proposed defensive techniques.
- Score: 2.492607582091531
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate the security of Split Learning -- a novel collaborative
machine learning framework that enables peak performance while requiring minimal
resource consumption. In the present paper, we expose vulnerabilities of the
protocol and demonstrate its inherent insecurity by introducing general attack
strategies targeting the reconstruction of clients' private training sets. More
prominently, we show that a malicious server can actively hijack the learning
process of the distributed model and bring it into an insecure state that
enables inference attacks on clients' data. We implement different adaptations
of the attack and test them on various datasets as well as within realistic
threat scenarios. We demonstrate that our attack is able to overcome recently
proposed defensive techniques aimed at enhancing the security of the split
learning protocol. Finally, we also illustrate the protocol's insecurity
against malicious clients by extending previously devised attacks for Federated
Learning. To make our results reproducible, we made our code available at
https://github.com/pasquini-dario/SplitNN_FSHA.
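The abstract describes the server-side hijacking only at a high level. As a complement, the following is a minimal, illustrative sketch (PyTorch) of the underlying lever: in split learning the server controls the gradient returned to the client at the cut layer, so a malicious server can silently replace the task gradient with the gradient of its own adversarial objective and steer the client's encoder toward a feature space it knows how to invert. The network sizes, the shadow-encoder/decoder/discriminator setup, and the GAN-style losses below are assumptions made for illustration, not the authors' exact construction; their implementation is available in the repository linked above.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
DIM_IN, DIM_Z = 784, 64  # e.g. flattened 28x28 images, 64-dimensional smashed data

# Client-side split: the first layers, kept on the client, see the private data.
f = nn.Sequential(nn.Linear(DIM_IN, 256), nn.ReLU(), nn.Linear(256, DIM_Z))

# Attacker-side (malicious server) networks: a shadow encoder with the same output
# shape as f, a decoder that inverts the shadow feature space, and a discriminator.
f_shadow = nn.Sequential(nn.Linear(DIM_IN, 256), nn.ReLU(), nn.Linear(256, DIM_Z))
decoder = nn.Sequential(nn.Linear(DIM_Z, 256), nn.ReLU(), nn.Linear(256, DIM_IN))
disc = nn.Sequential(nn.Linear(DIM_Z, 128), nn.ReLU(), nn.Linear(128, 1))

opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)  # updated with server-supplied gradients
opt_rec = torch.optim.Adam(list(f_shadow.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x_private = torch.rand(32, DIM_IN)  # client data: never leaves the client
x_public = torch.rand(32, DIM_IN)   # attacker-owned data from a similar domain

for step in range(200):
    # 1) Client: forward pass up to the cut layer; the smashed data is sent to the server.
    z = f(x_private)
    z_wire = z.detach()  # what actually travels over the network

    # 2) Server: train shadow encoder + decoder on public data so that the shadow
    #    feature space is invertible.
    rec_loss = F.mse_loss(decoder(f_shadow(x_public)), x_public)
    opt_rec.zero_grad()
    rec_loss.backward()
    opt_rec.step()

    # 3) Server: train a discriminator to tell shadow features from client features.
    d_loss = bce(disc(f_shadow(x_public).detach()), torch.ones(32, 1)) + \
             bce(disc(z_wire), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 4) Server: instead of a task gradient, return the gradient of an adversarial loss
    #    that pushes the client's features toward the invertible shadow space.
    z_adv = z_wire.clone().requires_grad_(True)
    hijack_loss = bce(disc(z_adv), torch.ones(32, 1))
    grad_back, = torch.autograd.grad(hijack_loss, z_adv)

    # 5) Client: applies the received gradient, believing it came from the task loss.
    opt_f.zero_grad()
    z.backward(grad_back)
    opt_f.step()

# 6) Inference attack: once the feature spaces are aligned, the server reconstructs
#    private inputs directly from the intercepted smashed data.
with torch.no_grad():
    x_reconstructed = decoder(f(x_private))
print("reconstruction MSE:", F.mse_loss(x_reconstructed, x_private).item())
```
The crucial step is (4): the client has no way to verify that the gradient it receives at the cut layer was produced by a legitimate task loss rather than an attacker-chosen objective, which is what makes the hijacking possible.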
Related papers
- Dullahan: Stealthy Backdoor Attack against Without-Label-Sharing Split Learning [29.842087372804905]
We propose a stealthy backdoor attack strategy tailored to the without-label-sharing split learning architecture.
Our attack (SBAT) achieves a higher level of stealthiness by refraining from modifying any intermediate parameters during training.
arXiv Detail & Related papers (2024-05-21T13:03:06Z)
- Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective in protecting against client-side training data distribution inference attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
However, it is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Backdoor Attacks in Peer-to-Peer Federated Learning [11.235386862864397]
Peer-to-Peer Federated Learning (P2PFL) offers advantages in terms of both privacy and reliability.
We propose new backdoor attacks for P2PFL that leverage structural graph properties to select the malicious nodes, and achieve high attack success.
arXiv Detail & Related papers (2023-01-23T21:49:28Z)
- Network-Level Adversaries in Federated Learning [21.222645649379672]
We study the impact of network-level adversaries on training federated learning models.
We show that attackers dropping the network traffic from carefully selected clients can significantly decrease model accuracy on a target population.
We develop a server-side defense which mitigates the impact of our attacks by identifying and up-sampling clients likely to positively contribute towards target accuracy.
arXiv Detail & Related papers (2022-08-27T02:42:04Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Learning and Certification under Instance-targeted Poisoning [49.55596073963654]
We study PAC learnability and certification under instance-targeted poisoning attacks.
We show that when the budget of the adversary scales sublinearly with the sample complexity, PAC learnability and certification are achievable.
We empirically study the robustness of K nearest neighbour, logistic regression, multi-layer perceptron, and convolutional neural network on real data sets.
arXiv Detail & Related papers (2021-05-18T17:48:15Z)
- Practical Defences Against Model Inversion Attacks for Split Neural Networks [5.66430335973956]
We describe a threat model under which a split network-based federated learning system is susceptible to a model inversion attack by a malicious computational server.
We propose a simple additive noise method to defend against model inversion, finding that it can significantly reduce attack efficacy at an acceptable accuracy trade-off on MNIST; a minimal sketch of this kind of noise masking appears after this list.
arXiv Detail & Related papers (2021-04-12T18:12:17Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
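The "Practical Defences Against Model Inversion Attacks for Split Neural Networks" entry above proposes masking the smashed data with additive noise before it leaves the client. The sketch below is a minimal illustration under assumed choices (a toy client network, Laplacian noise with an arbitrary scale) rather than the exact mechanism evaluated in that paper; note also that the main abstract above reports that recently proposed split-learning defences can be overcome by its attack, so such masking should not be read as a complete remedy.
```python
# Minimal sketch of the additive-noise defence summarised above: the client perturbs the
# smashed data before transmission, trading some task accuracy for lower inversion
# fidelity.  The Laplacian noise, its scale, and the toy network are illustrative
# assumptions, not the exact mechanism from the cited paper.
import torch
import torch.nn as nn

client_net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))

def noisy_smashed_data(x: torch.Tensor, scale: float = 0.5) -> torch.Tensor:
    """Forward pass up to the cut layer, then add zero-mean Laplacian noise."""
    z = client_net(x)
    noise = torch.distributions.Laplace(0.0, scale).sample(z.shape)
    return z + noise  # only the noisy activations ever leave the client

x_batch = torch.rand(8, 784)
z_sent = noisy_smashed_data(x_batch)  # all a (possibly malicious) server observes
```
Larger noise scales degrade the server's reconstruction quality but also reduce task accuracy, which is the trade-off the cited paper measures on MNIST.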
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.