Secure Split Learning against Property Inference, Data Reconstruction,
and Feature Space Hijacking Attacks
- URL: http://arxiv.org/abs/2304.09515v1
- Date: Wed, 19 Apr 2023 09:08:23 GMT
- Title: Secure Split Learning against Property Inference, Data Reconstruction,
and Feature Space Hijacking Attacks
- Authors: Yunlong Mao, Zexi Xin, Zhenyu Li, Jue Hong, Qingyou Yang, Sheng Zhong
- Abstract summary: Split learning of deep neural networks (SplitNN) has provided a promising solution to joint learning in the mutual interest of a guest and a host.
However, SplitNN creates a new attack surface for an adversarial participant, holding back its practical use in the real world.
This paper investigates the adversarial effects of highly threatening attacks, including property inference, data reconstruction, and feature hijacking attacks.
We propose a new activation function named R3eLU, which transforms private smashed data and partial loss into randomized responses.
- Score: 5.209316363034367
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Split learning of deep neural networks (SplitNN) has provided a promising
solution to joint learning in the mutual interest of a guest and a host, who
may come from different backgrounds and hold vertically partitioned features.
However, SplitNN creates a new attack surface for an adversarial participant,
holding back its practical use in the real world. By investigating
the adversarial effects of highly threatening attacks, including property
inference, data reconstruction, and feature hijacking attacks, we identify the
underlying vulnerability of SplitNN and propose a countermeasure. To prevent
potential threats and ensure the learning guarantees of SplitNN, we design a
privacy-preserving tunnel for information exchange between the guest and the
host. The intuition is to perturb the propagation of knowledge in each
direction with a controllable unified solution. To this end, we propose a new
activation function named R3eLU, which transforms the private smashed data and partial
loss into randomized responses in the forward and backward propagations,
respectively. We make the first attempt to secure split learning against three
threatening attacks and present a fine-grained privacy budget allocation
scheme. The analysis proves that our privacy-preserving SplitNN solution
provides a tight privacy budget, while the experimental results show that our
solution performs better than existing solutions in most cases and achieves a
good tradeoff between defense and model usability.
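The abstract does not spell out how R3eLU is constructed, but the idea of turning both exchanged quantities (the forward smashed data and the backward partial loss) into randomized responses can be illustrated with a short, hypothetical PyTorch sketch. The class name R3eLUSketch, the retention probability keep_prob, and the Laplace noise_scale below are illustrative assumptions rather than the paper's actual design; a faithful implementation would also allocate the privacy budget across rounds following the paper's fine-grained scheme.

```python
# Hypothetical sketch of a randomized-response activation at the split point.
# Assumptions (not from the paper): the guest applies a ReLU-like activation,
# keeps each unit with probability keep_prob, and adds Laplace noise of scale
# noise_scale; the backward partial loss is randomized the same way.
import torch


class R3eLUSketch(torch.autograd.Function):
    keep_prob = 0.9      # illustrative retention probability
    noise_scale = 0.1    # illustrative Laplace noise scale

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        out = torch.relu(x)
        # Forward direction: send the smashed data as a randomized response.
        mask = (torch.rand_like(out) < R3eLUSketch.keep_prob).float()
        noise = torch.distributions.Laplace(
            torch.zeros_like(out), R3eLUSketch.noise_scale).sample()
        return mask * out + noise

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[x < 0] = 0.0  # usual ReLU gradient gating
        # Backward direction: randomize the partial loss returned to the guest.
        mask = (torch.rand_like(grad_input) < R3eLUSketch.keep_prob).float()
        noise = torch.distributions.Laplace(
            torch.zeros_like(grad_input), R3eLUSketch.noise_scale).sample()
        return mask * grad_input + noise


# Usage at the split point: the guest sends `smashed` to the host instead of
# its raw intermediate features; gradients reaching the guest are randomized too.
guest_features = torch.randn(8, 16, requires_grad=True)
smashed = R3eLUSketch.apply(guest_features)
smashed.sum().backward()
```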
Related papers
- SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks in Split Learning (Full Version) [53.16528046390881]
Split Learning (SL) is a distributed deep learning approach enabling multiple clients and a server to collaboratively train and infer on a shared deep neural network (DNN).
This paper presents SafeSplit, the first defense against client-side backdoor attacks in Split Learning (SL).
It uses a two-fold analysis to identify client-induced changes and detect poisoned models.
arXiv Detail & Related papers (2025-01-11T22:20:20Z) - Edge-Only Universal Adversarial Attacks in Distributed Learning [49.546479320670464]
In this work, we explore the feasibility of generating universal adversarial attacks when an attacker has access to the edge part of the model only.
Our approach shows that adversaries can induce effective mispredictions in the unknown cloud part by leveraging key features on the edge side.
Our results on ImageNet demonstrate strong attack transferability to the unknown cloud part.
arXiv Detail & Related papers (2024-11-15T11:06:24Z) - Investigating Privacy Attacks in the Gray-Box Setting to Enhance Collaborative Learning Schemes [7.651569149118461]
We study privacy attacks in the gray-box setting, where the attacker has only limited access to the model.
We deploy SmartNNCrypt, a framework that tailors homomorphic encryption to protect the portions of the model posing higher privacy risks.
arXiv Detail & Related papers (2024-09-25T18:49:21Z) - Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z) - TernaryVote: Differentially Private, Communication Efficient, and
Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm.
arXiv Detail & Related papers (2024-02-16T16:41:14Z) - Turning Privacy-preserving Mechanisms against Federated Learning [22.88443008209519]
We design an attack capable of deceiving state-of-the-art defenses for federated learning.
The proposed attack includes two operating modes: the first focuses on convergence inhibition (Adversarial Mode), and the second aims at building a deceptive rating injection on the global federated model (Backdoor Mode).
The experimental results show the effectiveness of our attack in both modes, causing on average a 60% performance drop across all Adversarial Mode tests and fully effective backdoors in 93% of the Backdoor Mode tests.
arXiv Detail & Related papers (2023-05-09T11:43:31Z) - Feature Space Hijacking Attacks against Differentially Private Split
Learning [0.0]
Split learning and differential privacy are technologies with growing potential to help with privacy-compliant advanced analytics on distributed datasets.
This work applies a recent feature space hijacking attack (FSHA) to the learning process of a split neural network enhanced with client-side, off-the-shelf differential privacy (DP).
The FSHA attack reconstructs the client's private data with low error rates at arbitrarily set DP epsilon levels.
arXiv Detail & Related papers (2022-01-11T16:06:18Z) - Privacy-Preserving Federated Learning on Partitioned Attributes [6.661716208346423]
Federated learning empowers collaborative training without exposing local data or models.
We introduce an adversarial learning based procedure which tunes a local model to release privacy-preserving intermediate representations.
To alleviate the accuracy decline, we propose a defense method based on the forward-backward splitting algorithm.
arXiv Detail & Related papers (2021-04-29T14:49:14Z) - Practical Defences Against Model Inversion Attacks for Split Neural
Networks [5.66430335973956]
We describe a threat model under which a split network-based federated learning system is susceptible to a model inversion attack by a malicious computational server.
We propose a simple additive noise method to defend against model inversion, finding that the method can significantly reduce attack efficacy at an acceptable accuracy trade-off on MNIST.
arXiv Detail & Related papers (2021-04-12T18:12:17Z) - Curse or Redemption? How Data Heterogeneity Affects the Robustness of
Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning but is often overlooked through the lens of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdoor attacks in federated learning through comprehensive experiments using synthetic datasets and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z) - Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
The local exchange of estimates allows inference of the underlying private data.
Perturbations chosen independently at every agent result in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to remain invisible.
arXiv Detail & Related papers (2020-10-23T10:35:35Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.