Defending Label Inference Attacks in Split Learning under Regression Setting
- URL: http://arxiv.org/abs/2308.09448v1
- Date: Fri, 18 Aug 2023 10:22:31 GMT
- Title: Defending Label Inference Attacks in Split Learning under Regression Setting
- Authors: Haoze Qiu, Fei Zheng, Chaochao Chen, Xiaolin Zheng
- Abstract summary: Split Learning is a privacy-preserving method for implementing Vertical Federated Learning.
In this paper, we focus on label inference attacks in Split Learning under the regression setting.
We propose Random Label Extension (RLE), where labels are extended to obfuscate the label information contained in the gradients.
To further minimize the impact on the original task, we propose Model-based adaptive Label Extension (MLE), where original labels are preserved in the extended labels and dominate the training process.
- Score: 20.77178463903939
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a privacy-preserving method for implementing Vertical Federated Learning,
Split Learning has been extensively researched. However, numerous studies have
indicated that the privacy-preserving capability of Split Learning is
insufficient. In this paper, we primarily focus on label inference attacks in
Split Learning under the regression setting, which are mainly implemented through
the gradient inversion method. To defend against label inference attacks, we
propose Random Label Extension (RLE), where labels are extended to obfuscate
the label information contained in the gradients, thereby preventing the
attacker from utilizing gradients to train an attack model that can infer the
original labels. To further minimize the impact on the original task, we
propose Model-based adaptive Label Extension (MLE), where original labels are
preserved in the extended labels and dominate the training process. The
experimental results show that compared to the basic defense methods, our
proposed defense methods can significantly reduce the attack model's
performance while preserving the original task's performance.
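To make the defenses concrete, below is a minimal PyTorch sketch of the label-extension idea in the MLE spirit, where the original label is kept as one component of the extended label and weighted to dominate the loss. The extension dimension d, the random filler, and the weight w_orig are illustrative assumptions; the paper's exact RLE/MLE constructions may differ.

```python
import torch

def extend_labels(y: torch.Tensor, d: int = 8) -> torch.Tensor:
    """Hypothetical label extension: replace each scalar regression label
    y_i with a d-dimensional vector whose first component is y_i and whose
    remaining components are random filler (assumed construction)."""
    noise = torch.randn(y.size(0), d - 1)
    return torch.cat([y.unsqueeze(1), noise], dim=1)

def extended_mse(pred: torch.Tensor, y_ext: torch.Tensor,
                 w_orig: float = 0.9) -> torch.Tensor:
    """MLE-style loss sketch: the component carrying the original label is
    weighted by w_orig (assumed value) so that it dominates training."""
    se = (pred - y_ext) ** 2
    return w_orig * se[:, 0].mean() + (1.0 - w_orig) * se[:, 1:].mean()

# The label party's top model now outputs d values instead of 1, so the
# gradients it sends back no longer encode the scalar label directly.
y = torch.tensor([1.5, -0.3, 2.0])
y_ext = extend_labels(y)
pred = torch.randn(3, 8, requires_grad=True)
extended_mse(pred, y_ext).backward()
```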
Related papers
- Training on Fake Labels: Mitigating Label Leakage in Split Learning via Secure Dimension Transformation [10.404379188947383]
Two-party split learning has been shown to be vulnerable to label inference attacks.
We propose a novel two-party split learning method to defend against existing label inference attacks.
(2024-10-11)
- LabObf: A Label Protection Scheme for Vertical Federated Learning Through Label Obfuscation [10.224977496821154]
The Split Neural Network is popular in industry due to its privacy-preserving characteristics.
However, malicious participants may still infer label information from the uploaded embeddings, leading to privacy leakage.
We propose a new label obfuscation defense strategy, called LabObf, which randomly maps each original integer-valued label to multiple real-valued soft labels.
(2024-05-27)
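As a toy illustration of the obfuscation idea (the mapping below is an assumption for illustration, not LabObf's actual construction), each integer label can be associated with several real-valued soft labels, with a surrogate drawn at random during training:

```python
import random

# Hypothetical obfuscation table: each integer label is mapped to several
# real-valued soft labels (values are illustrative, not LabObf's actual ones).
obf_table = {
    0: [0.12, -0.40, 0.33],
    1: [0.95, 1.21, 0.78],
}

def obfuscate(label: int) -> float:
    """Train against a randomly chosen soft-label surrogate of the label."""
    return random.choice(obf_table[label])

print([obfuscate(0) for _ in range(3)])
```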
- Towards Eliminating Hard Label Constraints in Gradient Inversion Attacks [88.12362924175741]
Gradient inversion attacks aim to reconstruct local training data from intermediate gradients exposed in the federated learning framework.
Previous methods, starting from reconstructing a single data point and then relaxing the single-image limit to batch level, are only tested under hard label constraints.
We are the first to initiate a novel algorithm to simultaneously recover the ground-truth augmented label and the input feature of the last fully-connected layer from single-input gradients.
(2024-02-05)
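For intuition about why single-input gradients leak labels, the classic analytic observation for softmax cross-entropy models (a general illustration, not this paper's specific algorithm) is that the bias gradient of the last fully-connected layer equals softmax(logits) - onehot(y), so the true class is the only index with a negative entry, and the input feature follows from the weight gradient:

```python
import torch
import torch.nn.functional as F

# Single-input toy setup: the last fully-connected layer of some model.
torch.manual_seed(0)
feat = torch.randn(1, 16)              # feature entering the last FC layer
fc = torch.nn.Linear(16, 5)
y = torch.tensor([3])

loss = F.cross_entropy(fc(feat), y)
loss.backward()

# dL/db = softmax(logits) - onehot(y): the true class has the only
# negative entry, so the label leaks directly from the bias gradient.
recovered_label = torch.argmin(fc.bias.grad).item()   # == 3

# dL/dW = (softmax(logits) - onehot(y)) @ feat, so dividing the row for
# the recovered class by its bias gradient recovers the input feature.
recovered_feat = fc.weight.grad[recovered_label] / fc.bias.grad[recovered_label]
print(recovered_label, torch.allclose(recovered_feat, feat[0], atol=1e-5))
```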
- Partial-Label Regression [54.74984751371617]
Partial-label learning is a weakly supervised learning setting that allows each training example to be annotated with a set of candidate labels.
Previous studies on partial-label learning only focused on the classification setting where candidate labels are all discrete.
In this paper, we provide the first attempt to investigate partial-label regression, where each training example is annotated with a set of real-valued candidate labels.
(2023-06-15)
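A common baseline for learning from candidate label sets, shown here for regression (a generic sketch, not necessarily this paper's method), is to score each example against its best-matching candidate:

```python
import torch

def min_candidate_mse(pred: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
    """pred: (batch,) predictions; candidates: (batch, k) real-valued
    candidate labels. Each example is scored against its best-matching
    candidate (the 'minimum loss' baseline for partial labels)."""
    se = (pred.unsqueeze(1) - candidates) ** 2      # (batch, k)
    return se.min(dim=1).values.mean()

pred = torch.tensor([0.9, 2.1], requires_grad=True)
cands = torch.tensor([[1.0, 5.0, -2.0], [2.0, 0.0, 7.0]])
min_candidate_mse(pred, cands).backward()
```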
- All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
(2023-05-25)
- Label Inference Attack against Split Learning under Regression Setting [24.287752556622312]
We study the leakage in the scenario of the regression model, where the private labels are continuous numbers.
We propose a novel learning-based attack that integrates gradient information and extra learning regularization objectives.
(2023-01-18)
- Exploiting Completeness and Uncertainty of Pseudo Labels for Weakly Supervised Video Anomaly Detection [149.23913018423022]
Weakly supervised video anomaly detection aims to identify abnormal events in videos using only video-level labels.
Two-stage self-training methods have achieved significant improvements by self-generating pseudo labels.
We propose an enhancement framework by exploiting completeness and uncertainty properties for effective self-training.
(2022-12-08)
- Protecting Split Learning by Potential Energy Loss [70.81375125791979]
We focus on the privacy leakage from the forward embeddings of split learning.
We propose the potential energy loss to make the forward embeddings more 'complicated'.
(2022-10-18)
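A rough sketch of the potential-energy idea (the inverse-distance form below is an assumption; the paper's exact loss may differ): treat same-class forward embeddings as mutually repelling particles, so they spread out and become harder to cluster by label:

```python
import torch

def potential_energy_loss(emb: torch.Tensor, labels: torch.Tensor,
                          eps: float = 1e-6) -> torch.Tensor:
    """emb: (n, d) forward embeddings; labels: (n,). Sums an inverse-distance
    'repulsion' over pairs of embeddings that share a label (assumed form)."""
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    same &= ~torch.eye(len(emb), dtype=torch.bool)      # drop self-pairs
    i, j = torch.nonzero(same, as_tuple=True)
    dist = (emb[i] - emb[j]).norm(dim=1)
    return (1.0 / (dist + eps)).sum()

emb = torch.randn(8, 4, requires_grad=True)
labels = torch.randint(0, 2, (8,))
potential_energy_loss(emb, labels).backward()  # pushes same-class pairs apart
```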
- Similarity-based Label Inference Attack against Training and Inference of Split Learning [13.104547182351332]
Split learning is a promising paradigm for privacy-preserving distributed learning.
This paper shows that the exchanged intermediate results, including smashed data, can already reveal the private labels.
We propose three label inference attacks to efficiently recover the private labels during both the training and inference phases.
(2022-03-10)
- Two-phase Pseudo Label Densification for Self-training based Domain Adaptation [93.03265290594278]
We propose a novel Two-phase Pseudo Label Densification framework, referred to as TPLD.
In the first phase, we use sliding-window voting to propagate the confident predictions, utilizing intrinsic spatial correlations in the images (sketched after this entry).
In the second phase, we perform a confidence-based easy-hard classification.
To ease the training process and avoid noisy predictions, we introduce the bootstrapping mechanism to the original self-training loss.
(2020-12-09)
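A simplified sketch of that first phase (the window size and the majority-vote rule are assumptions): low-confidence pixels inherit the majority pseudo-label of confident neighbors within a sliding window:

```python
import numpy as np

def sliding_window_vote(pseudo: np.ndarray, conf: np.ndarray,
                        thresh: float = 0.9, win: int = 1) -> np.ndarray:
    """pseudo: (H, W) integer pseudo-labels; conf: (H, W) confidences.
    Low-confidence pixels receive the majority label of confident
    neighbors within a (2*win+1)^2 window (assumed voting rule)."""
    H, W = pseudo.shape
    out = pseudo.copy()
    for i in range(H):
        for j in range(W):
            if conf[i, j] >= thresh:
                continue
            ys = slice(max(0, i - win), min(H, i + win + 1))
            xs = slice(max(0, j - win), min(W, j + win + 1))
            votes = pseudo[ys, xs][conf[ys, xs] >= thresh]
            if votes.size:                  # densify from confident votes
                out[i, j] = np.bincount(votes).argmax()
    return out

pseudo = np.random.randint(0, 3, (6, 6))
conf = np.random.rand(6, 6)
dense = sliding_window_vote(pseudo, conf)
```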
This list is automatically generated from the titles and abstracts of the papers in this site.