Label Inference Attack against Split Learning under Regression Setting
- URL: http://arxiv.org/abs/2301.07284v2
- Date: Fri, 7 Apr 2023 05:47:39 GMT
- Title: Label Inference Attack against Split Learning under Regression Setting
- Authors: Shangyu Xie, Xin Yang, Yuanshun Yao, Tianyi Liu, Taiqing Wang and
Jiankai Sun
- Abstract summary: We study label leakage in the regression setting, where the private labels are continuous numbers.
We propose a novel learning-based attack that integrates gradient information and extra learning regularization objectives.
- Score: 24.287752556622312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a crucial building block in vertical Federated Learning (vFL), Split
Learning (SL) has proven practical for two-party model training collaboration,
where one party holds the features of data samples and the other party holds the
corresponding labels. The method is claimed to be private because the shared
information consists only of embedding vectors and gradients rather than the
private raw data and labels. However, recent works have shown that the private
labels can be leaked through the gradients. These existing attacks only work
under the classification setting, where the private labels are discrete. In this
work, we go a step further and study the leakage in the regression setting, where
the private labels are continuous numbers (instead of the discrete labels in
classification). The unbounded output range makes it harder for previous attacks
to infer the continuous labels. To address this limitation, we propose a novel
learning-based attack that integrates gradient information with extra
regularization objectives derived from model training properties, and can
effectively infer labels under regression settings. Comprehensive experiments on
various datasets and models demonstrate the effectiveness of the proposed attack.
We hope our work can pave the way for future analyses that make the vFL framework
more secure.
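To make the high-level idea concrete, below is a minimal, hypothetical sketch of a gradient-matching label inference attack in the regression setting, written in PyTorch. It assumes the attacker (feature party) observes the gradients returned at the cut layer and approximates the label party's head with a surrogate linear layer; the paper's actual architecture and regularization objectives are not specified here, so `surrogate_head`, `lambda_reg`, and the simple magnitude prior are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: optimize surrogate labels so that a surrogate head
# reproduces the cut-layer gradients observed from the label party.
import torch
import torch.nn as nn

def infer_labels(emb, g_obs, out_dim=1, steps=2000, lr=0.01, lambda_reg=1e-3):
    """emb: cut-layer embeddings the attacker sent, shape (B, D).
    g_obs: gradients w.r.t. emb returned by the label party, shape (B, D)."""
    emb = emb.detach()
    surrogate_head = nn.Linear(emb.shape[1], out_dim)  # assumed label-party head
    y_hat = torch.zeros(emb.shape[0], out_dim, requires_grad=True)  # labels to infer
    opt = torch.optim.Adam([y_hat, *surrogate_head.parameters()], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = emb.clone().requires_grad_(True)
        task_loss = nn.functional.mse_loss(surrogate_head(x), y_hat)
        # Simulated cut-layer gradient under the current label guesses.
        (g_sim,) = torch.autograd.grad(task_loss, x, create_graph=True)
        # Gradient-matching term plus an extra regularizer on the inferred
        # labels (a plain magnitude prior here; the paper's regularization
        # objectives are richer and tied to model training properties).
        loss = (g_sim - g_obs).pow(2).sum() + lambda_reg * y_hat.pow(2).mean()
        loss.backward()
        opt.step()
    return y_hat.detach()
```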
Related papers
- Training on Fake Labels: Mitigating Label Leakage in Split Learning via Secure Dimension Transformation [10.404379188947383]
Two-party split learning has been shown to be vulnerable to label inference attacks.
We propose a novel two-party split learning method to defend against existing label inference attacks.
arXiv Detail & Related papers (2024-10-11T09:25:21Z)
- LabObf: A Label Protection Scheme for Vertical Federated Learning Through Label Obfuscation [10.224977496821154]
Split Neural Network is popular in industry due to its privacy-preserving characteristics.
malicious participants may still infer label information from the uploaded embeddings, leading to privacy leakage.
We propose a new label obfuscation defense strategy, called LabObf, which randomly maps each original integer-valued label to multiple real-valued soft labels.
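As a rough illustration of the idea (not the paper's exact construction), one could keep a per-class codebook of random soft-label vectors and train the label party against a randomly chosen code instead of the raw label; the codebook shape and sampling below are illustrative assumptions.

```python
# Hypothetical sketch in the spirit of LabObf: each integer label is replaced
# by one of several random real-valued soft-label vectors, so the gradients
# no longer reveal the raw label directly.
import numpy as np

rng = np.random.default_rng(0)

def build_codebook(num_classes, dim=8, codes_per_class=4):
    # codes_per_class random real-valued vectors per original label
    return {c: rng.normal(size=(codes_per_class, dim)) for c in range(num_classes)}

def obfuscate(labels, codebook):
    # Map each integer label to one of its random soft-label vectors.
    out = []
    for y in labels:
        codes = codebook[int(y)]
        out.append(codes[rng.integers(len(codes))])
    return np.stack(out)

codebook = build_codebook(num_classes=10)
soft_targets = obfuscate([3, 1, 4], codebook)  # shape (3, 8)
```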
arXiv Detail & Related papers (2024-05-27T10:54:42Z)
- Federated Learning with Only Positive Labels by Exploring Label Correlations [78.59613150221597]
Federated learning aims to collaboratively learn a model by using the data from multiple users under privacy constraints.
In this paper, we study the multi-label classification problem under the federated learning setting.
We propose a novel and generic method termed Federated Averaging by exploring Label Correlations (FedALC).
arXiv Detail & Related papers (2024-04-24T02:22:50Z)
- Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our intriguing findings highlight the usage of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z)
- Defending Label Inference Attacks in Split Learning under Regression Setting [20.77178463903939]
Split Learning is a privacy-preserving method for implementing Vertical Federated Learning.
In this paper, we focus on label inference attacks in Split Learning under regression setting.
We propose Random Label Extension (RLE), where labels are extended to obfuscate the label information contained in the gradients.
To further minimize the impact on the original task, we propose Model-based adaptive Label Extension (MLE), where original labels are preserved in the extended labels and dominate the training process.
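A minimal sketch of the random-extension idea, under assumptions the summary does not spell out: the scalar regression label is placed alongside random dummy targets and the label party trains an extended head against the whole vector; `ext_dim` and `scale` are illustrative parameters, not the paper's settings.

```python
# Hypothetical sketch of Random Label Extension: mix the true label with
# random dummy targets so gradients carry obfuscated label information.
import torch

def random_label_extension(y, ext_dim=4, scale=1.0):
    """y: (B, 1) original labels -> (B, 1 + ext_dim) extended labels."""
    dummy = scale * torch.randn(y.shape[0], ext_dim)
    return torch.cat([y, dummy], dim=1)

y = torch.tensor([[2.5], [0.3]])
y_ext = random_label_extension(y)  # training now targets real + dummy outputs
```

MLE, as summarized above, would additionally weight the true-label component so the original task still dominates training.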
arXiv Detail & Related papers (2023-08-18T10:22:31Z)
- Partial-Label Regression [54.74984751371617]
Partial-label learning is a weakly supervised learning setting that allows each training example to be annotated with a set of candidate labels.
Previous studies on partial-label learning only focused on the classification setting where candidate labels are all discrete.
In this paper, we provide the first attempt to investigate partial-label regression, where each training example is annotated with a set of real-valued candidate labels.
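One natural baseline for this setting (an illustration only; the paper studies several estimators) is a "minimum loss over candidates" rule: each example carries a set of real-valued candidate labels, and the loss optimistically picks the candidate closest to the prediction.

```python
# Hedged sketch of a minimum-loss-over-candidates partial-label regression loss.
import torch

def min_candidate_mse(pred, candidates):
    """pred: (B,) predictions; candidates: (B, K) real-valued candidate labels."""
    err = (candidates - pred.unsqueeze(1)) ** 2  # (B, K) per-candidate error
    return err.min(dim=1).values.mean()          # optimistic pick per example

pred = torch.tensor([1.2, 0.4])
cands = torch.tensor([[1.0, 3.0], [0.5, -2.0]])
loss = min_candidate_mse(pred, cands)
```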
arXiv Detail & Related papers (2023-06-15T09:02:24Z)
- Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective [89.5370481649529]
We propose a label distribution perspective for PU learning in this paper.
Motivated by this view, we pursue consistency between the predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
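A minimal sketch of the label-distribution-consistency idea (an illustration, not Dist-PU's full objective, which assumes a known class prior `class_prior`): push the mean predicted positive probability on unlabeled data toward the prior while fitting the labeled positives.

```python
# Hypothetical PU loss: fit labeled positives, align the predicted label
# distribution on unlabeled data with the known class prior.
import torch
import torch.nn.functional as F

def dist_consistency_loss(p_pos, p_unl, class_prior=0.3):
    """p_pos: sigmoid outputs on labeled positives; p_unl: on unlabeled data."""
    fit_pos = F.binary_cross_entropy(p_pos, torch.ones_like(p_pos))
    # Predicted positive fraction on unlabeled data vs. ground-truth prior.
    dist_term = (p_unl.mean() - class_prior) ** 2
    return fit_pos + dist_term
```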
arXiv Detail & Related papers (2022-12-06T07:38:29Z)
- Does Label Differential Privacy Prevent Label Inference Attacks? [26.87328379562665]
Label differential privacy (label-DP) is a popular framework for training private ML models on datasets with public features and sensitive private labels.
Despite its rigorous privacy guarantee, it has been observed that in practice label-DP does not preclude label inference attacks (LIAs).
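For background, the classical randomized response mechanism over K classes satisfies epsilon-label-DP; the paper's point is that such guarantees do not by themselves preclude label inference attacks in practice. This sketch shows the standard mechanism, not anything specific to that paper.

```python
# Classical randomized response: keep the true label with probability
# e^eps / (e^eps + K - 1), otherwise flip to a uniformly random other class.
# The ratio between keeping and any flip is exactly e^eps, giving eps-label-DP.
import math, random

def randomized_response(label, num_classes, epsilon):
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + num_classes - 1)
    if random.random() < p_keep:
        return label
    other = random.randrange(num_classes - 1)
    return other if other < label else other + 1  # skip the true label

noisy = [randomized_response(y, num_classes=10, epsilon=1.0) for y in [0, 3, 7]]
```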
arXiv Detail & Related papers (2022-02-25T20:57:29Z)
- Gradient Inversion Attack: Leaking Private Labels in Two-Party Split Learning [12.335698325757491]
We propose a label leakage attack that allows an adversarial input owner to learn the label owner's private labels.
Our attack can uncover the private label data on several multi-class image classification problems and a binary conversion prediction task with near-perfect accuracy.
While a countermeasure discussed in that work is effective for simpler datasets, it significantly degrades utility for datasets with higher input dimensionality.
arXiv Detail & Related papers (2021-11-25T16:09:59Z)
- Instance-Dependent Partial Label Learning [69.49681837908511]
Partial label learning is a typical weakly supervised learning problem.
Most existing approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels.
In this paper, we consider the instance-dependent case and assume that each example is associated with a latent label distribution constituted by a real number for each label.
arXiv Detail & Related papers (2021-10-25T12:50:26Z)
- Self-Tuning for Data-Efficient Deep Learning [75.34320911480008]
Self-Tuning is a novel approach to enable data-efficient deep learning.
It unifies the exploration of labeled and unlabeled data and the transfer of a pre-trained model.
It outperforms its SSL and TL counterparts on five tasks by sharp margins.
arXiv Detail & Related papers (2021-02-25T14:56:19Z)