Privacy Against Agnostic Inference Attack in Vertical Federated Learning
- URL: http://arxiv.org/abs/2302.05545v1
- Date: Fri, 10 Feb 2023 23:19:30 GMT
- Title: Privacy Against Agnostic Inference Attack in Vertical Federated Learning
- Authors: Morteza Varasteh
- Abstract summary: Two parties collaborate in training a machine learning (ML) model.
One party, referred to as the active party, possesses the ground truth labels of the samples in the training phase.
The other, referred to as the passive party, only shares a separate set of features corresponding to these samples.
- Score: 7.1577508803778045
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A novel form of inference attack in vertical federated learning (VFL) is
proposed, where two parties collaborate in training a machine learning (ML)
model. Logistic regression is considered for the VFL model. One party, referred
to as the active party, possesses the ground truth labels of the samples in the
training phase, while the other, referred to as the passive party, only shares
a separate set of features corresponding to these samples. It is shown that the
active party can carry out inference attacks on both training and prediction
phase samples by acquiring an ML model independently trained on the training
samples available to them. This type of inference attack does not require the
active party to be aware of the score of a specific sample, hence it is
referred to as an agnostic inference attack. It is shown that utilizing the
observed confidence scores during the prediction phase, before the time of the
attack, can improve the performance of the active party's autonomous model, and
thus improve the quality of the agnostic inference attack. As a countermeasure,
privacy-preserving schemes (PPSs) are proposed. While the proposed schemes
preserve the utility of the VFL model, they systematically distort the VFL
parameters corresponding to the passive party's features. The level of the
distortion imposed on the passive party's parameters is adjustable, giving rise
to a trade-off between privacy of the passive party and interpretability of the
VFL outcomes by the active party. The distortion level of the passive party's
parameters could be chosen carefully according to the privacy and
interpretability concerns of the passive and active parties, respectively, with
the hope of keeping both parties (partially) satisfied. Finally, experimental
results demonstrate the effectiveness of the proposed attack and the PPSs.
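As a rough illustration of the setup described in the abstract, the sketch below (an assumption-laden toy, not the paper's implementation: the data, the 4/6 feature split, and the use of scikit-learn's LogisticRegression are placeholders) trains a joint logistic-regression VFL model over both parties' features and, alongside it, the model the active party can train on its own features alone; since this adversary never needs the VFL model's confidence score for the sample it targets, it corresponds to the "agnostic" attack described above.

```python
# Minimal sketch of the agnostic-attack setting (illustrative only; the split
# of features, the data, and the hyper-parameters are all assumptions).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy dataset: 10 features, the first 4 held by the active party,
# the remaining 6 by the passive party.
X = rng.normal(size=(2000, 10))
w_true = rng.normal(size=10)
y = (X @ w_true + 0.5 * rng.normal(size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
active_cols = slice(0, 4)

# The joint VFL model: logistic regression over both parties' features.
vfl_model = LogisticRegression().fit(X_train, y_train)

# Agnostic inference attack: the active party independently trains a model
# on the features *it already owns*, so it needs no per-sample confidence
# score from the VFL model at attack time.
adversary = LogisticRegression().fit(X_train[:, active_cols], y_train)

print("VFL accuracy:      ", vfl_model.score(X_test, y_test))
print("Adversary accuracy:", adversary.score(X_test[:, active_cols], y_test))
```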
Related papers
- A Bargaining-based Approach for Feature Trading in Vertical Federated Learning [54.51890573369637]
We propose a bargaining-based feature trading approach in Vertical Federated Learning (VFL) to encourage economically efficient transactions.
Our model incorporates performance gain-based pricing, taking into account the revenue-based optimization objectives of both parties.
arXiv Detail & Related papers (2024-02-23T10:21:07Z)
- SA-Attack: Improving Adversarial Transferability of Vision-Language Pre-training Models via Self-Augmentation [56.622250514119294]
In contrast to white-box adversarial attacks, transfer attacks are more reflective of real-world scenarios.
We propose a self-augment-based transfer attack method, termed SA-Attack.
arXiv Detail & Related papers (2023-12-08T09:08:50Z)
- Robust and IP-Protecting Vertical Federated Learning against Unexpected Quitting of Parties [29.229942556038676]
Vertical federated learning (VFL) enables a service provider (i.e., active party) who owns labeled features to collaborate with passive parties who possess auxiliary features to improve model performance.
Existing VFL approaches have two major vulnerabilities when passive parties unexpectedly quit in the deployment phase of VFL.
We propose Party-wise Dropout to improve the VFL model's robustness against the unexpected exit of passive parties, and a defense method called DIMIP to protect the active party's IP in the deployment phase.
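A minimal sketch of what a party-wise dropout step could look like (a guess based only on the summary above; the paper's actual mechanism and scaling may differ): during training, each passive party's embedding is occasionally zeroed out so the fused model does not depend on any single party remaining available.

```python
# Rough sketch of a "party-wise dropout" step (an assumption based on the
# summary above, not the paper's code): each passive party's embedding is
# dropped with probability p during training so the fused representation
# does not rely on any single party being present at deployment time.
import numpy as np

def party_wise_dropout(party_embeddings, p=0.3, rng=np.random.default_rng(0)):
    """party_embeddings: list of (batch, dim) arrays, one per passive party."""
    kept = []
    for emb in party_embeddings:
        if rng.random() < p:
            kept.append(np.zeros_like(emb))   # simulate this party quitting
        else:
            kept.append(emb / (1.0 - p))      # rescale, as in standard dropout
    return np.concatenate(kept, axis=1)       # fused representation

# Example: two passive parties each contribute an 8-dimensional embedding.
emb_a = np.random.randn(4, 8)
emb_b = np.random.randn(4, 8)
print(party_wise_dropout([emb_a, emb_b]).shape)  # (4, 16)
```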
arXiv Detail & Related papers (2023-03-28T19:58:28Z)
- Purifier: Defending Data Inference Attacks via Transforming Confidence Scores [27.330482508047428]
We propose a method, namely PURIFIER, to defend against membership inference attacks.
Experiments show that PURIFIER helps defend membership inference attacks with high effectiveness and efficiency.
PURIFIER is also effective in defending adversarial model inversion attacks and attribute inference attacks.
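As a toy illustration of an output-purification defence of this kind (PURIFIER learns its transformation; the hard rounding below is only a stand-in), the released confidence vector can be coarsened before it leaves the model owner:

```python
# Toy stand-in for a confidence-purification defence (PURIFIER learns this
# mapping; simple rounding is used here purely for illustration).
import numpy as np

def purify(confidences, decimals=1):
    """Coarsen a softmax confidence vector before it is released."""
    rounded = np.round(np.asarray(confidences, dtype=float), decimals)
    rounded = np.clip(rounded, 1e-6, None)
    return rounded / rounded.sum()        # renormalise to a distribution

print(purify([0.8734, 0.1012, 0.0254]))   # roughly [0.9, 0.1, ~0] after renormalising
```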
arXiv Detail & Related papers (2022-12-01T16:09:50Z)
- Feature Reconstruction Attacks and Countermeasures of DNN training in Vertical Federated Learning [39.85691324350159]
Federated learning (FL) has increasingly been deployed, in its vertical form, among organizations to facilitate secure collaborative training over siloed data.
Despite the increasing adoption of VFL, it remains largely unknown if and how the active party can extract feature data from the passive party.
This paper makes the first attempt to study the feature security problem of DNN training in VFL.
arXiv Detail & Related papers (2022-10-13T06:23:47Z)
- Privacy Against Inference Attacks in Vertical Federated Learning [13.234975857626749]
Vertical federated learning is considered, where an active party, having access to true class labels, wishes to build a classification model by utilizing more features from a passive party.
Several inference attack techniques are proposed that the adversary, i.e., the active party, can employ to reconstruct the passive party's features, regarded as sensitive information.
As a defense mechanism, two privacy-preserving schemes are proposed that worsen the adversary's reconstruction attacks, while preserving the full benefits that VFL brings to the active party.
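A simplified sketch of the kind of feature reconstruction such defences are meant to blunt (not the paper's attack; the sigmoid inversion and minimum-norm solve below are illustrative assumptions): in a logistic-regression VFL model the active party knows its own features, the model parameters, and the released confidence score, so it can isolate the passive party's contribution to the logit and fit passive features consistent with it.

```python
# Simplified sketch of reconstructing passive-party information from a released
# confidence score in logistic-regression VFL (illustrative; the paper's
# attacks are more sophisticated).
import numpy as np

rng = np.random.default_rng(1)
w_active, w_passive, b = rng.normal(size=3), rng.normal(size=5), 0.2
x_active, x_passive = rng.normal(size=3), rng.normal(size=5)

# What the active party observes: its own features and the confidence score.
logit = w_active @ x_active + w_passive @ x_passive + b
score = 1.0 / (1.0 + np.exp(-logit))

# Attack: invert the sigmoid, subtract the known active contribution, and take
# the minimum-norm x_passive consistent with the remaining logit mass.
passive_logit = np.log(score / (1.0 - score)) - w_active @ x_active - b
x_passive_hat, *_ = np.linalg.lstsq(w_passive[None, :],
                                    np.array([passive_logit]), rcond=None)

print("true passive contribution:    ", w_passive @ x_passive)
print("reconstructed contribution:   ", w_passive @ x_passive_hat)
```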
arXiv Detail & Related papers (2022-07-24T18:33:52Z)
- Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z)
- Feature Inference Attack on Model Predictions in Vertical Federated Learning [26.7517556631796]
Federated learning (FL) is an emerging paradigm for facilitating multiple organizations' data collaboration without revealing their private data to each other.
This paper presents several feature inference attack methods to investigate the potential privacy leakages in the model prediction stage of vertical FL.
arXiv Detail & Related papers (2020-10-20T09:38:49Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
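The label-only idea can be sketched as follows (the perturbation scale, query budget, and decision rule are assumptions, not the paper's settings): query the victim on many perturbed copies of a candidate record and read a high fraction of unchanged labels as evidence of membership.

```python
# Sketch of a label-only "sampling" membership signal (illustrative; the
# perturbation scale, number of queries, and threshold are assumptions).
import numpy as np

def membership_score(predict_label, x, n_queries=50, noise=0.1,
                     rng=np.random.default_rng(0)):
    """Fraction of noisy copies of x whose predicted label matches x's label."""
    base = predict_label(x)
    hits = sum(predict_label(x + noise * rng.normal(size=x.shape)) == base
               for _ in range(n_queries))
    return hits / n_queries   # higher -> behaves more like a training member

# Example with a toy victim that only exposes hard labels.
victim = lambda x: int(x.sum() > 0)
print(membership_score(victim, np.array([0.4, 0.3, 0.2])))
```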
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
- Adaptive Adversarial Logits Pairing [65.51670200266913]
An adversarial training solution, Adversarial Logits Pairing (ALP), tends to rely on fewer high-contribution features than vulnerable models do.
Motivated by these observations, we design an Adaptive Adversarial Logits Pairing (AALP) solution by modifying the training process and training target of ALP.
AALP consists of an adaptive feature optimization module with Guided Dropout to systematically pursue fewer high-contribution features.
arXiv Detail & Related papers (2020-05-25T03:12:20Z)
- Revisiting Membership Inference Under Realistic Assumptions [87.13552321332988]
We study membership inference in settings where some of the assumptions typically used in previous research are relaxed.
This setting is more realistic than the balanced prior setting typically considered by researchers.
We develop a new inference attack based on the intuition that inputs corresponding to training set members will be near a local minimum in the loss function.
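That intuition admits a very simple illustration (the threshold and loss function below are assumptions; the paper's attack additionally probes whether the input sits near a local minimum of the loss): flag a sample as a training member when its loss under the target model is unusually small.

```python
# Minimal illustration of loss-based membership inference (the threshold and
# the cross-entropy loss are assumptions, not the paper's exact attack).
import numpy as np

def cross_entropy(confidence_for_true_label):
    return -np.log(np.clip(confidence_for_true_label, 1e-12, 1.0))

def is_member(confidence_for_true_label, threshold=0.5):
    """Predict 'training member' when the sample's loss is unusually small."""
    return cross_entropy(confidence_for_true_label) < threshold

print(is_member(0.97))  # low loss  -> likely a member
print(is_member(0.40))  # high loss -> likely a non-member
```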
arXiv Detail & Related papers (2020-05-21T20:17:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.