Robust and IP-Protecting Vertical Federated Learning against Unexpected
Quitting of Parties
- URL: http://arxiv.org/abs/2303.18178v1
- Date: Tue, 28 Mar 2023 19:58:28 GMT
- Title: Robust and IP-Protecting Vertical Federated Learning against Unexpected
Quitting of Parties
- Authors: Jingwei Sun, Zhixu Du, Anna Dai, Saleh Baghersalimi, Alireza
Amirshahi, David Atienza, Yiran Chen
- Abstract summary: Vertical federated learning (VFL) enables a service provider (i.e., active party) who owns labeled features to collaborate with passive parties who possess auxiliary features to improve model performance.
Existing VFL approaches have two major vulnerabilities when passive parties unexpectedly quit in the deployment phase of VFL.
- We propose Party-wise Dropout to improve the VFL model's robustness against the unexpected exit of passive parties and a defense method called DIMIP to protect the active party's IP in the deployment phase.
- Score: 29.229942556038676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vertical federated learning (VFL) enables a service provider (i.e., active
party) who owns labeled features to collaborate with passive parties who
possess auxiliary features to improve model performance. Existing VFL
approaches, however, have two major vulnerabilities when passive parties
unexpectedly quit in the deployment phase of VFL - severe performance
degradation and intellectual property (IP) leakage of the active party's
labels. In this paper, we propose \textbf{Party-wise Dropout} to improve the
VFL model's robustness against the unexpected exit of passive parties and a
defense method called \textbf{DIMIP} to protect the active party's IP in the
deployment phase. We evaluate our proposed methods on multiple datasets against
different inference attacks. The results show that Party-wise Dropout
effectively maintains model performance after the passive party quits, and
DIMIP successfully disguises label information from the passive party's feature
extractor, thereby mitigating IP leakage.
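To make the robustness mechanism concrete, here is a minimal, illustrative sketch of party-wise dropout in a split-style VFL fusion layer. This is not the authors' implementation: the embedding sizes, the dropout probability, and the choice to zero-fill a dropped party's embedding are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn


class PartyWiseDropoutFusion(nn.Module):
    """Fuses per-party embeddings and, during training, randomly drops entire
    passive parties so the model learns to predict without them."""

    def __init__(self, num_passive: int, embed_dim: int, num_classes: int,
                 p_drop: float = 0.3):
        super().__init__()
        self.p_drop = p_drop
        # One fusion head over the concatenated active + passive embeddings.
        self.head = nn.Linear((num_passive + 1) * embed_dim, num_classes)

    def forward(self, active_embed, passive_embeds):
        kept = []
        for z in passive_embeds:
            if self.training and torch.rand(()) < self.p_drop:
                # Simulate this party quitting: replace its embedding with zeros.
                kept.append(torch.zeros_like(z))
            else:
                kept.append(z)
        fused = torch.cat([active_embed] + kept, dim=-1)
        return self.head(fused)


# Toy usage: one active party, two passive parties, 16-dimensional embeddings.
fusion = PartyWiseDropoutFusion(num_passive=2, embed_dim=16, num_classes=5)
active = torch.randn(8, 16)
passive = [torch.randn(8, 16), torch.randn(8, 16)]
logits = fusion(active, passive)  # shape: (8, 5)
```

At deployment, a quitting party's branch can simply be zeroed out, which matches what the fusion head already saw during training.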
Related papers
- R-TPT: Improving Adversarial Robustness of Vision-Language Models through Test-Time Prompt Tuning [97.49610356913874]
We propose a robust test-time prompt tuning method (R-TPT) for vision-language models (VLMs).
R-TPT mitigates the impact of adversarial attacks during the inference stage.
We introduce a plug-and-play reliability-based weighted ensembling strategy to strengthen the defense.
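As a rough illustration of reliability-based weighted ensembling (not R-TPT's exact formulation), predictions from several augmented views can be weighted by a confidence proxy such as negative entropy; the view count and temperature below are arbitrary choices.

```python
import torch
import torch.nn.functional as F


def reliability_weighted_ensemble(logits_per_view, temperature: float = 0.1):
    """logits_per_view: (num_views, num_classes) logits from augmented views.
    Each view is weighted by a softmax over its negative predictive entropy,
    so more confident (lower-entropy) views dominate the ensemble."""
    probs = logits_per_view.softmax(dim=-1)                    # (V, C)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)  # (V,)
    weights = F.softmax(-entropy / temperature, dim=0)         # (V,)
    return (weights.unsqueeze(-1) * probs).sum(dim=0)          # (C,)


# Toy usage: 8 augmented views of one image, 10 classes.
fused_probs = reliability_weighted_ensemble(torch.randn(8, 10))
```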
arXiv Detail & Related papers (2025-04-15T13:49:31Z)
- Vision-Language Model IP Protection via Prompt-based Learning [52.783709712318405]
We introduce IP-CLIP, a lightweight IP protection strategy tailored to vision-language models (VLMs).
By leveraging the frozen visual backbone of CLIP, we extract both image style and content information, incorporating them into the learning of the IP prompt.
This strategy acts as a robust barrier, effectively preventing the unauthorized transfer of features from authorized domains to unauthorized ones.
arXiv Detail & Related papers (2025-03-04T08:31:12Z)
- Retention Score: Quantifying Jailbreak Risks for Vision Language Models [60.48306899271866]
Vision-Language Models (VLMs) are integrated with Large Language Models (LLMs) to enhance multi-modal machine learning capabilities.
This paper aims to assess the resilience of VLMs against jailbreak attacks that can compromise model safety compliance and result in harmful outputs.
To evaluate a VLM's ability to maintain its robustness against adversarial input perturbations, we propose a novel metric called the Retention Score.
arXiv Detail & Related papers (2024-12-23T13:05:51Z)
- Just a Simple Transformation is Enough for Data Protection in Vertical Federated Learning [83.90283731845867]
We consider feature reconstruction attacks, a common risk aimed at compromising input data.
We show that applying a simple transformation to the data makes models resistant to state-of-the-art feature reconstruction attacks.
arXiv Detail & Related papers (2024-12-16T12:02:12Z)
- Incentives in Private Collaborative Machine Learning [56.84263918489519]
Collaborative machine learning involves training models on data from multiple parties.
We introduce differential privacy (DP) as an incentive.
We empirically demonstrate the effectiveness and practicality of our approach on synthetic and real-world datasets.
arXiv Detail & Related papers (2024-04-02T06:28:22Z)
- A Bargaining-based Approach for Feature Trading in Vertical Federated Learning [54.51890573369637]
We propose a bargaining-based feature trading approach in Vertical Federated Learning (VFL) to encourage economically efficient transactions.
Our model incorporates performance gain-based pricing, taking into account the revenue-based optimization objectives of both parties.
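As a toy illustration of performance-gain-based pricing (not the paper's bargaining model), one simple baseline prices a passive party's features by splitting the trade surplus between the active party's revenue gain and the passive party's cost, as in a symmetric Nash bargaining split; the revenue-per-accuracy-point and cost figures below are invented.

```python
def nash_bargaining_price(acc_with, acc_without, revenue_per_point, passive_cost):
    """Split the trade surplus equally between buyer (active) and seller (passive).
    Surplus = the active party's extra revenue minus the passive party's cost."""
    gain = (acc_with - acc_without) * revenue_per_point
    surplus = gain - passive_cost
    if surplus <= 0:
        return None  # no mutually beneficial trade
    return passive_cost + surplus / 2  # seller recovers cost plus half the surplus


# Toy numbers: accuracy rises from 0.80 to 0.86, each full accuracy point is worth
# 1000 in revenue, and providing the features costs the passive party 20.
print(round(nash_bargaining_price(0.86, 0.80, 1000, 20), 2))  # 40.0
```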
arXiv Detail & Related papers (2024-02-23T10:21:07Z)
- Incentive Allocation in Vertical Federated Learning Based on Bankruptcy Problem [0.0]
Vertical federated learning (VFL) is a promising approach for collaboratively training machine learning models using private data partitioned vertically across different parties.
In this paper, we focus on the problem of allocating incentives to the passive parties by the active party based on their contributions to the VFL process.
We formulate this problem as a variant of the Bankruptcy Problem from cooperative game theory and solve it using the Talmud division rule, which corresponds to the Nucleolus of the associated game.
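For concreteness, here is a small, self-contained sketch of the Talmud (Aumann-Maschler) division rule for a bankruptcy problem; treating each passive party's assessed contribution as its claim and the reward budget as the estate is an illustrative assumption, not necessarily the paper's exact formulation.

```python
def _constrained_equal_awards(claims, amount):
    """Give each claimant min(claim, lam), with lam chosen so payouts sum to amount."""
    lo, hi = 0.0, max(claims)
    for _ in range(100):  # bisection on lam; total payout is monotone in lam
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in claims) < amount:
            lo = lam
        else:
            hi = lam
    return [min(c, (lo + hi) / 2) for c in claims]


def talmud_rule(claims, estate):
    """Aumann-Maschler Talmud rule, assuming 0 <= estate <= sum(claims)."""
    half = [c / 2 for c in claims]
    if estate <= sum(half):
        # Scarce estate: constrained equal awards on the half-claims.
        return _constrained_equal_awards(half, estate)
    # Abundant estate: everyone keeps at least half; losses are shared similarly.
    losses = _constrained_equal_awards(half, sum(claims) - estate)
    return [c - l for c, l in zip(claims, losses)]


# Classic Talmud example: claims 100, 200, 300 with an estate of 200
# divides as (50, 75, 75).
print(talmud_rule([100, 200, 300], 200))
```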
arXiv Detail & Related papers (2023-07-07T11:08:18Z)
- Privacy Against Agnostic Inference Attack in Vertical Federated Learning [7.1577508803778045]
Two parties collaborate in training a machine learning (ML) model.
One party, referred to as the active party, possesses the ground truth labels of the samples in the training phase.
The other, referred to as the passive party, only shares a separate set of features corresponding to these samples.
arXiv Detail & Related papers (2023-02-10T23:19:30Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method achieves robustness improvements with provable guarantees.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Feature Reconstruction Attacks and Countermeasures of DNN training in Vertical Federated Learning [39.85691324350159]
Federated learning (FL) has increasingly been deployed, in its vertical form, among organizations to facilitate secure collaborative training over siloed data.
Despite the increasing adoption of VFL, it remains largely unknown if and how the active party can extract feature data from the passive party.
This paper makes the first attempt to study the feature security problem of DNN training in VFL.
arXiv Detail & Related papers (2022-10-13T06:23:47Z)
- Privacy Against Inference Attacks in Vertical Federated Learning [13.234975857626749]
Vertical federated learning is considered, where an active party, having access to true class labels, wishes to build a classification model by utilizing more features from a passive party.
Several inference attack techniques are proposed that the adversary, i.e., the active party, can employ to reconstruct the passive party's features, regarded as sensitive information.
As a defense mechanism, two privacy-preserving schemes are proposed that worsen the adversary's reconstruction attacks, while preserving the full benefits that VFL brings to the active party.
arXiv Detail & Related papers (2022-07-24T18:33:52Z)
- Defending Label Inference and Backdoor Attacks in Vertical Federated Learning [11.319694528089773]
In collaborative learning, curious parties might be honest but attempt to infer other parties' private data through inference attacks.
In this paper, we show that private labels can be reconstructed from per-sample gradients.
We introduce a novel technique termed confusional autoencoder (CoAE) based on autoencoder and entropy regularization.
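As a rough, generic sketch of the label-disguising idea (layer sizes and loss weights are assumptions, and this is not necessarily the paper's exact CoAE architecture), an autoencoder can map true labels to high-entropy soft labels while a decoder keeps them recoverable:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConfusionalAutoencoder(nn.Module):
    """Encodes a one-hot label into a 'confused' soft label and decodes it back."""

    def __init__(self, num_classes: int, hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(num_classes, hidden), nn.ReLU(),
                                     nn.Linear(hidden, num_classes))
        self.decoder = nn.Sequential(nn.Linear(num_classes, hidden), nn.ReLU(),
                                     nn.Linear(hidden, num_classes))

    def forward(self, one_hot):
        soft = self.encoder(one_hot).softmax(dim=-1)  # disguised label
        recon = self.decoder(soft)                    # logits for recovering it
        return soft, recon


def coae_loss(soft, recon, targets, entropy_weight: float = 1.0):
    recon_loss = F.cross_entropy(recon, targets)  # true label must stay recoverable
    # ...while the disguised label is pushed toward high entropy (less informative).
    entropy = -(soft * soft.clamp_min(1e-12).log()).sum(-1).mean()
    return recon_loss - entropy_weight * entropy


# Toy usage with 10 classes.
model = ConfusionalAutoencoder(num_classes=10)
targets = torch.randint(0, 10, (16,))
soft, recon = model(F.one_hot(targets, 10).float())
loss = coae_loss(soft, recon, targets)
```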
arXiv Detail & Related papers (2021-12-10T09:32:09Z)
- Exploiting Submodular Value Functions For Scaling Up Active Perception [60.81276437097671]
In active perception tasks, an agent aims to select sensory actions that reduce uncertainty about one or more hidden variables.
Partially observable Markov decision processes (POMDPs) provide a natural model for such problems.
As the number of sensors available to the agent grows, the computational cost of POMDP planning grows exponentially.
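The practical appeal of submodularity is that greedy selection enjoys a (1 - 1/e) approximation guarantee, which sidesteps the exponential blow-up. Below is a generic greedy sensor-selection sketch under a hypothetical coverage-style value function, not the paper's POMDP formulation.

```python
def greedy_select(sensors, value, budget):
    """Greedily pick up to `budget` sensors maximizing the set function `value`.
    If `value` is monotone submodular, the result is within (1 - 1/e) of optimal."""
    chosen = []
    for _ in range(budget):
        remaining = [s for s in sensors if s not in chosen]
        if not remaining:
            break
        # Pick the sensor with the largest marginal gain.
        best = max(remaining, key=lambda s: value(chosen + [s]) - value(chosen))
        chosen.append(best)
    return chosen


# Toy usage: hypothetical sensors covering regions; coverage is monotone submodular.
coverage = {"cam1": {1, 2}, "cam2": {2, 3}, "lidar": {3, 4, 5}}
value = lambda subset: len(set().union(*(coverage[s] for s in subset))) if subset else 0
print(greedy_select(list(coverage), value, budget=2))  # ['lidar', 'cam1']
```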
arXiv Detail & Related papers (2020-09-21T09:11:36Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to scores of the victim model (a toy sketch of this label-only setting follows this entry).
We show that a victim model that only publishes labels is still susceptible to sampling attacks, and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model, as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
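To illustrate the label-only setting, here is a generic perturb-and-query membership score; the noise scale, query count, and the stand-in victim model are assumptions, not the paper's calibrated attack.

```python
import numpy as np


def sampling_membership_score(predict_label, x, y_true, n_queries=50,
                              noise_std=0.05, rng=None):
    """Label-only membership signal: query the victim on noisy copies of x and
    measure how often the predicted label stays equal to y_true. Training members
    tend to lie further from the decision boundary, so their labels are more
    stable under perturbation."""
    rng = rng or np.random.default_rng(0)
    hits = sum(predict_label(x + rng.normal(0.0, noise_std, size=x.shape)) == y_true
               for _ in range(n_queries))
    return hits / n_queries  # higher => more likely a training member


# Toy usage with a stand-in "victim" that labels points by the sign of their sum.
victim = lambda x: int(x.sum() > 0)
score = sampling_membership_score(victim, np.array([0.4, 0.3]), y_true=1)
```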