Mutual Information Regularization for Vertical Federated Learning
- URL: http://arxiv.org/abs/2301.01142v1
- Date: Sun, 1 Jan 2023 02:03:34 GMT
- Title: Mutual Information Regularization for Vertical Federated Learning
- Authors: Tianyuan Zou, Yang Liu, Ya-Qin Zhang
- Abstract summary: Vertical Federated Learning (VFL) is widely utilized in real-world applications to enable collaborative learning while protecting data privacy and safety.
Previous works show that parties without labels (passive parties) in VFL can infer the sensitive label information owned by the party with labels (active party) or execute backdoor attacks against VFL.
We propose a new general defense method which limits the mutual information between private raw data, including both features and labels, and intermediate outputs to achieve a better trade-off between model utility and privacy.
- Score: 6.458078197870505
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Vertical Federated Learning (VFL) is widely utilized in real-world
applications to enable collaborative learning while protecting data privacy and
safety. However, previous works show that parties without labels (passive
parties) in VFL can infer the sensitive label information owned by the party
with labels (active party) or execute backdoor attacks against VFL. Meanwhile,
the active party can also infer sensitive feature information from passive parties.
All these pose new privacy and security challenges to VFL systems. We propose a
new general defense method which limits the mutual information between private
raw data, including both features and labels, and intermediate outputs to
achieve a better trade-off between model utility and privacy. We term this
defense Mutual Information Regularization Defense (MID). We theoretically and
experimentally demonstrate the effectiveness of our MID method in defending
against existing attacks on VFL, including label inference attacks, backdoor
attacks, and feature reconstruction attacks.
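The abstract does not spell out the mechanism, but a mutual-information constraint between raw data and intermediate outputs is commonly implemented with a variational (information-bottleneck-style) upper bound. Below is a minimal PyTorch sketch of such a bottleneck applied to a passive party's intermediate output; the names (`MIDBottleneck`, `beta`) and dimensions are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MIDBottleneck(nn.Module):
    """Variational bottleneck: encode an intermediate output h into a
    stochastic code z and penalize KL(q(z|h) || N(0, I)), a standard
    variational upper bound on the mutual information I(h; z)."""
    def __init__(self, in_dim: int, z_dim: int):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)
        self.logvar = nn.Linear(in_dim, z_dim)

    def forward(self, h):
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        # KL(N(mu, sigma^2) || N(0, I)), averaged over the batch
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()
        return z, kl

# Illustrative training step on the active party's side; `passive_out`
# stands in for an embedding received from a passive party.
bottleneck = MIDBottleneck(in_dim=64, z_dim=16)
top_model = nn.Linear(16, 10)
opt = torch.optim.Adam(list(bottleneck.parameters()) + list(top_model.parameters()))
beta = 1e-2  # hypothetical weight trading utility against the MI penalty

passive_out = torch.randn(32, 64)
labels = torch.randint(0, 10, (32,))
z, kl = bottleneck(passive_out)
loss = F.cross_entropy(top_model(z), labels) + beta * kl
opt.zero_grad(); loss.backward(); opt.step()
```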
Related papers
- Vertical Federated Learning for Effectiveness, Security, Applicability: A Survey [67.48187503803847]
Vertical Federated Learning (VFL) is a privacy-preserving distributed learning paradigm.
Recent research has shown promising results addressing various challenges in VFL.
This survey offers a systematic overview of recent developments.
arXiv Detail & Related papers (2024-05-25T16:05:06Z)
- KDk: A Defense Mechanism Against Label Inference Attacks in Vertical Federated Learning [2.765106384328772]
In a Vertical Federated Learning (VFL) scenario, the labels of the samples are kept private from all parties except the aggregating server, which is the label owner.
Recent works discovered that by exploiting the gradient information returned by the server to the bottom models, an adversary can infer the private labels.
We propose a novel framework called KDk, that combines Knowledge Distillation and k-anonymity to provide a defense mechanism.
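As a rough illustration of how knowledge distillation and k-anonymity might combine here: soften the teacher's outputs, then average the top-k class probabilities so the true label hides among k candidates before the soft labels drive training. The function below is a sketch under these assumptions; `k_anonymous_soft_labels`, the temperature, and k are hypothetical, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def k_anonymous_soft_labels(teacher_logits, k: int, T: float = 4.0):
    """Soften teacher logits at temperature T, then make the top-k classes
    indistinguishable by assigning each of them the mean of their probability
    mass (a k-anonymity-style smoothing against gradient-based label inference)."""
    probs = F.softmax(teacher_logits / T, dim=1)
    topk_vals, topk_idx = probs.topk(k, dim=1)
    mean_mass = topk_vals.mean(dim=1, keepdim=True).expand_as(topk_vals).contiguous()
    smoothed = probs.clone().scatter_(1, topk_idx, mean_mass)
    return smoothed  # total mass is preserved, so no renormalization is needed

teacher_logits = torch.randn(8, 10)
targets = k_anonymous_soft_labels(teacher_logits, k=3)
student_logits = torch.randn(8, 10, requires_grad=True)
loss = F.kl_div(F.log_softmax(student_logits, dim=1), targets, reduction="batchmean")
loss.backward()
```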
arXiv Detail & Related papers (2024-04-18T17:51:02Z)
- Task-Agnostic Privacy-Preserving Representation Learning for Federated Learning Against Attribute Inference Attacks [21.83308540799076]
Federated learning (FL) has been widely studied recently due to its property to collaboratively train data from different devices without sharing the raw data.
Recent studies show that an adversary may still be able to infer private information about devices' data, e.g., sensitive attributes such as income, race, and sexual orientation.
We develop a task-agnostic privacy-preserving representation learning method for FL (TAPPFL) against attribute inference attacks.
arXiv Detail & Related papers (2023-12-12T05:17:34Z)
- Practical and General Backdoor Attacks against Vertical Federated Learning [3.587415228422117]
Federated learning (FL) aims to facilitate data collaboration across multiple organizations without exposing data privacy.
BadVFL is a novel and practical approach to inject backdoor triggers into victim models without label information.
BadVFL achieves over 93% attack success rate with only 1% poisoning rate.
arXiv Detail & Related papers (2023-06-19T07:30:01Z)
- BadVFL: Backdoor Attacks in Vertical Federated Learning [22.71527711053385]
Federated learning (FL) enables multiple parties to collaboratively train a machine learning model without sharing their data.
In this paper, we focus on robustness in VFL, in particular, on backdoor attacks.
We present a first-of-its-kind clean-label backdoor attack in VFL, which consists of two phases: a label inference phase and a backdoor phase.
arXiv Detail & Related papers (2023-04-18T09:22:32Z)
- FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning [102.92349569788028]
We propose a fair vertical federated learning framework (FairVFL) to improve the fairness of VFL models.
The core idea of FairVFL is to learn unified and fair representations of samples based on the decentralized feature fields in a privacy-preserving way.
For protecting user privacy, we propose a contrastive adversarial learning method to remove private information from the unified representation on the server.
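A minimal sketch of adversarially removing private information from a shared representation, as a simplified stand-in for the paper's contrastive adversarial method; the module names and the 0.1 weight are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Linear(32, 16)   # produces the unified representation
task_head = nn.Linear(16, 2)  # main task head
adversary = nn.Linear(16, 2)  # tries to recover a private attribute

opt_main = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters()))
opt_adv = torch.optim.Adam(adversary.parameters())

x = torch.randn(16, 32)
y_task = torch.randint(0, 2, (16,))
y_priv = torch.randint(0, 2, (16,))

# Step 1: train the adversary to predict the private attribute
# from a detached copy of the representation.
adv_loss = F.cross_entropy(adversary(encoder(x).detach()), y_priv)
opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

# Step 2: train encoder + task head while *maximizing* the adversary's
# loss, which pushes private information out of the representation.
rep = encoder(x)
loss = F.cross_entropy(task_head(rep), y_task) \
       - 0.1 * F.cross_entropy(adversary(rep), y_priv)
opt_main.zero_grad(); loss.backward(); opt_main.step()
```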
arXiv Detail & Related papers (2022-06-07T11:43:32Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
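For context, the standard gradient-inversion baseline is a gradient-matching optimization: given the gradients a client shared, optimize a dummy input until its gradients match the observed ones. The sketch below assumes the label is known and the model is tiny; it illustrates the attack family, not the paper's specific baseline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(16, 4)
x_true, y_true = torch.randn(1, 16), torch.tensor([2])
# Gradients the attacker observes (e.g., a client's shared update).
true_grads = torch.autograd.grad(
    F.cross_entropy(model(x_true), y_true), model.parameters())

x_dummy = torch.randn(1, 16, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for _ in range(300):
    dummy_grads = torch.autograd.grad(
        F.cross_entropy(model(x_dummy), y_true),
        model.parameters(), create_graph=True)
    # Move x_dummy so its gradients match the observed ones.
    match = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    opt.zero_grad(); match.backward(); opt.step()
```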
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings [56.93025161787725]
Federated learning (FL) is a distributed machine learning paradigm that coordinates clients to train a model collaboratively without sharing local data.
We propose an attribute inference attack framework that infers sensitive attribute information of the clients from shared gradients or model parameters.
We show that the attribute inference attack is achievable for SER systems trained using FL.
arXiv Detail & Related papers (2021-12-26T16:50:42Z)
- Defending Label Inference and Backdoor Attacks in Vertical Federated Learning [11.319694528089773]
In collaborative learning, curious parties may behave honestly while attempting to infer other parties' private data through inference attacks.
In this paper, we show that private labels can be reconstructed from per-sample gradients.
We introduce a novel defense technique termed confusional autoencoder (CoAE), based on autoencoders and entropy regularization.
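A plausible minimal sketch of the CoAE idea as summarized: train an autoencoder so that encoded labels are high-entropy ("confusing") yet still decodable back to the true label. The architecture and the 0.5 weight below are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes = 10
encoder = nn.Sequential(nn.Linear(num_classes, 32), nn.ReLU(), nn.Linear(32, num_classes))
decoder = nn.Sequential(nn.Linear(num_classes, 32), nn.ReLU(), nn.Linear(32, num_classes))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))

for _ in range(200):
    y = F.one_hot(torch.randint(0, num_classes, (64,)), num_classes).float()
    soft = F.softmax(encoder(y), dim=1)   # "confused" labels exposed during training
    recon = decoder(soft)                 # only the label owner can decode them
    entropy = -(soft * soft.clamp_min(1e-8).log()).sum(dim=1).mean()
    # Recover the true label while keeping the confused label high-entropy.
    loss = F.cross_entropy(recon, y.argmax(dim=1)) - 0.5 * entropy
    opt.zero_grad(); loss.backward(); opt.step()
```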
arXiv Detail & Related papers (2021-12-10T09:32:09Z)
- Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with secure aggregation protocols but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and robustness defenses; 3) inference attacks and privacy defenses, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)