Defending against Reconstruction Attack in Vertical Federated Learning
- URL: http://arxiv.org/abs/2107.09898v1
- Date: Wed, 21 Jul 2021 06:32:46 GMT
- Title: Defending against Reconstruction Attack in Vertical Federated Learning
- Authors: Jiankai Sun and Yuanshun Yao and Weihao Gao and Junyuan Xie and Chong Wang
- Abstract summary: Recently, researchers have studied input leakage problems in Federated Learning (FL), where a malicious party can reconstruct sensitive training inputs provided by users from shared gradients.
We show our framework is effective in protecting input privacy while retaining model utility.
- Score: 25.182062654794812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, researchers have studied input leakage problems in Federated
Learning (FL), where a malicious party can reconstruct sensitive training inputs
provided by users from shared gradients. This raises concerns about FL, since input
leakage contradicts the privacy-preserving intention of using FL. Despite a
relatively rich literature on attacks and defenses against input reconstruction in
Horizontal FL, input leakage and protection in Vertical FL have only recently
started to draw researchers' attention. In this paper, we study how to defend
against input leakage attacks in Vertical FL. We design an adversarial
training-based framework that contains three modules: adversarial reconstruction,
noise regularization, and distance correlation minimization. These modules can be
employed not only individually but also together, since they are independent of
each other. Through extensive experiments on a large-scale industrial online
advertising dataset, we show our framework is effective in protecting input
privacy while retaining the model utility.
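As a rough illustration of the third module, the sketch below computes a differentiable sample distance correlation between a batch of raw inputs and the embeddings a passive party would share, and adds it to the task loss as a privacy regularizer. This is a minimal sketch assuming PyTorch; the function names, the optional noise term, and the weight `lam` are illustrative assumptions, not the authors' implementation, and the task head and labels are collapsed into one process for brevity even though in Vertical FL they would live with the active party.

```python
# Hypothetical sketch of a distance-correlation-minimization regularizer for VFL.
# Not the paper's code; names and hyperparameters are assumptions for illustration.
import torch


def distance_correlation(x: torch.Tensor, z: torch.Tensor, eps: float = 1e-9) -> torch.Tensor:
    """Sample distance correlation between a batch of inputs x (n, p) and embeddings z (n, q)."""
    a = torch.cdist(x, x)  # pairwise Euclidean distances among inputs
    b = torch.cdist(z, z)  # pairwise Euclidean distances among embeddings

    # Double-center each distance matrix.
    A = a - a.mean(dim=0, keepdim=True) - a.mean(dim=1, keepdim=True) + a.mean()
    B = b - b.mean(dim=0, keepdim=True) - b.mean(dim=1, keepdim=True) + b.mean()

    dcov2 = (A * B).mean()   # squared distance covariance
    dvar_x = (A * A).mean()  # squared distance variance of x
    dvar_z = (B * B).mean()  # squared distance variance of z
    return torch.sqrt(dcov2.clamp(min=0) / (torch.sqrt(dvar_x * dvar_z) + eps))


def training_step(encoder, task_head, x, y, criterion, lam=0.1, sigma=0.0):
    """One joint step combining the task loss with the privacy regularizer (assumed form)."""
    z = encoder(x)                          # embedding the passive party would forward
    if sigma > 0:                           # optional noise regularization (assumed form)
        z = z + sigma * torch.randn_like(z)
    task_loss = criterion(task_head(z), y)  # in real VFL this part runs at the active party
    privacy_loss = distance_correlation(x.flatten(1), z.flatten(1))
    return task_loss + lam * privacy_loss   # minimize utility and privacy terms jointly
```

Because distance correlation vanishes only when two variables are statistically independent, pushing it toward zero limits how much of the raw input remains recoverable from the shared embeddings, at some cost to task accuracy governed by the regularization weight.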
Related papers
- Just a Simple Transformation is Enough for Data Protection in Vertical Federated Learning [83.90283731845867]
We consider feature reconstruction attacks, a common risk aimed at compromising input data.
We show that Federated-based models are resistant to state-of-the-art feature reconstruction attacks.
arXiv Detail & Related papers (2024-12-16T12:02:12Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Feature Reconstruction Attacks and Countermeasures of DNN training in Vertical Federated Learning [39.85691324350159]
Federated learning (FL) has increasingly been deployed, in its vertical form (VFL), among organizations to facilitate secure collaborative training over siloed data.
Despite the increasing adoption of VFL, it remains largely unknown if and how the active party can extract feature data from the passive party.
This paper makes the first attempt to study the feature security problem of DNN training in VFL.
arXiv Detail & Related papers (2022-10-13T06:23:47Z) - RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients have raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with the secure aggregation protocol but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z) - Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective [47.23145404191034]
Federated learning (FL) is a popular distributed learning framework that can reduce privacy risks by not explicitly sharing private data.
Recent works demonstrated that sharing model updates makes FL vulnerable to inference attacks.
Our key observation is that data representation leakage from gradients is the essential cause of privacy leakage in FL.
arXiv Detail & Related papers (2020-12-08T20:42:12Z) - Defending against Backdoors in Federated Learning with Robust Learning Rate [25.74681620689152]
Federated learning (FL) allows a set of agents to collaboratively train a model without sharing their potentially sensitive data.
In a backdoor attack, an adversary tries to embed backdoor functionality into the model during training that can later be activated to cause a desired misclassification.
We propose a lightweight defense that requires minimal change to the FL protocol.
arXiv Detail & Related papers (2020-07-07T23:38:35Z)