Just a Simple Transformation is Enough for Data Protection in Vertical Federated Learning
- URL: http://arxiv.org/abs/2412.11689v1
- Date: Mon, 16 Dec 2024 12:02:12 GMT
- Title: Just a Simple Transformation is Enough for Data Protection in Vertical Federated Learning
- Authors: Andrei Semenov, Philip Zmushko, Alexander Pichugin, Aleksandr Beznosikov
- Abstract summary: We consider feature reconstruction attacks, a common risk targeting input data compromise.
We show that MLP-based models are resistant to state-of-the-art feature reconstruction attacks.
- Score: 83.90283731845867
- License:
- Abstract: Vertical Federated Learning (VFL) aims to enable collaborative training of deep learning models while maintaining privacy protection. However, the VFL procedure still has components that are vulnerable to attacks by malicious parties. In our work, we consider feature reconstruction attacks, a common risk targeting input data compromise. We theoretically claim that feature reconstruction attacks cannot succeed without knowledge of the prior distribution on data. Consequently, we demonstrate that even simple model architecture transformations can significantly impact the protection of input data during VFL. Confirming these findings with experimental results, we show that MLP-based models are resistant to state-of-the-art feature reconstruction attacks.
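The abstract describes the defense only as a simple model architecture transformation. The sketch below is a minimal, hedged PyTorch-style illustration of one plausible reading: the passive party replaces a convolutional bottom model with a plain MLP over flattened features before sending embeddings to the active party. Class names, layer sizes, and the specific transformation are assumptions made for this example, not code from the paper.

```python
# Illustrative sketch only (not the authors' released code): a VFL passive party
# swaps a convolutional bottom model for a plain MLP over flattened inputs.
# Names such as ConvBottomModel / MLPBottomModel and all dimensions are assumed.
import torch
import torch.nn as nn


class ConvBottomModel(nn.Module):
    """Typical CNN bottom model whose embeddings retain spatial structure."""

    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, embed_dim),
        )

    def forward(self, x):
        return self.features(x)


class MLPBottomModel(nn.Module):
    """The assumed "simple transformation": flatten, then fully connected layers only."""

    def __init__(self, in_features: int, embed_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_features, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x):
        return self.features(x)


if __name__ == "__main__":
    x = torch.randn(8, 1, 28, 28)           # the passive party's vertical feature block
    cnn_embed = ConvBottomModel()(x)         # baseline architecture
    mlp_embed = MLPBottomModel(28 * 28)(x)   # transformed architecture
    # Only these embeddings leave the passive party; the paper's claim is that
    # reconstructing the raw inputs from the MLP variant is substantially harder.
    print(cnn_embed.shape, mlp_embed.shape)
```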
Related papers
- Understanding Data Reconstruction Leakage in Federated Learning from a Theoretical Perspective [33.68646515160024]
Federated learning (FL) is an emerging collaborative learning paradigm that aims to protect data privacy.
Recent works show FL algorithms are vulnerable to the serious data reconstruction attacks.
We propose a theoretical framework to understand data reconstruction attacks to FL.
arXiv Detail & Related papers (2024-08-22T04:20:48Z)
- UIFV: Data Reconstruction Attack in Vertical Federated Learning [5.404398887781436]
Vertical Federated Learning (VFL) facilitates collaborative machine learning without the need for participants to share raw private data.
Recent studies have revealed privacy risks where adversaries might reconstruct sensitive features through data leakage during the learning process.
Our work exposes severe privacy vulnerabilities within VFL systems that pose real threats to practical VFL applications.
arXiv Detail & Related papers (2024-06-18T13:18:52Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method achieves improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Feature Reconstruction Attacks and Countermeasures of DNN training in Vertical Federated Learning [39.85691324350159]
Federated learning (FL) has increasingly been deployed, in its vertical form, among organizations to facilitate secure collaborative training over siloed data.
Despite the increasing adoption of VFL, it remains largely unknown if and how the active party can extract feature data from the passive party.
This paper makes the first attempt to study the feature security problem of DNN training in VFL.
arXiv Detail & Related papers (2022-10-13T06:23:47Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification [65.33308059737506]
Federated learning (FL) has rapidly risen in popularity due to its promise of privacy and efficiency.
Previous works have exposed privacy vulnerabilities in the FL pipeline by recovering user data from gradient updates.
We introduce a new strategy that dramatically elevates existing attacks to operate on batches of arbitrarily large size.
arXiv Detail & Related papers (2022-02-01T17:26:11Z)
- Defending Label Inference and Backdoor Attacks in Vertical Federated Learning [11.319694528089773]
In collaborative learning, curious parties might be honest but attempt to infer other parties' private data through inference attacks.
In this paper, we show that private labels can be reconstructed from per-sample gradients (an illustrative sketch of this leakage mechanism appears after the related-papers list).
We introduce a novel technique termed confusional autoencoder (CoAE) based on autoencoder and entropy regularization.
arXiv Detail & Related papers (2021-12-10T09:32:09Z)
- Defending against Reconstruction Attack in Vertical Federated Learning [25.182062654794812]
Recently, researchers have studied input leakage problems in Federated Learning (FL), where a malicious party can reconstruct sensitive training inputs provided by users from shared gradients.
We show our framework is effective in protecting input privacy while retaining the model utility.
arXiv Detail & Related papers (2021-07-21T06:32:46Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning but is often overlooked through the lens of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdooring attacks in federated learning through comprehensive experiments using synthetic datasets and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z)
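The entry above on defending label inference notes that private labels can be reconstructed from per-sample gradients. The toy sketch below illustrates the standard mechanism behind such leakage under an assumed softmax cross-entropy top layer: the per-sample gradient with respect to the logits equals softmax(z) - onehot(y), so its single negative entry indexes the true label. The setup and variable names are illustrative and not taken from any of the listed papers.

```python
# Toy, assumption-based illustration of label leakage from per-sample gradients
# (not code from the listed papers). With softmax cross-entropy, dL/dlogits =
# softmax(z) - onehot(y); its single negative entry reveals the private label.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
num_classes, dim = 5, 16
w = torch.randn(num_classes, dim, requires_grad=True)  # top-layer weights
x = torch.randn(dim)                                   # one sample's embedding
y = torch.tensor(3)                                    # private label

logits = w @ x
loss = F.cross_entropy(logits.unsqueeze(0), y.unsqueeze(0))
loss.backward()

# dL/dlogits has exactly one negative entry, at the true class; the per-sample
# weight gradient projected onto the embedding exposes the same sign pattern.
grad_logits = torch.softmax(logits.detach(), dim=0) - F.one_hot(y, num_classes)
recovered_from_logits = int(torch.argmin(grad_logits))
recovered_from_weights = int(torch.argmin(w.grad @ x))
print(recovered_from_logits, recovered_from_weights, int(y))  # 3 3 3
```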
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.