A Framework for Evaluating Privacy-Utility Trade-off in Vertical Federated Learning
- URL: http://arxiv.org/abs/2209.03885v4
- Date: Sun, 4 Aug 2024 23:45:40 GMT
- Title: A Framework for Evaluating Privacy-Utility Trade-off in Vertical Federated Learning
- Authors: Yan Kang, Jiahuan Luo, Yuanqin He, Xiaojin Zhang, Lixin Fan, Qiang Yang
- Abstract summary: Federated learning (FL) has emerged as a practical solution to tackle data silo issues without compromising user privacy.
VFL matches enterprises' demand to leverage more valuable features to build better machine learning models.
Current works in VFL concentrate on developing a specific protection or attack mechanism for a particular VFL algorithm.
- Score: 18.046256152691743
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) has emerged as a practical solution to tackle data silo issues without compromising user privacy. One of its variants, vertical federated learning (VFL), has recently gained increasing attention as VFL matches enterprises' demand to leverage more valuable features to build better machine learning models while preserving user privacy. Current works in VFL concentrate on developing a specific protection or attack mechanism for a particular VFL algorithm. In this work, we propose an evaluation framework that formulates the privacy-utility evaluation problem. We then use this framework as a guide to comprehensively evaluate a broad range of protection mechanisms against most of the state-of-the-art privacy attacks for three widely deployed VFL algorithms. These evaluations may help FL practitioners select appropriate protection mechanisms given specific requirements. Our evaluation results demonstrate that: the model inversion and most of the label inference attacks can be thwarted by existing protection mechanisms; the model completion (MC) attack is difficult to prevent, which calls for more advanced MC-targeted protection mechanisms. Based on our evaluation results, we offer concrete advice on improving the privacy-preserving capability of VFL systems. The code is available at https://github.com/yankang18/Attack-Defense-VFL
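As a concrete illustration of what such an evaluation entails, the following is a minimal, self-contained sketch rather than the authors' implementation (their code is in the linked repository): it sweeps the strength of a noise-based protection applied to the passive party's embeddings and reports a utility score alongside a model-inversion leakage proxy. The toy model, the linear inversion attack, and all names are illustrative assumptions.

```python
# Minimal sketch of a privacy-utility evaluation loop in the spirit of the
# framework; illustrative only, not the code from the authors' repository.
import numpy as np

rng = np.random.default_rng(0)

# Toy VFL setup: the passive party holds private features X and sends
# embeddings H = X @ W to the active party.
n, d, k = 1000, 10, 5
X = rng.normal(size=(n, d))                      # passive party's private features
W = rng.normal(size=(d, k))                      # passive party's embedding layer
true_w = rng.normal(size=k)
y = (X @ W @ true_w + 0.1 * rng.normal(size=n) > 0).astype(float)

def utility(H, y):
    """Accuracy of a least-squares classifier trained on (noisy) embeddings."""
    w, *_ = np.linalg.lstsq(H, 2 * y - 1, rcond=None)
    return ((H @ w > 0) == y).mean()

def inversion_leakage(H, X):
    """Proxy for a model-inversion attack: R^2 of a linear map from
    embeddings back to private features (higher = more leakage)."""
    A, *_ = np.linalg.lstsq(H, X, rcond=None)
    X_hat = H @ A
    return 1 - ((X - X_hat) ** 2).sum() / ((X - X.mean(0)) ** 2).sum()

H_clean = X @ W
for sigma in [0.0, 0.5, 1.0, 2.0, 4.0]:          # protection strength to sweep
    H = H_clean + sigma * rng.normal(size=H_clean.shape)  # noise-based protection
    print(f"sigma={sigma:4.1f}  utility={utility(H, y):.3f}  "
          f"leakage={inversion_leakage(H, X):.3f}")
```

Sweeping sigma traces out a privacy-utility curve: utility falls as leakage falls, which is exactly the trade-off the framework formalizes.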
Related papers
- UIFV: Data Reconstruction Attack in Vertical Federated Learning [5.404398887781436]
Vertical Federated Learning (VFL) facilitates collaborative machine learning without the need for participants to share raw private data.
Recent studies have revealed privacy risks where adversaries might reconstruct sensitive features through data leakage during the learning process.
Our work exposes severe privacy vulnerabilities within VFL systems that pose real threats to practical VFL applications.
arXiv Detail & Related papers (2024-06-18T13:18:52Z)
- Universal Adversarial Backdoor Attacks to Fool Vertical Federated Learning in Cloud-Edge Collaboration [13.067285306737675]
This paper investigates the vulnerability of vertical federated learning (VFL) in the context of binary classification tasks.
We introduce a universal adversarial backdoor (UAB) attack to poison the predictions of VFL.
Our approach surpasses existing state-of-the-art methods, achieving up to 100% backdoor task performance.
arXiv Detail & Related papers (2023-04-22T15:31:15Z)
- Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks [53.81129518924231]
We conduct the first study of backdoor attacks in the pFL framework.
We show that pFL methods with partial model-sharing can significantly boost robustness against backdoor attacks.
We propose a lightweight defense method, Simple-Tuning, which empirically improves defense performance against backdoor attacks.
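A minimal sketch of the partial model-sharing idea this finding rests on, assuming the common shared-backbone/private-head split; names are illustrative, not the paper's implementation.

```python
# Partial model-sharing in personalized FL: only the backbone is averaged,
# each client's private head stays local, limiting a backdoor's reach.
import numpy as np

def federated_round(clients, shared_keys=("backbone",)):
    """Average only the shared parameters across clients."""
    avg = {k: np.mean([c[k] for c in clients], axis=0) for k in shared_keys}
    for c in clients:
        for k in shared_keys:
            c[k] = avg[k].copy()   # broadcast shared part; c["head"] never leaves
    return clients

rng = np.random.default_rng(0)
clients = [{"backbone": rng.normal(size=(4, 4)), "head": rng.normal(size=4)}
           for _ in range(3)]
clients = federated_round(clients)
```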
arXiv Detail & Related papers (2023-02-03T11:58:14Z)
- Vertical Federated Learning: Concepts, Advances and Challenges [18.38260017835129]
We review the concepts and algorithms of Vertical Federated Learning (VFL).
We provide an exhaustive categorization for VFL settings and privacy-preserving protocols.
We propose a unified framework, termed VFLow, which considers the VFL problem under communication, computation, privacy, as well as effectiveness and fairness constraints.
arXiv Detail & Related papers (2022-11-23T10:00:06Z)
- Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks [68.20436971825941]
Federated learning (FL) provides an efficient paradigm to jointly train a global model leveraging data from distributed users.
Several studies have shown that FL is vulnerable to poisoning attacks.
To protect the privacy of local users, FL is usually trained in a differentially private way.
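"Trained in a differentially private way" typically means a clip-then-noise (Gaussian mechanism) update; below is a minimal sketch under that standard assumption, with illustrative parameters rather than this paper's exact setup.

```python
# Standard DP-SGD-style update: clip to bound each client's sensitivity,
# then add Gaussian noise calibrated to the clipping bound. Illustrative.
import numpy as np

def dp_update(grad, clip_norm=1.0, sigma=0.8, rng=np.random.default_rng(0)):
    norm = np.linalg.norm(grad)
    grad = grad * min(1.0, clip_norm / max(norm, 1e-12))  # bound influence
    return grad + rng.normal(scale=sigma * clip_norm, size=grad.shape)
```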
arXiv Detail & Related papers (2022-09-08T21:01:42Z)
- Trading Off Privacy, Utility and Efficiency in Federated Learning [22.53326117450263]
We formulate and quantify the trade-offs between privacy leakage, utility loss, and efficiency reduction.
We analyze the lower bounds on privacy leakage, utility loss, and efficiency reduction for several widely adopted protection mechanisms.
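Schematically, such a trade-off can be scalarized over the choice of protection mechanism; the form below is a hedged paraphrase with illustrative symbols ($\epsilon_p$: privacy leakage, $\epsilon_u$: utility loss, $\epsilon_e$: efficiency reduction), not the paper's exact notation. Its lower-bound results imply the three terms cannot all be made arbitrarily small at once.

```latex
% Weighted scalarization over protection mechanisms M (illustrative paraphrase;
% see the paper for the exact formulation and bounds).
\min_{M \in \mathcal{M}} \;
  \lambda_p\,\epsilon_p(M) + \lambda_u\,\epsilon_u(M) + \lambda_e\,\epsilon_e(M),
\qquad \lambda_p,\ \lambda_u,\ \lambda_e \ge 0
```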
arXiv Detail & Related papers (2022-09-01T05:20:04Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
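A minimal sketch of two-party VLR with mini-batch gradient descent, assuming plaintext exchange of partial logits and residuals for clarity; such intermediate messages are precisely what a privacy analysis like this scrutinizes. Illustrative, not the analyzed frameworks.

```python
# Two-party vertical logistic regression: each party holds a disjoint feature
# block; party A holds the labels, combines partial logits, and returns the
# residual to party B. Illustrative sketch.
import numpy as np

rng = np.random.default_rng(0)
n = 500
Xa, Xb = rng.normal(size=(n, 3)), rng.normal(size=(n, 4))  # A's / B's features
y = (rng.normal(size=n) > 0).astype(float)                 # labels held by A
wa, wb = np.zeros(3), np.zeros(4)

for step in range(200):
    idx = rng.choice(n, size=64, replace=False)            # mini-batch
    za, zb = Xa[idx] @ wa, Xb[idx] @ wb                    # partial logits
    p = 1 / (1 + np.exp(-(za + zb)))                       # A combines them
    err = p - y[idx]                                       # residual, sent to B
    wa -= 0.1 * Xa[idx].T @ err / len(idx)
    wb -= 0.1 * Xb[idx].T @ err / len(idx)
```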
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Desirable Companion for Vertical Federated Learning: New Zeroth-Order Gradient Based Algorithm [140.25480610981504]
A complete list of metrics to evaluate VFL algorithms should include model applicability, privacy, communication, and computation efficiency.
We propose a novel VFL framework with black-box scalability, which is inseparably scalable.
arXiv Detail & Related papers (2022-03-19T13:55:47Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
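A minimal sketch of the pairwise-masking construction behind secure aggregation (the Bonawitz-style protocol that "secure aggregation" typically refers to); RoFL builds its robustness checks on top of such a protocol, so treat this as background rather than RoFL itself.

```python
# Pairwise-masking secure aggregation: each client pair shares a random mask
# that one adds and the other subtracts, so masks cancel in the sum while each
# individual masked update looks random to the server. Illustrative sketch.
import numpy as np

rng = np.random.default_rng(0)
updates = [rng.normal(size=5) for _ in range(3)]   # private client updates

masked = [u.copy() for u in updates]
for i in range(len(updates)):
    for j in range(i + 1, len(updates)):
        m = rng.normal(size=5)    # stand-in for a mask from a shared key
        masked[i] += m
        masked[j] -= m

server_sum = sum(masked)          # equals sum(updates); no single update revealed
assert np.allclose(server_sum, sum(updates))
```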
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training-time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with secure aggregation protocols but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z)