Efficient Vertical Federated Learning with Secure Aggregation
- URL: http://arxiv.org/abs/2305.11236v1
- Date: Thu, 18 May 2023 18:08:36 GMT
- Title: Efficient Vertical Federated Learning with Secure Aggregation
- Authors: Xinchi Qiu, Heng Pan, Wanru Zhao, Chenyang Ma, Pedro Porto Buarque de
Gusmão, Nicholas D. Lane
- Abstract summary: We present a novel design for training vertical FL securely and efficiently using state-of-the-art security modules for secure aggregation.
We demonstrate empirically that our method does not impact training performance while achieving a 9.1e2 to 3.8e4x speedup over homomorphic encryption (HE).
- Score: 10.295508659999783
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The majority of work in privacy-preserving federated learning (FL) has
focused on horizontally partitioned datasets, where clients share the same set
of features and can train complete models independently. However, in many
interesting problems, such as financial fraud detection and disease detection,
individual data points are scattered across different clients/organizations in
vertical federated learning. Solutions for this type of FL require the exchange
of gradients between participants and rarely consider privacy and security
concerns, posing a potential risk of privacy leakage. In this work, we present
a novel design for training vertical FL securely and efficiently using
state-of-the-art security modules for secure aggregation. We demonstrate
empirically that our method does not impact training performance whilst
obtaining a 9.1e2 to 3.8e4x speedup compared to homomorphic encryption (HE).
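The secure-aggregation primitive this line of work builds on can be sketched with pairwise additive masking: each pair of clients agrees on a shared random mask, the lower-indexed client adds it and the higher-indexed one subtracts it, so all masks cancel when the server sums the updates and only the aggregate is revealed. A minimal sketch (the shared PRNG seeds here stand in for a real key-exchange step, and all function names are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_masks(n_clients, dim, seed=42):
    """Derive a random mask for each unordered client pair (i, j), i < j.

    In a real protocol each pair derives the seed via a Diffie-Hellman
    key exchange; a deterministic shared seed stands in for that here.
    """
    masks = {}
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            pair_rng = np.random.default_rng(seed + i * n_clients + j)
            masks[(i, j)] = pair_rng.normal(size=dim)
    return masks

def masked_update(client_id, update, masks, n_clients):
    """Client adds masks shared with higher-indexed peers, subtracts the rest."""
    out = update.copy()
    for j in range(n_clients):
        if j == client_id:
            continue
        pair = (min(client_id, j), max(client_id, j))
        m = masks[pair]
        out += m if client_id < j else -m
    return out

n, d = 3, 4
updates = [rng.normal(size=d) for _ in range(n)]
masks = pairwise_masks(n, d)
masked = [masked_update(i, updates[i], masks, n) for i in range(n)]
# Masks cancel in the sum: the server sees only the aggregate.
assert np.allclose(sum(masked), sum(updates))
```

Each individual masked update looks random to the server, which is what allows the gradient exchange in vertical FL to proceed without exposing any single participant's contribution.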
Related papers
- TPFL: A Trustworthy Personalized Federated Learning Framework via Subjective Logic [13.079535924498977]
Federated learning (FL) enables collaborative model training across distributed clients while preserving data privacy.
Most FL approaches focusing solely on privacy protection fall short in scenarios where trustworthiness is crucial.
We introduce TPFL, a Trustworthy Personalized Federated Learning framework designed for classification tasks via subjective logic.
arXiv Detail & Related papers (2024-10-16T07:33:29Z) - On Joint Noise Scaling in Differentially Private Federated Learning with Multiple Local Steps [0.5439020425818999]
Federated learning is a distributed learning setting where the main aim is to train machine learning models without having to share raw data.
We show how a simple new analysis allows the parties to perform multiple local optimisation steps while still benefiting from secure aggregation.
arXiv Detail & Related papers (2024-07-27T15:54:58Z) - Secure Vertical Federated Learning Under Unreliable Connectivity [22.03946356498099]
We present vFedSec, the first dropout-tolerant VFL protocol.
It achieves secure and efficient model training by using an innovative Secure Layer alongside an embedding-padding technique.
arXiv Detail & Related papers (2023-05-26T10:17:36Z) - Combating Exacerbated Heterogeneity for Robust Models in Federated
Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated
Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method achieves improvements with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - FedFM: Anchor-based Feature Matching for Data Heterogeneity in Federated
Learning [91.74206675452888]
We propose a novel method FedFM, which guides each client's features to match shared category-wise anchors.
To achieve higher efficiency and flexibility, we propose a FedFM variant, called FedFM-Lite, in which clients communicate with the server fewer times and at lower communication bandwidth cost.
arXiv Detail & Related papers (2022-10-14T08:11:34Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - Dubhe: Towards Data Unbiasedness with Homomorphic Encryption in
Federated Learning Client Selection [16.975086164684882]
Federated learning (FL) is a distributed machine learning paradigm that allows clients to collaboratively train a model over their own local data.
We mathematically demonstrate the cause of performance degradation in FL and examine the performance of FL over various datasets.
We propose a pluggable system-level client selection method named Dubhe, which allows clients to proactively participate in training, preserving their privacy with the assistance of HE.
arXiv Detail & Related papers (2021-09-08T13:00:46Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - Understanding Clipping for Federated Learning: Convergence and
Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
arXiv Detail & Related papers (2021-06-25T14:47:19Z) - Local and Central Differential Privacy for Robustness and Privacy in
Federated Learning [13.115388879531967]
Federated Learning (FL) allows multiple participants to train machine learning models collaboratively by keeping their datasets local while only exchanging model updates.
This paper investigates whether and to what extent one can use differential privacy (DP) to protect both privacy and robustness in FL.
arXiv Detail & Related papers (2020-09-08T07:37:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.