Secure Vertical Federated Learning Under Unreliable Connectivity
- URL: http://arxiv.org/abs/2305.16794v3
- Date: Sat, 17 Feb 2024 19:56:06 GMT
- Title: Secure Vertical Federated Learning Under Unreliable Connectivity
- Authors: Xinchi Qiu, Heng Pan, Wanru Zhao, Yan Gao, Pedro P.B. Gusmao, William
F. Shen, Chenyang Ma, Nicholas D. Lane
- Abstract summary: We present vFedSec, the first dropout-tolerant VFL protocol.
It achieves secure and efficient model training by using an innovative Secure Layer alongside an embedding-padding technique.
- Score: 22.03946356498099
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most work in privacy-preserving federated learning (FL) has focused on
horizontally partitioned datasets where clients hold the same features and
train complete client-level models independently. However, individual data
points are often scattered across different institutions, known as clients, in
vertical FL (VFL) settings. Addressing this category of FL necessitates the
exchange of intermediate outputs and gradients among participants, resulting in
potential privacy leakage risks and slow convergence rates. Additionally, in
many real-world scenarios, VFL training also faces the acute issue of client
stragglers and drop-outs, a serious challenge that can significantly hinder the
training process but has been largely overlooked in existing studies. In this
work, we present vFedSec, the first dropout-tolerant VFL protocol, which
supports the most general vertical framework. It achieves secure and
efficient model training by using an innovative Secure Layer alongside an
embedding-padding technique. We provide theoretical proof that our design
attains enhanced security while maintaining training performance. Empirical
results from extensive experiments also demonstrate vFedSec is robust to client
dropout and provides secure training with negligible computation and
communication overhead. Compared to widely adopted homomorphic encryption (HE)
methods, our approach achieves a speedup of more than 690x and reduces
communication costs by more than 9.6x.
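The abstract does not specify the internals of the Secure Layer or the embedding-padding technique. As a toy illustration of the general idea they build on, the sketch below combines pairwise additive masking (so the server only learns the sum of client embeddings) with zero-padding for clients that drop out before sending. All names (`pairwise_mask`, `masked_embedding`, `aggregate`) are hypothetical and not from the paper; a real protocol would use cryptographic PRGs and key agreement rather than a seeded `random.Random`.

```python
import random

EMB_DIM = 4  # embedding width each client sends (illustrative)

def pairwise_mask(seed, dim):
    # Toy PRG: both parties derive the same mask from a shared seed.
    # A real protocol would derive this from an agreed cryptographic key.
    rng = random.Random(seed)
    return [rng.uniform(-1, 1) for _ in range(dim)]

def masked_embedding(client_id, embedding, alive_ids):
    # Each surviving client adds +mask toward higher-id peers and -mask
    # toward lower-id peers, so all masks cancel in the server-side sum.
    out = list(embedding)
    for peer in alive_ids:
        if peer == client_id:
            continue
        m = pairwise_mask((min(client_id, peer), max(client_id, peer)), len(embedding))
        sign = 1.0 if peer > client_id else -1.0
        out = [o + sign * mi for o, mi in zip(out, m)]
    return out

def aggregate(masked, dim, expected_ids):
    # Server sums the masked embeddings; dropped clients are zero-padded
    # so the aggregate keeps a fixed shape regardless of who survived.
    total = [0.0] * dim
    for cid in expected_ids:
        vec = masked.get(cid, [0.0] * dim)  # zero-pad a dropped client
        total = [t + v for t, v in zip(total, vec)]
    return total

# --- demo: client 2 drops out before sending ---
embeddings = {1: [1.0] * EMB_DIM, 2: [2.0] * EMB_DIM, 3: [3.0] * EMB_DIM}
alive = [1, 3]
masked = {cid: masked_embedding(cid, embeddings[cid], alive) for cid in alive}
agg = aggregate(masked, EMB_DIM, expected_ids=[1, 2, 3])
print([round(x, 6) for x in agg])  # masks cancel: [4.0, 4.0, 4.0, 4.0]
```

Note that the masks are generated over the surviving set only, so they cancel exactly even though client 2 never sent anything; the padding keeps the downstream model input at a fixed shape, which is the role the abstract attributes to embedding padding.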
Related papers
- TPFL: A Trustworthy Personalized Federated Learning Framework via Subjective Logic [13.079535924498977]
Federated learning (FL) enables collaborative model training across distributed clients while preserving data privacy.
Most FL approaches focusing solely on privacy protection fall short in scenarios where trustworthiness is crucial.
We introduce a Trustworthy Personalized Federated Learning (TPFL) framework designed for classification tasks via subjective logic.
arXiv Detail & Related papers (2024-10-16T07:33:29Z)
- Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience [26.647028483763137]
We introduce Fast-FedUL, a tailored unlearning method for Federated Learning (FL)
We develop an algorithm to systematically remove the impact of the target client from the trained model.
Experimental results indicate that Fast-FedUL effectively removes almost all traces of the target client, while retaining the knowledge of untargeted clients.
arXiv Detail & Related papers (2024-05-28T10:51:38Z)
- Efficient Vertical Federated Learning with Secure Aggregation [10.295508659999783]
We present a novel design for training vertical FL securely and efficiently using state-of-the-art security modules for secure aggregation.
We demonstrate empirically that our method does not impact training performance while obtaining a 9.1e2 to 3.8e4 speedup compared to homomorphic encryption (HE).
arXiv Detail & Related papers (2023-05-18T18:08:36Z)
- FedDBL: Communication and Data Efficient Federated Deep-Broad Learning for Histopathological Tissue Classification [65.7405397206767]
We propose Federated Deep-Broad Learning (FedDBL) to achieve superior classification performance with limited training samples and only one-round communication.
FedDBL greatly outperforms the competitors with only one-round communication and limited training samples, and even achieves performance comparable to methods that use multiple rounds of communication.
Since no data or deep models are shared across clients, privacy is preserved and model security is guaranteed, with no risk of model inversion attacks.
arXiv Detail & Related papers (2023-02-24T14:27:41Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method achieves improved performance with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM [62.62684911017472]
Federated learning (FL) enables devices to jointly train shared models while keeping the training data local for privacy purposes.
We introduce a VFL framework with multiple heads (VIM), which takes the separate contribution of each client into account.
VIM achieves significantly higher performance and faster convergence compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-20T23:14:33Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks [59.61565692464579]
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude.
arXiv Detail & Related papers (2021-06-15T16:50:54Z)
- H-FL: A Hierarchical Communication-Efficient and Privacy-Protected Architecture for Federated Learning [0.2741266294612776]
We propose a novel framework called hierarchical federated learning (H-FL) to tackle this challenge.
Considering the degradation of the model performance due to the statistic heterogeneity of the training data, we devise a runtime distribution reconstruction strategy.
In addition, we design a compression-correction mechanism incorporated into H-FL to reduce the communication overhead while not sacrificing the model performance.
arXiv Detail & Related papers (2021-06-01T07:15:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.