Secure and Fast Asynchronous Vertical Federated Learning via Cascaded Hybrid Optimization
- URL: http://arxiv.org/abs/2306.16077v2
- Date: Thu, 29 Jun 2023 14:42:05 GMT
- Title: Secure and Fast Asynchronous Vertical Federated Learning via Cascaded Hybrid Optimization
- Authors: Ganyu Wang, Qingsong Zhang, Li Xiang, Boyu Wang, Bin Gu, Charles Ling
- Abstract summary: We propose a cascaded hybrid optimization method in Vertical Federated Learning (VFL)
In this method, the downstream models (clients) are trained with zeroth-order optimization (ZOO) to protect privacy.
We show that our method achieves faster convergence than the ZOO-based VFL framework, while maintaining an equivalent level of privacy protection.
- Score: 18.619236705579713
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vertical Federated Learning (VFL) attracts increasing attention because it
empowers multiple parties to jointly train a privacy-preserving model over
vertically partitioned data. Recent research has shown that applying
zeroth-order optimization (ZOO) has many advantages in building a practical VFL
algorithm. However, a vital problem with the ZOO-based VFL is its slow
convergence rate, which limits its application in handling modern large models.
To address this problem, we propose a cascaded hybrid optimization method in
VFL. In this method, the downstream models (clients) are trained with ZOO to
protect privacy and ensure that no internal information is shared. Meanwhile,
the upstream model (server) is updated with first-order optimization (FOO)
locally, which significantly improves the convergence rate, making it feasible
to train the large models without compromising privacy and security. We
theoretically prove that our VFL framework converges faster than the ZOO-based
VFL, as the convergence of our framework is not limited by the size of the
server model, making it effective for training large models whose major part
resides on the server. Extensive experiments demonstrate that our method achieves
faster convergence than the ZOO-based VFL framework, while maintaining an
equivalent level of privacy protection. Moreover, we show that the convergence
of our VFL is comparable to the unsafe FOO-based VFL baseline. Additionally, we
demonstrate that our method makes the training of a large model feasible.
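
Since the abstract describes the cascaded scheme only at a high level, below is a minimal sketch of one training round under illustrative assumptions: a single client, linear models on both sides, and a squared loss. The names (`zoo_client_step`, `foo_server_step`) and hyperparameters are hypothetical, not the paper's implementation. The client estimates its gradient from two scalar loss values returned by the server (a two-point ZOO estimator), while the server differentiates its own model exactly (FOO); because the estimator's variance grows with the number of perturbed parameters, keeping the large server model under exact first-order updates is what removes the model-size bottleneck.

```python
import numpy as np

# Illustrative sketch, not the paper's code: one client holds features X,
# the server holds labels y; both models are linear for simplicity.
rng = np.random.default_rng(0)
n, d, k = 64, 8, 4                      # samples, client feature dim, embedding dim
X = rng.normal(size=(n, d))             # client's vertically partitioned features
y = rng.normal(size=n)                  # labels, held only by the server

Wc = rng.normal(size=(d, k)) * 0.1      # downstream (client) model parameters
ws = np.zeros(k)                        # upstream (server) model parameters

def server_loss(h, ws):
    """Scalar loss computed on the server from a client embedding h."""
    return np.mean((h @ ws - y) ** 2)

def zoo_client_step(Wc, ws, mu=1e-3, lr=1e-2):
    """Client update via a two-point zeroth-order gradient estimate.
    The client only observes scalar losses returned by the server, so no
    internal gradients or labels are exchanged."""
    u = rng.normal(size=Wc.shape)                 # random perturbation direction
    l_plus = server_loss(X @ (Wc + mu * u), ws)   # query 1: perturbed embedding
    l_minus = server_loss(X @ (Wc - mu * u), ws)  # query 2: opposite perturbation
    g_hat = (l_plus - l_minus) / (2 * mu) * u     # directional gradient estimate
    return Wc - lr * g_hat

def foo_server_step(h, ws, lr=1e-2):
    """Server update via an exact first-order gradient, computed locally."""
    grad = 2.0 / n * h.T @ (h @ ws - y)
    return ws - lr * grad

for t in range(200):
    Wc = zoo_client_step(Wc, ws)        # privacy-preserving ZOO on the client
    ws = foo_server_step(X @ Wc, ws)    # fast FOO on the server
    if t % 50 == 0:
        print(f"round {t}: loss = {server_loss(X @ Wc, ws):.4f}")
```

Running this loop, the loss decreases even though the client never sees gradients or labels, which mirrors the abstract's claim that ZOO on the clients can coexist with fast first-order updates on the server.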
Related papers
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- Secure Vertical Federated Learning Under Unreliable Connectivity [22.03946356498099]
We present vFedSec, the first dropout-tolerant VFL protocol.
It achieves secure and efficient model training by using an innovative Secure Layer alongside an embedding-padding technique.
arXiv Detail & Related papers (2023-05-26T10:17:36Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical federated learning (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to that of HFL without pruning, while reducing communication cost by about 50 percent.
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- Hierarchical Personalized Federated Learning Over Massive Mobile Edge Computing Networks [95.39148209543175]
We propose hierarchical PFL (HPFL), an algorithm for deploying PFL over massive MEC networks.
HPFL combines the objectives of training loss minimization and round latency minimization while jointly determining the optimal bandwidth allocation.
arXiv Detail & Related papers (2023-03-19T06:00:05Z)
- Low-Latency Cooperative Spectrum Sensing via Truncated Vertical Federated Learning [51.51440623636274]
We propose a vertical federated learning (VFL) framework to exploit the distributed features across multiple secondary users (SUs) without compromising data privacy.
To accelerate the training process, we propose a truncated vertical federated learning (T-VFL) algorithm.
The convergence performance of T-VFL is provided via mathematical analysis and justified by simulation results.
arXiv Detail & Related papers (2022-08-07T10:39:27Z)
- Towards Communication-efficient Vertical Federated Learning Training via Cache-enabled Local Updates [25.85564668511386]
We introduce CELU-VFL, a novel and efficient Vertical Learning framework.
CELU-VFL exploits the local update technique to reduce the cross-party communication rounds.
We show that CELU-VFL can be up to six times faster than the existing works.
arXiv Detail & Related papers (2022-07-29T12:10:36Z)
- Sparse Federated Learning with Hierarchical Personalized Models [24.763028713043468]
Federated learning (FL) can achieve privacy-safe and reliable collaborative training without collecting users' private data.
We propose a personalized FL algorithm using a hierarchical proximal mapping based on the Moreau envelope, named sparse federated learning with hierarchical personalized models (sFedHP).
A continuously differentiable approximation of the L1-norm is also used as the sparse constraint to reduce the communication cost.
arXiv Detail & Related papers (2022-03-25T09:06:42Z)
- Desirable Companion for Vertical Federated Learning: New Zeroth-Order Gradient Based Algorithm [140.25480610981504]
A complete list of metrics to evaluate VFL algorithms should include model applicability, privacy, communication, and computation efficiency.
We propose a novel VFL framework with black-box scalability.
arXiv Detail & Related papers (2022-03-19T13:55:47Z)
- Secure Bilevel Asynchronous Vertical Federated Learning with Backward Updating [159.48259714642447]
Vertical federated learning (VFL) attracts increasing attention due to the demands of multi-party collaborative modeling and concerns of privacy leakage.
We propose a novel bilevel parallel architecture (VFB$^2$), under which three new VFB$^2$-based algorithms are proposed.
arXiv Detail & Related papers (2021-03-01T12:34:53Z)