Desirable Companion for Vertical Federated Learning: New Zeroth-Order
Gradient Based Algorithm
- URL: http://arxiv.org/abs/2203.10329v1
- Date: Sat, 19 Mar 2022 13:55:47 GMT
- Title: Desirable Companion for Vertical Federated Learning: New Zeroth-Order
Gradient Based Algorithm
- Authors: Qingsong Zhang, Bin Gu, Zhiyuan Dang, Cheng Deng, Heng Huang
- Abstract summary: A complete list of metrics to evaluate VFL algorithms should include model applicability, privacy security, communication cost, and computation efficiency.
We propose a novel and practical VFL framework with black-box models, which is inseparably interconnected to the promising properties of zeroth-order optimization (ZOO).
- Score: 140.25480610981504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vertical federated learning (VFL) attracts increasing attention due to the
emerging demands of multi-party collaborative modeling and concerns of privacy
leakage. A complete list of metrics to evaluate VFL algorithms should include
model applicability, privacy security, communication cost, and computation
efficiency, where privacy security is especially important to VFL. However, to
the best of our knowledge, there does not exist a VFL algorithm satisfying all
these criteria very well. To address this challenging problem, in this paper,
we reveal that zeroth-order optimization (ZOO) is a desirable companion for
VFL. Specifically, ZOO can 1) improve the model applicability of VFL framework,
2) prevent VFL framework from privacy leakage under curious, colluding, and
malicious threat models, 3) support inexpensive communication and efficient
computation. Based on that, we propose a novel and practical VFL framework with
black-box models, which is inseparably interconnected to the promising
properties of ZOO. We believe that it takes one stride towards designing a
practical VFL framework matching all the criteria. Under this framework, we
propose two novel asynchronous zeroth-order algorithms for vertical federated
learning (AsyREVEL) with different smoothing techniques. We theoretically derive
the convergence rates of the AsyREVEL algorithms under nonconvex conditions.
More importantly, we prove the privacy security of
our proposed framework under existing VFL attacks on different levels.
Extensive experiments on benchmark datasets demonstrate the favorable model
applicability, satisfactory privacy security, inexpensive communication, efficient
computation, scalability and losslessness of our framework.
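The abstract leans on zeroth-order optimization throughout but does not spell out the estimator. As a point of reference, ZOO methods of this kind typically build gradient estimates from black-box loss queries via two-point finite differences under Gaussian smoothing; the sketch below is a minimal illustration under that assumption (the function names and the toy loss are ours, not the paper's).

```python
import numpy as np

def zo_gradient(loss_fn, w, mu=1e-3, num_dirs=20, rng=None):
    """Two-point zeroth-order gradient estimate under Gaussian smoothing.

    Approximates grad f(w) by averaging (f(w + mu*u) - f(w)) / mu * u
    over random directions u ~ N(0, I), using only black-box loss queries.
    """
    rng = rng or np.random.default_rng(0)
    f0 = loss_fn(w)
    grad = np.zeros_like(w)
    for _ in range(num_dirs):
        u = rng.standard_normal(w.shape)
        grad += (loss_fn(w + mu * u) - f0) / mu * u
    return grad / num_dirs

# Toy usage: a party updates its local parameters from scalar losses alone.
w = np.zeros(5)
loss = lambda v: float(np.sum((v - 1.0) ** 2))  # stand-in for the global loss
for _ in range(200):
    w -= 0.05 * zo_gradient(loss, w)
print(np.round(w, 2))  # approaches the minimizer [1, 1, 1, 1, 1]
```

Because only scalar loss values cross party boundaries in such updates, no analytic gradients are exposed, which is what underpins the black-box applicability and the privacy claims above.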
Related papers
- Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing field in which the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its unique properties and theoretical analyses are also presented.
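For intuition about the sparsification ingredient: in FL, model sparsification usually means transmitting only the largest-magnitude entries of each update to the PS. A generic top-k sketch under that assumption (not necessarily this paper's exact operator):

```python
import numpy as np

def topk_sparsify(update, k):
    """Keep the k largest-magnitude entries of a model update, zero the rest,
    so only k values (plus their indices) need to be uploaded to the PS."""
    flat = update.ravel()
    keep = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(update.shape)

delta = np.random.default_rng(1).standard_normal((4, 4))
print(topk_sparsify(delta, k=3))  # only 3 nonzero entries survive
```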
arXiv Detail & Related papers (2023-10-30T14:15:47Z) - Secure and Fast Asynchronous Vertical Federated Learning via Cascaded
Hybrid Optimization [18.619236705579713]
We propose a cascaded hybrid optimization method for Vertical Federated Learning (VFL).
In this method, the downstream models (clients) are trained with zeroth-order optimization (ZOO) to protect privacy.
We show that our method achieves faster convergence than the ZOO-based VFL framework, while maintaining an equivalent level of privacy protection.
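To make the cascade concrete: the downstream party updates with ZOO from scalar loss values only, while the upstream party applies ordinary first-order updates to its own parameters. A toy sketch under those assumptions (the model, loss, and step sizes are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(3), 1.0       # a sample held by the client
Wc = 0.5 * rng.standard_normal((2, 3))   # downstream (client) model
ws = 0.5 * rng.standard_normal(2)        # upstream (server) model

def loss(Wc_, ws_):
    h = Wc_ @ x                  # client embedding sent upstream
    return (ws_ @ h - y) ** 2    # server-side squared loss

mu, lr = 1e-3, 0.05
for step in range(300):
    # Server: ordinary first-order update on its own parameters.
    h = Wc @ x
    ws -= lr * 2 * (ws @ h - y) * h
    # Client: zeroth-order update -- only scalar losses cross the
    # party boundary, so no server-side gradients reach the client.
    g, f0 = np.zeros_like(Wc), loss(Wc, ws)
    for _ in range(5):
        u = rng.standard_normal(Wc.shape)
        g += (loss(Wc + mu * u, ws) - f0) / mu * u
    Wc -= lr * g / 5
print(round(float(loss(Wc, ws)), 4))  # shrinks toward 0 on this toy problem
```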
arXiv Detail & Related papers (2023-06-28T10:18:08Z) - Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical federated learning (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to HFL without pruning while reducing communication cost by about 50 percent.
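As a rough illustration of where the communication saving comes from: zeroing half of the weights halves what each device uploads. A minimal sketch, assuming simple one-shot magnitude pruning (the paper's actual criterion and schedule may differ):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights and return the
    pruned tensor plus the binary mask a device would apply locally."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

W = np.random.default_rng(2).standard_normal((8, 8))
W_pruned, mask = magnitude_prune(W, sparsity=0.5)
print(mask.mean())  # ~0.5: roughly half the entries remain to be uploaded
```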
arXiv Detail & Related papers (2023-05-15T22:04:49Z) - Vertical Federated Learning: Concepts, Advances and Challenges [18.38260017835129]
We review the concept and algorithms of Vertical Federated Learning (VFL).
We provide an exhaustive categorization for VFL settings and privacy-preserving protocols.
We propose a unified framework, termed VFLow, which considers the VFL problem under communication, computation, privacy, as well as effectiveness and fairness constraints.
arXiv Detail & Related papers (2022-11-23T10:00:06Z) - A Framework for Evaluating Privacy-Utility Trade-off in Vertical Federated Learning [18.046256152691743]
Federated learning (FL) has emerged as a practical solution to tackle data silo issues without compromising user privacy.
VFL matches enterprises' demands to leverage more valuable features to build better machine learning models.
Current works in VFL concentrate on developing a specific protection or attack mechanism for a particular VFL algorithm.
arXiv Detail & Related papers (2022-09-08T15:41:31Z) - Low-Latency Cooperative Spectrum Sensing via Truncated Vertical
Federated Learning [51.51440623636274]
We propose a vertical federated learning (VFL) framework to exploit the distributed features across multiple secondary users (SUs) without compromising data privacy.
To accelerate the training process, we propose a truncated vertical federated learning (T-VFL) algorithm.
The convergence performance of T-VFL is provided via mathematical analysis and justified by simulation results.
arXiv Detail & Related papers (2022-08-07T10:39:27Z) - BlindFL: Vertical Federated Machine Learning without Peeking into Your
Data [20.048695060411774]
Vertical federated learning (VFL) describes a case where ML models are built upon the private data of different participating parties.
We introduce BlindFL, a novel framework for VFL training and inference.
We show that BlindFL supports diverse datasets and models efficiently whilst achieving robust privacy guarantees.
arXiv Detail & Related papers (2022-06-16T07:26:50Z) - AsySQN: Faster Vertical Federated Learning Algorithms with Better
Computation Resource Utilization [159.75564904944707]
We propose an asynchronous quasi-Newton (AsySQN) framework for vertical federated learning (VFL).
The proposed algorithms make descent steps scaled by approximate Hessian information without calculating the inverse Hessian matrix explicitly.
We show that the adopted asynchronous computation can make better use of the computation resource.
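The key idea named here, curvature-scaled descent steps without an explicit inverse Hessian, is classically realized by the limited-memory two-loop recursion. The sketch below shows that generic recursion, not the authors' exact AsySQN update:

```python
import numpy as np

def lbfgs_direction(grad, s_hist, y_hist):
    """Two-loop recursion: computes an approximate inverse-Hessian times
    gradient product from stored (s, y) pairs, never forming a matrix."""
    q = grad.copy()
    alphas = []
    for s, y in reversed(list(zip(s_hist, y_hist))):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if s_hist:  # scale by recent curvature, a standard initialization
        s, y = s_hist[-1], y_hist[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_hist, y_hist), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q  # descent step: x <- x - lr * q

# Toy quadratic f(x) = 0.5 x^T A x with grad A x; the scaled steps
# adapt to the curvature spread without ever inverting A.
A = np.diag([1.0, 2.0, 4.0])
x, s_hist, y_hist = np.ones(3), [], []
for _ in range(25):
    g = A @ x
    x_new = x - 0.5 * lbfgs_direction(g, s_hist[-5:], y_hist[-5:])
    s_hist.append(x_new - x)
    y_hist.append(A @ x_new - g)
    x = x_new
print(np.round(x, 4))  # approaches the minimizer at the origin
```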
arXiv Detail & Related papers (2021-09-26T07:56:10Z) - Secure Bilevel Asynchronous Vertical Federated Learning with Backward
Updating [159.48259714642447]
Vertical federated learning (VFL) attracts increasing attention due to the demands of multi-party collaborative modeling and concerns of privacy leakage.
We propose a novel backward updating mechanism and bilevel asynchronous parallel architecture (VFB2), under which three new algorithms are proposed.
arXiv Detail & Related papers (2021-03-01T12:34:53Z)