Privacy-Preserving Asynchronous Federated Learning Algorithms for
Multi-Party Vertically Collaborative Learning
- URL: http://arxiv.org/abs/2008.06233v1
- Date: Fri, 14 Aug 2020 08:08:15 GMT
- Title: Privacy-Preserving Asynchronous Federated Learning Algorithms for
Multi-Party Vertically Collaborative Learning
- Authors: Bin Gu, An Xu, Zhouyuan Huo, Cheng Deng, Heng Huang
- Abstract summary: We propose an asynchronous federated SGD (AFSGD-VP) algorithm and its SVRG and SAGA variants for vertically partitioned data.
To the best of our knowledge, AFSGD-VP and its SVRG and SAGA variants are the first asynchronous federated learning algorithms for vertically partitioned data.
- Score: 151.47900584193025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy-preserving federated learning for vertically partitioned data has
shown promising results as a solution to emerging multi-party joint
modeling applications, in which the data holders (such as government branches,
private finance and e-business companies) collaborate throughout the learning
process rather than relying on a trusted third party to hold data. However,
existing federated learning algorithms for vertically partitioned data are
limited to synchronous computation. To improve efficiency when
computation and communication resources are unbalanced across the parties in
the federated learning system, it is essential to develop asynchronous training
algorithms for vertically partitioned data while preserving data privacy. In
this paper, we propose an asynchronous federated SGD (AFSGD-VP) algorithm and
its SVRG and SAGA variants for vertically partitioned data. Moreover, we
provide the convergence analyses of AFSGD-VP and its SVRG and SAGA variants
under the condition of strong convexity. We also discuss their model privacy,
data privacy, computational complexities and communication costs. To the best
of our knowledge, AFSGD-VP and its SVRG and SAGA variants are the first
asynchronous federated learning algorithms for vertically partitioned data.
Extensive experimental results on a variety of vertically partitioned datasets
not only verify the theoretical results of AFSGD-VP and its SVRG and SAGA
variants, but also show that our algorithms have much higher efficiency than
the corresponding synchronous algorithms.
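To make the setting concrete, below is a minimal single-machine simulation of the asynchronous-SGD idea on vertically partitioned data: each party holds one block of features and the matching block of the model, contributes its local inner product to the shared prediction, and updates its own block without waiting for the others. The secure aggregation AFSGD-VP uses to protect the exchanged inner products is omitted, and all names and constants are illustrative assumptions, not the authors' implementation.

```python
# Minimal simulation of asynchronous SGD on vertically partitioned data
# (linear model, squared loss). Secure aggregation is omitted; this is an
# illustrative sketch, not the authors' AFSGD-VP implementation.
import threading
import numpy as np

n, d, n_parties, steps, lr = 200, 12, 3, 2000, 0.05
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.01 * rng.standard_normal(n)

# Vertical partition: party q holds features X[:, blocks[q]] and weights w[blocks[q]].
blocks = np.array_split(np.arange(d), n_parties)
w = np.zeros(d)  # stored blockwise across parties in practice; flat here for brevity

def party_worker(q, seed):
    local_rng = np.random.default_rng(seed)
    idx = blocks[q]
    for _ in range(steps):
        i = local_rng.integers(n)
        # Each party contributes its local inner product w^(q) . x_i^(q);
        # AFSGD-VP aggregates these securely, here they are simply summed.
        pred = sum(X[i, b] @ w[b] for b in blocks)
        # Gradient of 0.5 * (pred - y_i)^2 with respect to this party's block.
        w[idx] -= lr * (pred - y[i]) * X[i, idx]  # asynchronous, lock-free update

threads = [threading.Thread(target=party_worker, args=(q, q)) for q in range(n_parties)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("relative error:", np.linalg.norm(w - w_true) / np.linalg.norm(w_true))
```

The SVRG and SAGA variants would swap the plain stochastic gradient inside party_worker for a variance-reduced estimate while keeping the same asynchronous update pattern.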
Related papers
- FCNCP: A Coupled Nonnegative CANDECOMP/PARAFAC Decomposition Based on Federated Learning [6.854368686078438]
This study proposes FCNCP, a series of efficient non-negative coupled tensor decomposition algorithm frameworks based on federated learning.
It leverages the strong discriminative performance of tensor decomposition in representing and decomposing high-dimensional data.
It was found that contralateral stimulation induced more symmetrical components in the activation areas of the left and right hemispheres.
arXiv Detail & Related papers (2024-04-18T04:30:18Z)
- Communication-Efficient Hybrid Federated Learning for E-health with Horizontal and Vertical Data Partitioning [67.49221252724229]
E-health allows smart devices and medical institutions to collaboratively collect patients' data, on which Artificial Intelligence (AI) models are trained to help doctors make diagnoses.
Applying federated learning in e-health faces many challenges.
Medical data is often both horizontally and vertically partitioned.
A naive combination of horizontal federated learning (HFL) and vertical federated learning (VFL) has limitations including low training efficiency, unsound convergence analysis, and a lack of parameter-tuning strategies.
arXiv Detail & Related papers (2024-04-15T19:45:07Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Fair and efficient contribution valuation for vertical federated learning [49.50442779626123]
Federated learning is a popular technology for training machine learning models on distributed data sources without sharing data.
The Shapley value (SV) is a provably fair contribution valuation metric originating from cooperative game theory.
We propose a contribution valuation metric called vertical federated Shapley value (VerFedSV) based on SV.
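As a rough illustration of the Shapley-value machinery VerFedSV builds on, the sketch below computes exact SVs for three hypothetical parties, with a toy utility v(S) defined as the R^2 of a least-squares fit on the union of coalition S's feature blocks. VerFedSV itself is designed to avoid retraining per coalition, so the utility, data, and names here are all assumptions for illustration.

```python
# Exact Shapley values over parties, with a toy utility v(S): the R^2 of a
# least-squares fit on the joined feature blocks of coalition S. Illustrative
# only; VerFedSV is designed to avoid retraining for every coalition.
from itertools import combinations
from math import factorial
import numpy as np

rng = np.random.default_rng(0)
n, P = 300, 3
blocks = [rng.standard_normal((n, 2)) for _ in range(P)]  # one feature block per party
y = blocks[0] @ np.array([1.0, -1.0]) + 0.3 * blocks[1][:, 0] + 0.05 * rng.standard_normal(n)

def v(S):
    if not S:
        return 0.0
    X = np.hstack([blocks[q] for q in sorted(S)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - (resid @ resid) / (y @ y)

def shapley(q):
    others = [p for p in range(P) if p != q]
    total = 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            weight = factorial(k) * factorial(P - k - 1) / factorial(P)
            total += weight * (v(set(S) | {q}) - v(set(S)))
    return total

# Party 0 carries most of the signal, party 1 some, party 2 none.
print([round(shapley(q), 3) for q in range(P)])
```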
arXiv Detail & Related papers (2022-01-07T19:57:15Z)
- Resource-constrained Federated Edge Learning with Heterogeneous Data: Formulation and Analysis [8.863089484787835]
We propose a distributed approximate Newton-type training scheme, namely FedOVA, to solve the statistical challenge brought by heterogeneous data.
FedOVA decomposes a multi-class classification problem into more straightforward binary classification problems and then combines their respective outputs using ensemble learning.
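The one-vs-all decomposition is straightforward to sketch. Below, plain scikit-learn logistic regressions stand in for the binary learners FedOVA would train federatedly, and the ensemble step simply picks the most confident binary model; this is an illustrative reading of the idea, not the paper's algorithm.

```python
# One-vs-all decomposition plus a simple ensemble, in the spirit of FedOVA.
# Plain scikit-learn logistic regressions stand in for the federated binary
# learners; this is an illustrative sketch, not the paper's algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 5))
y = np.argmax(X @ rng.standard_normal((5, 4)), axis=1)  # learnable 4-class toy labels

# Decompose the multi-class problem into one "class c vs. rest" binary problem.
models = {c: LogisticRegression(max_iter=1000).fit(X, (y == c).astype(int))
          for c in np.unique(y)}

# Ensemble step: predict the class whose binary model is most confident.
scores = np.column_stack([models[c].predict_proba(X)[:, 1] for c in sorted(models)])
y_hat = np.argmax(scores, axis=1)
print("training accuracy:", (y_hat == y).mean())
```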
arXiv Detail & Related papers (2021-10-14T17:35:24Z)
- Secure Bilevel Asynchronous Vertical Federated Learning with Backward Updating [159.48259714642447]
Vertical federated learning (VFL) attracts increasing attention due to the demands of multi-party collaborative modeling and concerns about privacy leakage.
We propose a novel bilevel asynchronous parallel architecture (VFB$^2$), under which three new algorithms are proposed.
arXiv Detail & Related papers (2021-03-01T12:34:53Z)
- Improving Federated Relational Data Modeling via Basis Alignment and Weight Penalty [18.096788806121754]
Federated learning (FL) has attracted increasing attention in recent years.
We present a modified graph neural network algorithm that performs federated modeling over Knowledge Graphs (KG).
We propose a novel optimization algorithm, named FedAlign, with 1) optimal transportation (OT) for on-client personalization and 2) weight constraint to speed up the convergence.
Empirical results show that our proposed method outperforms the state-of-the-art FL methods, such as FedAVG and FedProx, with better convergence.
arXiv Detail & Related papers (2020-11-23T12:52:18Z)
- Federated Doubly Stochastic Kernel Learning for Vertically Partitioned Data [93.76907759950608]
We propose a federated doubly stochastic kernel learning (FDSKL) algorithm for vertically partitioned data.
We show that FDSKL is significantly faster than state-of-the-art federated learning methods when dealing with kernels.
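FDSKL builds on doubly stochastic kernel methods, whose core ingredient is a random-feature approximation of the kernel. A minimal sketch of that ingredient (random Fourier features for an RBF kernel) follows; the federated protocol over vertically partitioned data is omitted, and the constants are illustrative assumptions.

```python
# Random Fourier features approximating an RBF kernel, the "doubly stochastic"
# kernel ingredient FDSKL builds on. The federated protocol over vertically
# partitioned data is omitted; constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))
gamma, D = 0.5, 2000  # target kernel exp(-gamma * ||x - x'||^2), D random features

# z(x) = sqrt(2/D) * cos(W x + b), W ~ N(0, 2*gamma*I), b ~ Uniform[0, 2*pi],
# so that E[z(x) . z(x')] equals the RBF kernel value.
W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], D))
b = rng.uniform(0, 2 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)

K_exact = np.exp(-gamma * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
print("max approximation error:", np.abs(Z @ Z.T - K_exact).max())
```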
arXiv Detail & Related papers (2020-08-14T05:46:56Z)