Practical One-Shot Federated Learning for Cross-Silo Setting
- URL: http://arxiv.org/abs/2010.01017v2
- Date: Thu, 20 May 2021 13:25:47 GMT
- Title: Practical One-Shot Federated Learning for Cross-Silo Setting
- Authors: Qinbin Li, Bingsheng He, Dawn Song
- Abstract summary: One-shot federated learning is a promising approach to make federated learning applicable in the cross-silo setting.
We propose a practical one-shot federated learning algorithm named FedKT.
By utilizing knowledge transfer, FedKT can be applied to any classification model and can flexibly achieve differential privacy guarantees.
- Score: 114.76232507580067
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning enables multiple parties to collaboratively learn a model without exchanging their data. While most existing federated learning algorithms need many rounds to converge, one-shot federated learning (i.e., federated learning with a single communication round) is a promising approach to making federated learning practical in the cross-silo setting. However, existing one-shot algorithms only support specific models and do not provide any privacy guarantees, which significantly limits their use in practice. In this paper, we propose a practical one-shot federated learning algorithm named FedKT. By utilizing knowledge transfer, FedKT can be applied to any classification model and can flexibly achieve differential privacy guarantees. Our experiments on various tasks show that FedKT can significantly outperform other state-of-the-art federated learning algorithms with a single communication round.
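The abstract does not spell out the mechanism, but the general one-shot knowledge-transfer pattern it refers to can be sketched as follows. This is a minimal illustration, not FedKT's exact construction: `party_datasets`, `public_X`, and the choice of scikit-learn models are all assumptions, and the shared unlabeled public set is the standard ingredient such distillation schemes rely on.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def one_shot_knowledge_transfer(party_datasets, public_X, n_classes):
    # Single communication round: each party trains any local classifier
    # and contributes only hard-label votes on a shared public set --
    # its raw data never leaves the silo.
    votes = np.zeros((len(public_X), n_classes))
    for X, y in party_datasets:  # assumes integer labels in [0, n_classes)
        local = RandomForestClassifier(n_estimators=50).fit(X, y)
        votes[np.arange(len(public_X)), local.predict(public_X)] += 1
    # Adding calibrated noise to `votes` before the argmax (PATE-style)
    # is one route to a differential privacy guarantee.
    pseudo_labels = votes.argmax(axis=1)
    # Server-side distillation: fit one student on the ensemble's labels.
    return LogisticRegression(max_iter=1000).fit(public_X, pseudo_labels)
```

Because only predictions on public data cross silo boundaries, the pattern is model-agnostic, which matches the abstract's claim that any classification model can be plugged in.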
Related papers
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
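The entry names a heterogeneous local variant of AMSGrad; the paper's actual learning-rate schedule is not reproduced here, so the sketch below only shows a standard AMSGrad step in which the learning rate `lr` is a per-client input rather than a global constant, a hedged approximation of "each client adjusts its learning rate".

```python
import numpy as np

def amsgrad_step(w, grad, state, lr, beta1=0.9, beta2=0.99, eps=1e-8):
    # One AMSGrad update; `state` = (m, v, v_hat) is kept by each client.
    m, v, v_hat = state
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    v_hat = np.maximum(v_hat, v)  # AMSGrad's non-decreasing second moment
    w = w - lr * m / (np.sqrt(v_hat) + eps)  # `lr` is client-specific here
    return w, (m, v, v_hat)
```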
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
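SFAT's slack mechanism is not described in this summary, so no attempt is made to reproduce it; the sketch below only shows the standard L-infinity PGD inner step that "adversarial training plus federated learning" builds on, which each client would run locally before a FedAvg-style aggregation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Standard L-inf PGD: the inner maximization of adversarial training,
    # run by each client on its own private batches.
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back to eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)             # stay in valid pixel range
    return x_adv.detach()
```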
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- FedDBL: Communication and Data Efficient Federated Deep-Broad Learning for Histopathological Tissue Classification [65.7405397206767]
We propose Federated Deep-Broad Learning (FedDBL) to achieve superior classification performance with limited training samples and only one-round communication.
FedDBL greatly outperforms its competitors with only one round of communication and limited training samples, and it even achieves performance comparable to methods that use multiple communication rounds.
Since no data or deep models are shared across clients, privacy is well protected and model security is guaranteed, with no risk of model inversion attacks.
arXiv Detail & Related papers (2023-02-24T14:27:41Z)
- Collaborating Heterogeneous Natural Language Processing Tasks via Federated Learning [55.99444047920231]
We conduct extensive experiments on six widely-used datasets covering both Natural Language Understanding (NLU) and Natural Language Generation (NLG) tasks.
The proposed ATC framework achieves significant improvements compared with various baseline methods.
arXiv Detail & Related papers (2022-12-12T09:27:50Z)
- A Primal-Dual Algorithm for Hybrid Federated Learning [11.955062839855334]
We provide a fast, robust algorithm for hybrid federated learning that hinges on Fenchel duality.
We also provide privacy considerations and necessary steps to protect client data.
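The summary gives no algorithmic details, but the duality it names is standard and worth recalling: for convex f and g, the Fenchel conjugate and the resulting primal-dual pairing are

```latex
f^{*}(u) \;=\; \sup_{x}\,\bigl\{ \langle u, x \rangle - f(x) \bigr\},
\qquad
\min_{x}\; f(x) + g(Ax) \;=\; \max_{y}\; -f^{*}(-A^{\top} y) - g^{*}(y),
```

with equality holding under standard constraint qualifications. Primal-dual methods alternate updates on x and y, which is what makes them attractive when clients hold different pieces of the problem.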
arXiv Detail & Related papers (2022-10-14T21:02:04Z)
- Practical Vertical Federated Learning with Unsupervised Representation Learning [47.77625754666018]
Federated learning enables multiple parties to collaboratively train a machine learning model without sharing their raw data.
We propose a novel communication-efficient vertical federated learning algorithm named FedOnce, which requires only one-shot communication among parties.
Our privacy-preserving technique significantly outperforms the state-of-the-art approaches under the same privacy budget.
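The communication pattern described (unsupervised local representation learning plus a one-shot exchange) can be sketched as below; PCA stands in for FedOnce's actual representation learner, and `feature_parties` and `labels` are hypothetical names.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def vertical_one_shot(feature_parties, labels, dim=16):
    # Vertical FL: parties hold disjoint feature columns of the SAME
    # (row-aligned) samples. Each party fits an unsupervised representation
    # locally and ships it exactly once -- the single communication step --
    # and only the label holder trains the final model on the concatenation.
    reps = [PCA(n_components=min(dim, X.shape[1])).fit_transform(X)
            for X in feature_parties]
    return LogisticRegression(max_iter=1000).fit(np.hstack(reps), labels)
```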
arXiv Detail & Related papers (2022-08-13T08:41:32Z)
- FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning [4.492444446637857]
Federated learning is an increasingly popular machine learning paradigm in which multiple nodes collaborate to learn a model.
Standard average risk minimization from supervised learning is inadequate for handling several major constraints specific to federated learning.
We introduce a new framework, FLIX, that takes into account the unique challenges brought by federated learning.
arXiv Detail & Related papers (2021-11-22T22:06:58Z)
- Federated Self-Supervised Contrastive Learning via Ensemble Similarity Distillation [42.05438626702343]
This paper investigates the feasibility of learning a good representation space with unlabeled client data in a federated scenario.
We propose a novel self-supervised contrastive learning framework that supports architecture-agnostic local training and communication-efficient global aggregation.
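One plausible reading of "ensemble similarity distillation" is sketched below: average the clients' pairwise-similarity matrices on a public batch and train the global encoder to match them. The function names and the MSE objective are assumptions; the paper's exact loss may differ, but the pattern is architecture-agnostic, as the entry claims.

```python
import torch
import torch.nn.functional as F

def cosine_matrix(z):
    # Pairwise cosine similarities of a batch of embeddings.
    z = F.normalize(z, dim=1)
    return z @ z.t()

def similarity_distillation_loss(global_encoder, client_encoders, x_public):
    # Clients never share data; they only expose embeddings of a public
    # batch, whose averaged similarity structure the global encoder mimics.
    with torch.no_grad():
        target = torch.stack(
            [cosine_matrix(enc(x_public)) for enc in client_encoders]
        ).mean(dim=0)
    return F.mse_loss(cosine_matrix(global_encoder(x_public)), target)
```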
arXiv Detail & Related papers (2021-09-29T02:13:22Z)
- FedU: A Unified Framework for Federated Multi-Task Learning with Laplacian Regularization [15.238123204624003]
Federated multi-task learning (FMTL) has emerged as a natural choice to capture the statistical diversity among the clients in federated learning.
To push FMTL beyond capturing statistical diversity, we formulate a new FMTL problem, FedU, using Laplacian regularization (the generic objective is sketched below).
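Laplacian regularization in FMTL couples the per-client models through a task-similarity graph. With F_k the local risk of client k and a_kl the edge weights, the standard objective of this form is

```latex
\min_{w_1,\dots,w_N}\; \sum_{k=1}^{N} F_k(w_k)
\;+\; \frac{\eta}{2} \sum_{k=1}^{N}\sum_{l=1}^{N} a_{kl}\,\lVert w_k - w_l \rVert^{2},
```

so eta = 0 recovers independent local training, while a large eta pulls related clients' models together.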
arXiv Detail & Related papers (2021-02-14T13:19:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.