Efficient Split-Mix Federated Learning for On-Demand and In-Situ
Customization
- URL: http://arxiv.org/abs/2203.09747v1
- Date: Fri, 18 Mar 2022 04:58:34 GMT
- Title: Efficient Split-Mix Federated Learning for On-Demand and In-Situ
Customization
- Authors: Junyuan Hong, Haotao Wang, Zhangyang Wang, Jiayu Zhou
- Abstract summary: Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate on learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
- Score: 107.72786199113183
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) provides a distributed learning framework for
multiple participants to collaborate on learning without sharing raw data. In many
practical FL scenarios, participants have heterogeneous resources due to
disparities in hardware and inference dynamics that require quickly loading
models of different sizes and levels of robustness. The heterogeneity and
dynamics together impose significant challenges to existing FL approaches and
thus greatly limit FL's applicability. In this paper, we propose a novel
Split-Mix FL strategy for heterogeneous participants that, once training is
done, provides in-situ customization of model sizes and robustness.
Specifically, we achieve customization by learning a set of base sub-networks
of different sizes and robustness levels, which are later aggregated on-demand
according to inference requirements. This split-mix strategy achieves
customization with high efficiency in communication, storage, and inference.
Extensive experiments demonstrate that our method provides better in-situ
customization than the existing heterogeneous-architecture FL methods. Codes
and pre-trained models are available: https://github.com/illidanlab/SplitMix.
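The on-demand customization described in the abstract can be illustrated with a minimal toy sketch. It assumes, as one plausible reading of "aggregated on-demand", that a customized model is formed by averaging the outputs of the first `budget` base sub-networks; the names `BaseSubnet`, `mix_on_demand`, and `budget` are hypothetical and not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

class BaseSubnet:
    """One narrow base model (toy stand-in: a single linear layer)."""
    def __init__(self, in_dim, out_dim):
        self.w = rng.standard_normal((in_dim, out_dim)) * 0.1

    def forward(self, x):
        return x @ self.w

def mix_on_demand(bases, x, budget):
    """Aggregate the first `budget` base sub-networks by averaging
    their outputs, so one trained pool serves many inference budgets."""
    assert 1 <= budget <= len(bases)
    outs = [b.forward(x) for b in bases[:budget]]
    return np.mean(outs, axis=0)

# Train once, customize in situ: the same pool serves small and large budgets.
pool = [BaseSubnet(4, 3) for _ in range(4)]
x = rng.standard_normal((2, 4))
small = mix_on_demand(pool, x, budget=1)   # cheapest configuration
large = mix_on_demand(pool, x, budget=4)   # full-capacity configuration
print(small.shape, large.shape)  # (2, 3) (2, 3)
```

Because every budget reuses the same base sub-networks, no extra parameters are stored or communicated per configuration, which matches the efficiency claim in the abstract.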
Related papers
- Multi-level Personalized Federated Learning on Heterogeneous and Long-Tailed Data [10.64629029156029]
We introduce an innovative personalized federated learning framework, Multi-level Personalized Federated Learning (MuPFL).
MuPFL integrates three pivotal modules: Biased Activation Value Dropout (BAVD), Adaptive Cluster-based Model Update (ACMU), and Prior Knowledge-assisted Fine-tuning (PKCF).
Experiments on diverse real-world datasets show that MuPFL consistently outperforms state-of-the-art baselines, even under extreme non-i.i.d. and long-tail conditions.
(arXiv, 2024-05-10)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [66.19763977571114]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
(arXiv, 2024-01-09)
- NeFL: Nested Federated Learning for Heterogeneous Clients [48.160716521203256]
Federated learning (FL) is a promising approach to privacy-preserving distributed learning.
During the training pipeline of FL, slow or incapable clients (i.e., stragglers) slow down the total training time and degrade performance.
We propose nested federated learning (NeFL), a framework that efficiently divides a model into submodels using both depthwise and widthwise scaling.
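NeFL's depthwise and widthwise scaling can be sketched with a toy model. The slicing rule below, keeping the leading fraction of channels and the first few layers, is an illustrative assumption rather than NeFL's actual construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Full model: a stack of square weight matrices (toy stand-in for conv layers).
full = [rng.standard_normal((8, 8)) for _ in range(4)]

def widthwise(layers, ratio):
    """Keep the leading `ratio` fraction of input/output channels per layer
    (one common way to carve width-scaled submodels)."""
    k = int(layers[0].shape[0] * ratio)
    return [w[:k, :k] for w in layers]

def depthwise(layers, keep):
    """Keep only the first `keep` layers (depth scaling)."""
    return layers[:keep]

# A straggler-friendly submodel: half the width, half the depth.
sub = depthwise(widthwise(full, 0.5), keep=2)
print([w.shape for w in sub])  # [(4, 4), (4, 4)]
```

Because submodel weights are slices of the full model's weights, updates from stragglers training submodels can still be aggregated into the corresponding slice of the global model.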
(arXiv, 2023-08-15)
- Multi-Model Federated Learning with Provable Guarantees [19.470024548995717]
Federated Learning (FL) is a variant of distributed learning where devices collaborate to learn a model without sharing their data with the central server or each other.
We refer to the process of training multiple independent models simultaneously over a common pool of clients as multi-model edge FL.
(arXiv, 2022-07-09)
- CoFED: Cross-silo Heterogeneous Federated Multi-task Learning via Co-training [11.198612582299813]
Federated Learning (FL) is a machine learning technique that enables participants to train high-quality models collaboratively without exchanging their private data.
We propose a communication-efficient FL scheme, CoFED, based on pseudo-labeling unlabeled data like co-training.
Experimental results show that CoFED achieves better performance with a lower communication cost.
(arXiv, 2022-02-17)
- Multi-Center Federated Learning [62.32725938999433]
Federated learning (FL) can protect data privacy in distributed learning.
It merely collects local gradients from users without access to their data.
We propose a novel multi-center aggregation mechanism.
(arXiv, 2021-08-19)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users imposes significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
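The batch-normalization idea above can be sketched as keeping two sets of BN statistics, one tracked on clean batches and one on adversarial batches, and selecting between them at inference time. The helper `bn_normalize` and the statistics below are illustrative values, not the paper's implementation.

```python
import numpy as np

def bn_normalize(x, stats, eps=1e-5):
    """Normalize with a given (mean, var) pair, as batch norm does at test time."""
    mean, var = stats
    return (x - mean) / np.sqrt(var + eps)

# Dual statistics per BN layer (made-up values for illustration):
clean_stats = (np.array([0.0, 1.0]), np.array([1.0, 4.0]))  # tracked on clean data
adv_stats   = (np.array([0.5, 1.5]), np.array([2.0, 5.0]))  # tracked on adversarial data

x = np.array([[1.0, 3.0]])
clean_out  = bn_normalize(x, clean_stats)
robust_out = bn_normalize(x, adv_stats)  # switching stats changes the normalization
```

The point of the sketch is that the normalization behavior, and hence part of the robustness, lives in these lightweight statistics, which are far cheaper to share among users than full adversarial training.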
(arXiv, 2021-06-18)
- Hybrid Federated Learning: Algorithms and Implementation [61.0640216394349]
Federated learning (FL) is a recently proposed distributed machine learning paradigm dealing with distributed and private data sets.
We propose a new model-matching-based problem formulation for hybrid FL.
We then propose an efficient algorithm that can collaboratively train the global and local models to deal with full and partial featured data.
(arXiv, 2020-12-22)
- Loosely Coupled Federated Learning Over Generative Models [6.472716351335859]
Federated learning (FL) was proposed to achieve collaborative machine learning among various clients without uploading private data.
This paper proposes Loosely Coupled Federated Learning (LC-FL) to achieve low communication cost and heterogeneous federated learning.
(arXiv, 2020-09-28)
This list is automatically generated from the titles and abstracts of the papers in this site.