WrapperFL: A Model Agnostic Plug-in for Industrial Federated Learning
- URL: http://arxiv.org/abs/2206.10407v1
- Date: Tue, 21 Jun 2022 13:59:11 GMT
- Title: WrapperFL: A Model Agnostic Plug-in for Industrial Federated Learning
- Authors: Xueyang Wu, Shengqi Tan, Qian Xu, Qiang Yang
- Abstract summary: This paper presents a simple yet practical federated learning plug-in inspired by ensemble learning, dubbed WrapperFL.
WrapperFL works in a plug-and-play way by simply attaching to the input and output interfaces of an existing model, without the need for re-development.
- Score: 10.909577776094782
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning, as a privacy-preserving collaborative machine learning
paradigm, has been gaining increasing attention in industry. With the
surge in demand, many federated learning platforms have emerged that
allow participants to set up and build a federated model from
scratch. However, existing platforms are highly intrusive, complicated, and hard
to integrate with already-built machine learning models. For many real-world businesses
that already have mature serving models, existing federated learning platforms
have high entry barriers and development costs. This paper presents a simple
yet practical federated learning plug-in inspired by ensemble learning, dubbed
WrapperFL, allowing participants to build/join a federated system with existing
models at minimal costs. WrapperFL works in a plug-and-play way by simply
attaching to the input and output interfaces of an existing model, without the
need for re-development, significantly reducing the overhead in manpower and
resources. We verify our proposed method on diverse tasks under heterogeneous
data distributions and heterogeneous models. The experimental results
demonstrate that WrapperFL can be successfully applied to a wide range of
applications under practical settings and improves the local model with
federated learning at a low cost.
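To make the plug-and-play idea concrete, below is a minimal sketch of how such a wrapper might attach to an existing model's input and output interfaces and ensemble its predictions with a small collaboratively trained component. The names used here (LocalModel, FederatedWrapper, shared_head, the blend weight alpha) are illustrative assumptions, not the paper's actual API, since the abstract describes the design only at a high level.

```python
# Illustrative sketch only: a wrapper that attaches to an existing model's
# input/output interfaces and ensembles its output with a federated part.
# All names (LocalModel, FederatedWrapper, shared_head, alpha) are hypothetical.
import numpy as np


def softmax(z: np.ndarray) -> np.ndarray:
    """Row-wise softmax used by the illustrative federated head."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)


class LocalModel:
    """Stands in for an already-deployed, mature serving model (2-class output)."""

    def predict(self, x: np.ndarray) -> np.ndarray:
        # Placeholder: a real local model would return its own class probabilities.
        return np.full((x.shape[0], 2), 0.5)


class FederatedWrapper:
    """Attaches to the existing model's input/output interfaces; only the small
    shared head takes part in federated training (assumed design)."""

    def __init__(self, local_model: LocalModel, n_features: int, alpha: float = 0.5):
        self.local_model = local_model
        self.alpha = alpha                            # ensemble blend weight (assumed)
        self.shared_head = np.zeros((n_features, 2))  # parameters exchanged in FL

    def predict(self, x: np.ndarray) -> np.ndarray:
        local_out = self.local_model.predict(x)       # existing output interface
        shared_out = softmax(x @ self.shared_head)    # federated component on the input
        return self.alpha * local_out + (1.0 - self.alpha) * shared_out

    def get_shared_parameters(self) -> np.ndarray:
        return self.shared_head                       # uploaded for aggregation

    def set_shared_parameters(self, params: np.ndarray) -> None:
        self.shared_head = params                     # downloaded global aggregate
```

In this reading, a participant keeps its mature model untouched, calls get_shared_parameters / set_shared_parameters from whatever aggregation protocol the federation runs (for example, FedAvg-style averaging), and serves predictions through FederatedWrapper.predict. How WrapperFL actually combines the local and federated outputs is not specified in the abstract, so the weighted average above is only one plausible ensemble choice.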
Related papers
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- FlexModel: A Framework for Interpretability of Distributed Large Language Models [0.0]
We present FlexModel, a software package providing a streamlined interface for engaging with models distributed across multi-GPU and multi-node configurations.
The library is compatible with existing model distribution libraries and encapsulates PyTorch models.
It exposes user-registerable HookFunctions to facilitate straightforward interaction with distributed model internals.
arXiv Detail & Related papers (2023-12-05T21:19:33Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL becomes a non-negligible challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- Knowledge-Enhanced Semi-Supervised Federated Learning for Aggregating Heterogeneous Lightweight Clients in IoT [34.128674870180596]
Federated learning (FL) enables multiple clients to train models collaboratively without sharing local data.
We propose pFedKnow, which generates lightweight personalized client models via neural network pruning techniques to reduce communication cost.
Experiment results on both image and text datasets show that the proposed pFedKnow outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2023-03-05T13:19:10Z)
- Conquering the Communication Constraints to Enable Large Pre-Trained Models in Federated Learning [18.12162136918301]
Federated learning (FL) has emerged as a promising paradigm for enabling the collaborative training of models without centralized access to the raw data on local devices.
Recent state-of-the-art pre-trained models are getting more capable but also have more parameters.
Can we find a solution to enable those strong and readily-available pre-trained models in FL to achieve excellent performance while simultaneously reducing the communication burden?
Specifically, we systematically evaluate the performance of FedPEFT across a variety of client stability, data distribution, and differential privacy settings.
arXiv Detail & Related papers (2022-10-04T16:08:54Z)
- Vertical Semi-Federated Learning for Efficient Online Advertising [50.18284051956359]
Semi-VFL (Vertical Semi-Federated Learning) is proposed to make VFL practical for industrial applications.
We build an inference-efficient single-party student model applicable to the whole sample space.
New representation distillation methods are designed to extract cross-party feature correlations for both the overlapped and non-overlapped data.
arXiv Detail & Related papers (2022-09-30T17:59:27Z)
- Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z)
- No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices [79.16481453598266]
We propose InclusiveFL, a client-inclusive federated learning method to handle this problem.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities.
We also propose an effective method to share the knowledge among multiple local models with different sizes.
arXiv Detail & Related papers (2022-02-16T13:03:27Z)
- FLHub: a Federated Learning model sharing service [0.7614628596146599]
We propose Federated Learning Hub (FLHub) as a sharing service for machine learning models.
FLHub allows users to upload, download, and contribute to models developed by other developers, similar to GitHub.
We demonstrate that a forked model can finish training faster than the existing model and that learning progresses more quickly in each federated round.
arXiv Detail & Related papers (2022-02-14T06:02:55Z)
- Model-Contrastive Federated Learning [92.9075661456444]
Federated learning enables multiple parties to collaboratively train a machine learning model without communicating their local data.
We propose MOON: model-contrastive federated learning.
Our experiments show that MOON significantly outperforms the other state-of-the-art federated learning algorithms on various image classification tasks.
arXiv Detail & Related papers (2021-03-30T11:16:57Z)
- Loosely Coupled Federated Learning Over Generative Models [6.472716351335859]
Federated learning (FL) was proposed to achieve collaborative machine learning among various clients without uploading private data.
This paper proposes Loosely Coupled Federated Learning (LC-FL) to achieve low communication cost and support heterogeneous federated learning.
arXiv Detail & Related papers (2020-09-28T01:09:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.