pFedES: Model Heterogeneous Personalized Federated Learning with Feature
Extractor Sharing
- URL: http://arxiv.org/abs/2311.06879v1
- Date: Sun, 12 Nov 2023 15:43:39 GMT
- Title: pFedES: Model Heterogeneous Personalized Federated Learning with Feature
Extractor Sharing
- Authors: Liping Yi, Han Yu, Gang Wang, Xiaoguang Liu
- Abstract summary: We propose a model-heterogeneous personalized Federated learning approach based on feature extractor sharing.
It incorporates a small homogeneous feature extractor into each client's heterogeneous local model.
It achieves 1.61% higher test accuracy, while reducing communication and computation costs by 99.6% and 82.9%, respectively.
- Score: 19.403843478569303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a privacy-preserving collaborative machine learning paradigm, federated
learning (FL) has attracted significant interest from academia and industry
alike. To allow each data owner (a.k.a. FL client) to train a heterogeneous
and personalized local model based on its local data distribution, system
resources and requirements on model structure, the field of model-heterogeneous
personalized federated learning (MHPFL) has emerged. Existing MHPFL approaches
either rely on the availability of a public dataset with special
characteristics to facilitate knowledge transfer, incur high computation and
communication costs, or face potential model leakage risks. To address these
limitations, we propose a model-heterogeneous personalized Federated learning
approach based on feature Extractor Sharing (pFedES). It incorporates a small
homogeneous feature extractor into each client's heterogeneous local model.
Clients train the extractor and the local model via the proposed iterative
learning method, enabling the exchange of global generalized knowledge and
local personalized knowledge. The small homogeneous extractors produced after
local training are uploaded to the FL server for aggregation, facilitating easy
knowledge sharing among clients. We theoretically prove that pFedES can converge over wall-to-wall
time. Extensive experiments on two real-world datasets against six
state-of-the-art methods demonstrate that pFedES builds the most accurate
model, while incurring low communication and computation costs. Compared with
the best-performing baseline, it achieves 1.61% higher test accuracy, while
reducing communication and computation costs by 99.6% and 82.9%, respectively.
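The sharing step the abstract describes lends itself to a short illustration. Below is a minimal sketch, assuming a FedAvg-style weighted average over the uploaded extractors; the function names, the `local_train` client API, and the weighting by sample count are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of pFedES-style feature extractor sharing (server side).
# Only the small homogeneous extractor travels between clients and server;
# each heterogeneous local model stays on its client.
from collections import OrderedDict


def aggregate_extractors(client_states, client_weights):
    """Weighted average (FedAvg-style) over homogeneous extractor state dicts.

    All state dicts share identical keys and shapes because every client
    uses the same small extractor architecture. `client_weights` could be
    local sample counts (an assumption, not specified in the abstract).
    """
    total = float(sum(client_weights))
    global_state = OrderedDict()
    for key in client_states[0]:
        global_state[key] = sum(
            (w / total) * state[key].float()
            for state, w in zip(client_states, client_weights)
        )
    return global_state


def run_round(clients, global_extractor_state):
    """One communication round, server side.

    Each client is assumed to expose a hypothetical
    `local_train(extractor_state)` that runs the paper's iterative local
    training and returns its updated extractor weights and sample count.
    """
    states, weights = [], []
    for client in clients:
        state, n_samples = client.local_train(global_extractor_state)
        states.append(state)
        weights.append(n_samples)
    return aggregate_extractors(states, weights)
```

Because only the small extractor's parameters are transmitted and aggregated, per-round communication scales with the extractor size rather than with the much larger heterogeneous models, which is consistent with the reported 99.6% communication saving.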
Related papers
- pFedAFM: Adaptive Feature Mixture for Batch-Level Personalization in Heterogeneous Federated Learning [34.01721941230425]
We propose a model-heterogeneous personalized Federated learning approach with Adaptive Feature Mixture (pFedAFM) for supervised learning tasks.
It significantly outperforms 7 state-of-the-art MHPFL methods, achieving up to 7.93% accuracy improvement.
arXiv Detail & Related papers (2024-04-27T09:52:59Z)
- pFedMoE: Data-Level Personalization with Mixture of Experts for Model-Heterogeneous Personalized Federated Learning [35.72303739409116]
We propose a model-heterogeneous personalized Federated learning with Mixture of Experts (pFedMoE) method.
It assigns a shared homogeneous small feature extractor and a local gating network to each client's local heterogeneous large model.
Overall, pFedMoE enhances local model personalization at a fine-grained data level; a hedged sketch of this gated mixture appears after this list.
arXiv Detail & Related papers (2024-02-02T12:09:20Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a challenge that cannot be neglected.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- pFedLoRA: Model-Heterogeneous Personalized Federated Learning with LoRA Tuning [35.59830784463706]
Federated learning (FL) is an emerging machine learning paradigm in which a central server coordinates multiple participants (clients) to collaboratively train on decentralized data.
We propose a novel and efficient model-heterogeneous personalized Federated learning framework based on LoRA tuning (pFedLoRA).
Experiments on two benchmark datasets demonstrate that pFedLoRA outperforms six state-of-the-art baselines.
arXiv Detail & Related papers (2023-10-20T05:24:28Z)
- PFL-GAN: When Client Heterogeneity Meets Generative Models in Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation.
Empirical results from rigorous experiments on several well-known datasets demonstrate the effectiveness of PFL-GAN.
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
- Efficient Personalized Federated Learning via Sparse Model-Adaptation [47.088124462925684]
Federated Learning (FL) aims to train machine learning models for multiple clients without sharing their own private data.
We propose pFedGate, which achieves efficient personalized FL by adaptively learning sparse local models.
We show that pFedGate achieves superior global accuracy, individual accuracy and efficiency simultaneously over state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T12:21:34Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape of the original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training [84.81043932706375]
We propose Dis-PFL, a novel personalized federated learning framework built on a decentralized (peer-to-peer) communication protocol.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
arXiv Detail & Related papers (2022-06-01T02:20:57Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model on the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- Federated Multi-Task Learning under a Mixture of Distributions [10.00087964926414]
Federated Learning (FL) is a framework for on-device collaborative training of machine learning models.
Early efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client.
We study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions.
arXiv Detail & Related papers (2021-08-23T15:47:53Z)
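As noted in the pFedMoE entry above, its data-level personalization can be pictured as a gated mixture of two feature "experts". The sketch below is a hedged reading of that summary only: the module names, the softmax gate over two experts, and the assumption that both extractors emit features of the same dimension are illustrative, not the paper's code.

```python
# Hypothetical pFedMoE-style gated mixture: a local gating network weighs
# the shared homogeneous small extractor against the client's own
# heterogeneous extractor, per example. Dimensions are assumptions.
import torch.nn as nn


class GatedFeatureMixture(nn.Module):
    def __init__(self, shared_extractor, local_extractor, feat_dim, num_classes):
        super().__init__()
        self.shared = shared_extractor  # homogeneous; aggregated by the server
        self.local = local_extractor    # heterogeneous; never uploaded
        # Local gating network: per-example mixing weights over the two experts.
        self.gate = nn.Sequential(nn.Flatten(), nn.LazyLinear(2), nn.Softmax(dim=-1))
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        g = self.gate(x)  # shape (batch, 2)
        # Both extractors are assumed to output (batch, feat_dim) features.
        mixed = g[:, :1] * self.shared(x) + g[:, 1:] * self.local(x)
        return self.head(mixed)
```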