PFL-MoE: Personalized Federated Learning Based on Mixture of Experts
- URL: http://arxiv.org/abs/2012.15589v1
- Date: Thu, 31 Dec 2020 12:51:14 GMT
- Title: PFL-MoE: Personalized Federated Learning Based on Mixture of Experts
- Authors: Binbin Guo, Yuan Mei, Danyang Xiao, Weigang Wu, Ye Yin, Hongli Chang
- Abstract summary: Federated learning (FL) avoids data sharing among training nodes so as to protect data privacy.
PFL-MoE is a generic approach and can be instantiated by integrating existing PFL algorithms.
We demonstrate the effectiveness of PFL-MoE by training the LeNet-5 and VGG-16 models on the Fashion-MNIST and CIFAR-10 datasets.
- Score: 1.8757823231879849
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is an emerging distributed machine learning paradigm
that avoids data sharing among training nodes so as to protect data privacy.
Under coordination of the FL server, each client conducts model training using
its own computing resource and private data set. The global model can be
created by aggregating the training results of clients. To cope with highly
non-IID data distributions, personalized federated learning (PFL) has been
proposed to improve overall performance by allowing each client to learn a
personalized model. However, one major drawback of a personalized model is the
loss of generalization. To achieve model personalization while maintaining
generalization, in this paper, we propose a new approach, named PFL-MoE, which
mixes outputs of the personalized model and global model via the MoE
architecture. PFL-MoE is a generic approach and can be instantiated by
integrating existing PFL algorithms. Particularly, we propose the PFL-MF
algorithm which is an instance of PFL-MoE based on the freeze-base PFL
algorithm. We further improve PFL-MF by enhancing the decision-making ability
of the MoE gating network and propose a variant algorithm, PFL-MFE. We demonstrate
the effectiveness of PFL-MoE by training the LeNet-5 and VGG-16 models on the
Fashion-MNIST and CIFAR-10 datasets with non-IID partitions.
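As a rough sketch of the mixing idea described in the abstract (not the authors' implementation; the gate parameters, toy dimensions, and function names here are hypothetical), an MoE-style gate maps the input to a weight in (0, 1) and blends the class probabilities of the personalized and global experts:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_mix(x, personalized_logits, global_logits, gate_w, gate_b):
    """Blend personalized and global model outputs with a learned gate.

    The gate maps the input features x to a scalar weight alpha in (0, 1);
    the final prediction is a convex combination of the two experts'
    class probabilities.
    """
    alpha = 1.0 / (1.0 + np.exp(-(x @ gate_w + gate_b)))  # sigmoid gate
    p_personal = softmax(personalized_logits)
    p_global = softmax(global_logits)
    return alpha * p_personal + (1.0 - alpha) * p_global

# Toy example: one input with 4 features, 3 classes.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
out = moe_mix(x,
              personalized_logits=np.array([2.0, 0.5, -1.0]),
              global_logits=np.array([0.1, 1.5, 0.2]),
              gate_w=rng.normal(size=4),
              gate_b=0.0)
print(out.sum())  # a convex mix of two probability vectors still sums to 1
```

In practice the gate would be trained jointly with the personalized expert on each client's local data; the convex combination is what lets the personalized output retain the global model's generalization.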
Related papers
- DA-PFL: Dynamic Affinity Aggregation for Personalized Federated Learning [13.393529840544117]
Existing personalized federated learning models prefer to aggregate clients with similar data distributions to improve the performance of the learned models.
We propose a novel Dynamic Affinity-based Personalized Federated Learning model (DA-PFL) to alleviate the class imbalanced problem.
arXiv Detail & Related papers (2024-03-14T11:12:10Z)
- PRIOR: Personalized Prior for Reactivating the Information Overlooked in Federated Learning [16.344719695572586]
We propose a novel scheme to inject personalized prior knowledge into a global model in each client.
At the heart of our proposed approach is a framework, PFL with Bregman Divergence (pFedBreD).
Our method reaches state-of-the-art performance on 5 datasets and outperforms other methods by up to 3.5% across 8 benchmarks.
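The pFedBreD entry above regularizes local training with a Bregman divergence. As a minimal, generic sketch of that quantity (not the paper's algorithm; the generator chosen here is illustrative), the divergence induced by a convex function F is:

```python
import numpy as np

def bregman_divergence(F, grad_F, p, q):
    """D_F(p, q) = F(p) - F(q) - <grad_F(q), p - q>."""
    return F(p) - F(q) - np.dot(grad_F(q), p - q)

# With generator F(x) = ||x||^2, the Bregman divergence reduces to the
# squared Euclidean distance between p and q.
F = lambda x: np.dot(x, x)
grad_F = lambda x: 2.0 * x

p = np.array([1.0, 2.0])
q = np.array([0.0, 0.0])
d = bregman_divergence(F, grad_F, p, q)
print(d)  # 5.0 == ||p - q||^2
```

Other generators (e.g. negative entropy, which yields the KL divergence) give different penalties for pulling a client's personalized parameters toward the injected prior.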
arXiv Detail & Related papers (2023-10-13T15:21:25Z)
- PFL-GAN: When Client Heterogeneity Meets Generative Models in Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation.
The empirical results through the rigorous experimentation on several well-known datasets demonstrate the effectiveness of PFL-GAN.
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
- Towards More Suitable Personalization in Federated Learning via Decentralized Partial Model Training [67.67045085186797]
Almost all existing systems face a large communication burden and the risk of disruption if the central FL server fails.
It personalizes the "right" components of deep models by alternately updating the shared and personal parameters.
To further promote the shared-parameter aggregation process, we propose DFed, which integrates local Sharpness Minimization.
arXiv Detail & Related papers (2023-05-24T13:52:18Z)
- Federated Learning of Shareable Bases for Personalization-Friendly Image Classification [54.72892987840267]
FedBasis learns a small set of shareable "basis" models, which can be linearly combined to form personalized models for clients.
Specifically for a new client, only a small set of combination coefficients, not the model weights, needs to be learned.
To demonstrate the effectiveness and applicability of FedBasis, we also present a more practical PFL testbed for image classification.
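As a toy illustration of the FedBasis idea above (the basis vectors, dimensions, and coefficients here are made up, and real bases would be full model weight tensors), a client's personalized model is just a linear combination of the shared bases, so a new client learns only the coefficients:

```python
import numpy as np

def personalized_weights(bases, coeffs):
    """Combine shared basis models into one personalized model.

    bases:  list of flattened weight vectors, shared by all clients.
    coeffs: per-client combination coefficients -- the only part a new
            client needs to learn in the FedBasis setting.
    """
    return sum(c * b for c, b in zip(coeffs, bases))

# Three shared bases of dimension 4; a client learns only 3 coefficients.
bases = [np.ones(4), np.arange(4.0), np.array([0.0, 1.0, 0.0, 1.0])]
coeffs = [0.5, 0.25, 0.25]
w = personalized_weights(bases, coeffs)
print(w)  # [0.5, 1.0, 1.0, 1.5]
```

The appeal is the parameter count: with k bases, personalization costs k scalars per client instead of a full copy of the model weights.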
arXiv Detail & Related papers (2023-04-16T20:19:18Z)
- Personalized Federated Learning on Long-Tailed Data via Adversarial Feature Augmentation [24.679535905451758]
PFL aims to learn personalized models for each client based on the knowledge across all clients in a privacy-preserving manner.
Existing PFL methods assume that the underlying global data across all clients are uniformly distributed without considering the long-tail distribution.
We propose Federated Learning with Adversarial Feature Augmentation (FedAFA) to address this joint problem in PFL.
arXiv Detail & Related papers (2023-03-27T13:00:20Z)
- Hierarchical Personalized Federated Learning Over Massive Mobile Edge Computing Networks [95.39148209543175]
We propose hierarchical PFL (HPFL), an algorithm for deploying PFL over massive MEC networks.
HPFL combines the objectives of training loss minimization and round latency minimization while jointly determining the optimal bandwidth allocation.
arXiv Detail & Related papers (2023-03-19T06:00:05Z)
- Visual Prompt Based Personalized Federated Learning [83.04104655903846]
We propose a novel PFL framework for image classification tasks, dubbed pFedPT, that leverages personalized visual prompts to implicitly represent local data distribution information of clients.
Experiments on the CIFAR10 and CIFAR100 datasets show that pFedPT outperforms several state-of-the-art (SOTA) PFL algorithms by a large margin in various settings.
arXiv Detail & Related papers (2023-03-15T15:02:15Z)
- Achieving Personalized Federated Learning with Sparse Local Models [75.76854544460981]
Federated learning (FL) is vulnerable to heterogeneously distributed data.
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
Existing PFL solutions either demonstrate unsatisfactory generalization towards different model architectures or cost enormous extra computation and memory.
We propose FedSpa, a novel PFL scheme that employs personalized sparse masks to customize sparse local models on the edge.
arXiv Detail & Related papers (2022-01-27T08:43:11Z)
- Splitfed learning without client-side synchronization: Analyzing client-side split network portion size to overall performance [4.689140226545214]
Federated Learning (FL), Split Learning (SL), and SplitFed Learning (SFL) are three recent developments in distributed machine learning.
This paper studies SFL without client-side model synchronization.
It provides only 1%-2% better accuracy than Multi-head Split Learning on the MNIST test set.
arXiv Detail & Related papers (2021-09-19T22:57:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.