Parameterized Knowledge Transfer for Personalized Federated Learning
- URL: http://arxiv.org/abs/2111.02862v1
- Date: Thu, 4 Nov 2021 13:41:45 GMT
- Title: Parameterized Knowledge Transfer for Personalized Federated Learning
- Authors: Jie Zhang, Song Guo, Xiaosong Ma, Haozhao Wang, Wenchao Xu, Feijie Wu
- Abstract summary: We propose a novel training framework to employ personalized models for different clients.
It is demonstrated that the proposed framework is the first federated learning paradigm to realize personalized model training via parameterized group knowledge transfer.
- Score: 11.223753730705374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, personalized federated learning (pFL) has attracted
increasing attention for its potential in dealing with statistical
heterogeneity among clients. However, the state-of-the-art pFL methods rely on
model parameter aggregation at the server side, which requires all models to
have the same structure and size and thus limits applicability to more
heterogeneous scenarios. To remove this model constraint, we exploit the
potential of heterogeneous model settings and propose a novel training
framework to employ personalized models for different clients. Specifically, we
formulate the aggregation procedure in original pFL into a personalized group
knowledge transfer training algorithm, namely, KT-pFL, which enables each
client to maintain a personalized soft prediction at the server side to guide
the others' local training. KT-pFL updates the personalized soft prediction of
each client by a linear combination of all local soft predictions using a
knowledge coefficient matrix, which can adaptively reinforce the collaboration
among clients who own similar data distribution. Furthermore, to quantify the
contributions of each client to others' personalized training, the knowledge
coefficient matrix is parameterized so that it can be trained simultaneously
with the models. The knowledge coefficient matrix and the model parameters are
updated alternately in each round via gradient descent.
Extensive experiments on various datasets (EMNIST, Fashion-MNIST, CIFAR-10)
are conducted under different settings (heterogeneous models and data
distributions). It is demonstrated that the proposed framework is the first
federated learning paradigm that realizes personalized model training via
parameterized group knowledge transfer while achieving significant performance
gains compared with state-of-the-art algorithms.
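The aggregation step the abstract describes can be sketched concretely: each client's personalized soft prediction is a linear combination of all clients' local soft predictions, weighted by one row of a trainable knowledge coefficient matrix. The sketch below is illustrative, assuming soft predictions on a shared proxy dataset and a row-wise softmax normalization of the coefficients; the exact shapes and normalization are not specified in the abstract.

```python
import numpy as np

def kt_pfl_aggregate(local_soft_preds, knowledge_coeffs):
    """Personalized soft prediction per client as a linear combination
    of all clients' local soft predictions (hypothetical sketch).

    local_soft_preds: (n_clients, n_samples, n_classes) soft labels
        produced by each client's model on a shared proxy dataset.
    knowledge_coeffs: (n_clients, n_clients) trainable matrix; row i
        weights how much client i learns from each other client.
    """
    # Row-wise softmax keeps each client's weights positive and summing
    # to 1, reinforcing collaboration between clients with similar data.
    c = np.exp(knowledge_coeffs - knowledge_coeffs.max(axis=1, keepdims=True))
    c = c / c.sum(axis=1, keepdims=True)
    # Personalized prediction for client i: sum_j c[i, j] * preds[j]
    return np.einsum('ij,jsk->isk', c, local_soft_preds)
```

With an all-zero coefficient matrix the softmax yields uniform weights, so every client's personalized prediction reduces to the plain average of the local soft predictions; training the matrix then skews these weights toward similar clients.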
Related papers
- Multi-Level Additive Modeling for Structured Non-IID Federated Learning [54.53672323071204]
We train models organized in a multi-level structure, called Multi-level Additive Models (MAM), for better knowledge-sharing across heterogeneous clients.
In federated MAM (FeMAM), each client is assigned to at most one model per level and its personalized prediction sums up the outputs of models assigned to it across all levels.
Experiments show that FeMAM surpasses existing clustered FL and personalized FL methods in various non-IID settings.
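The FeMAM prediction rule summarized above is additive: a client is assigned at most one model per level, and its personalized output is the sum of the assigned models' outputs. A minimal sketch under that description, with hypothetical names (`level_models`, `assignment`) not taken from the paper:

```python
import numpy as np

def femam_predict(level_models, assignment, x):
    """Sum the outputs of the models assigned to one client, one
    model at most per level (illustrative sketch of the FeMAM rule).

    level_models: list over levels, each a dict model_id -> callable.
    assignment: list over levels with this client's model_id, or None
        when no model at that level is assigned to the client.
    """
    out = np.zeros_like(x, dtype=float)
    for level, model_id in enumerate(assignment):
        if model_id is not None:
            out = out + level_models[level][model_id](x)
    return out
```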
arXiv Detail & Related papers (2024-05-26T07:54:53Z) - DA-PFL: Dynamic Affinity Aggregation for Personalized Federated Learning [13.393529840544117]
Existing personalized federated learning models prefer to aggregate similar clients with similar data distribution to improve the performance of learning models.
We propose a novel Dynamic Affinity-based Personalized Federated Learning model (DA-PFL) to alleviate the class imbalance problem.
arXiv Detail & Related papers (2024-03-14T11:12:10Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - FedDWA: Personalized Federated Learning with Dynamic Weight Adjustment [20.72576355616359]
We propose a new PFL algorithm called FedDWA (Federated Learning with Dynamic Weight Adjustment) to address this problem.
FedDWA computes personalized aggregation weights based on collected models from clients.
We conduct extensive experiments using five real datasets and the results demonstrate that FedDWA can significantly reduce the communication traffic and achieve much higher model accuracy than the state-of-the-art approaches.
arXiv Detail & Related papers (2023-05-10T13:12:07Z) - Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z) - Visual Prompt Based Personalized Federated Learning [83.04104655903846]
We propose a novel PFL framework for image classification tasks, dubbed pFedPT, that leverages personalized visual prompts to implicitly represent local data distribution information of clients.
Experiments on the CIFAR10 and CIFAR100 datasets show that pFedPT outperforms several state-of-the-art (SOTA) PFL algorithms by a large margin in various settings.
arXiv Detail & Related papers (2023-03-15T15:02:15Z) - Personalizing or Not: Dynamically Personalized Federated Learning with
Incentives [37.42347737911428]
We propose personalized federated learning (FL) for learning personalized models without sharing private data.
We introduce the personalization rate, measured as the fraction of clients willing to train personalized models, into federated settings and propose DyPFL.
This technique incentivizes clients to participate in personalizing local models while allowing the adoption of the global model when it performs better.
arXiv Detail & Related papers (2022-08-12T09:51:20Z) - QuPeD: Quantized Personalization via Distillation with Applications to
Federated Learning [8.420943739336067]
Federated learning (FL) aims to train a single global model while collaboratively using multiple clients and a server.
We introduce a quantized and personalized FL algorithm, QuPeD, that facilitates collective (personalized model compression) training.
Numerically, we validate that QuPeD outperforms competing personalized FL methods, FedAvg, and local training of clients in various heterogeneous settings.
arXiv Detail & Related papers (2021-07-29T10:55:45Z) - Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z) - Federated Mutual Learning [65.46254760557073]
Federated Mutual Learning (FML) allows clients to train a generalized model collaboratively and a personalized model independently.
The experiments show that FML can achieve better performance than alternatives in the typical federated learning setting.
arXiv Detail & Related papers (2020-06-27T09:35:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.