Visual Prompt Based Personalized Federated Learning
- URL: http://arxiv.org/abs/2303.08678v1
- Date: Wed, 15 Mar 2023 15:02:15 GMT
- Title: Visual Prompt Based Personalized Federated Learning
- Authors: Guanghao Li, Wansen Wu, Yan Sun, Li Shen, Baoyuan Wu, Dacheng Tao
- Abstract summary: We propose a novel PFL framework for image classification tasks, dubbed pFedPT, that leverages personalized visual prompts to implicitly represent local data distribution information of clients.
Experiments on the CIFAR10 and CIFAR100 datasets show that pFedPT outperforms several state-of-the-art (SOTA) PFL algorithms by a large margin in various settings.
- Score: 83.04104655903846
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As a popular paradigm of distributed learning, personalized federated
learning (PFL) allows personalized models to improve generalization ability and
robustness by utilizing knowledge from all distributed clients. Most existing
PFL algorithms tackle personalization in a model-centric way, such as
personalized layer partition, model regularization, and model interpolation,
which all fail to take into account the data characteristics of distributed
clients. In this paper, we propose a novel PFL framework for image
classification tasks, dubbed pFedPT, that leverages personalized visual prompts
to implicitly represent local data distribution information of clients and
provides that information to the aggregation model to help with classification
tasks. Specifically, in each round of pFedPT training, each client generates a
local personalized prompt related to local data distribution. Then, the local
model is trained on the input composed of raw data and a visual prompt to learn
the distribution information contained in the prompt. During model testing, the
aggregated model obtains prior knowledge of the data distributions based on the
prompts, which can be seen as an adaptive fine-tuning of the aggregation model
to improve model performance on different clients. Furthermore, the visual
prompt can be added as an orthogonal method to implement personalization on the
client for existing FL methods to boost their performance. Experiments on the
CIFAR10 and CIFAR100 datasets show that pFedPT outperforms several
state-of-the-art (SOTA) PFL algorithms by a large margin in various settings.
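The training loop described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: a linear model stands in for the CNN backbone, the "visual prompt" is reduced to an additive feature-space vector, and all names and hyperparameters here are illustrative. The key structural points it does reproduce are that each client trains on inputs composed with its personalized prompt, and that only the model, never the prompts, is aggregated by the server.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_update(w, prompt, X, y, lr=0.1, steps=20):
    """One local round of a pFedPT-style client (illustrative sketch):
    the raw input is composed with the client's personalized prompt, and
    both the shared linear model `w` and the local prompt are trained by
    gradient descent on squared error."""
    for _ in range(steps):
        Xp = X + prompt                            # input composed with the prompt
        err = Xp @ w - y
        grad_w = Xp.T @ err / len(y)               # gradient w.r.t. shared model
        grad_p = (err[:, None] * w).mean(axis=0)   # gradient w.r.t. local prompt
        w = w - lr * grad_w
        prompt = prompt - lr * grad_p
    return w, prompt

# Two clients whose feature distributions are shifted in opposite directions.
d = 5
datasets = []
for shift in (0.5, -0.5):
    X = rng.normal(size=(64, d)) + shift
    y = X @ np.ones(d)                             # noiseless linear targets
    datasets.append((X, y))

w = np.zeros(d)                                    # shared (aggregated) model
prompts = [np.zeros(d), np.zeros(d)]               # personalized, never aggregated

for _ in range(3):                                 # communication rounds
    results = [client_update(w, p, X, y) for p, (X, y) in zip(prompts, datasets)]
    w = np.mean([r[0] for r in results], axis=0)   # FedAvg on the model only
    prompts = [r[1] for r in results]

final_loss = np.mean([(((X + p) @ w - y) ** 2).mean()
                      for p, (X, y) in zip(prompts, datasets)])
```

Because the prompts stay on the clients, each one is free to absorb the local distribution shift, which is the personalization effect the abstract attributes to the visual prompt.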
Related papers
- Personalized Federated Learning via Feature Distribution Adaptation [3.410799378893257]
Federated learning (FL) is a distributed learning framework that leverages commonalities between distributed client datasets to train a global model.
Personalized federated learning (PFL) seeks to address client heterogeneity by learning individual models tailored to each client.
We propose an algorithm, pFedFDA, that efficiently generates personalized models by adapting global generative classifiers to their local feature distributions.
arXiv Detail & Related papers (2024-11-01T03:03:52Z)
- Regularizing and Aggregating Clients with Class Distribution for Personalized Federated Learning [0.8287206589886879]
Class-wise Federated Averaging (cwFedAVG) performs federated averaging class-wise, creating multiple global models, one per class, on the server.
Each local model integrates these global models weighted by its estimated local class distribution, derived from the L2-norms of deep network weights.
A newly designed Weight Distribution Regularizer (WDR) further enhances the accuracy of the estimated local class distribution.
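The mechanism summarized above can be sketched in a few lines. This is my reading of the blurb, not the paper's exact formulation: the L2 norm of each class's row in the final-layer weight matrix serves as a proxy for that class's share of the local data, and each local model is then composed as a mixture of per-class global models weighted by that estimate. Function names and the flattened-parameter representation are illustrative.

```python
import numpy as np

def estimate_class_distribution(W):
    """Estimate the local class distribution from the L2 norms of the
    rows of the final-layer weight matrix `W` (one row per class)."""
    norms = np.linalg.norm(W, axis=1)
    return norms / norms.sum()

def personalize(per_class_globals, class_dist):
    """Compose a local model as a mixture of the per-class global
    models, weighted by the estimated local class distribution."""
    stacked = np.stack(per_class_globals)      # (num_classes, num_params)
    return (class_dist[:, None] * stacked).sum(axis=0)

W = np.array([[3.0, 0.0],
              [0.0, 4.0],
              [0.0, 0.0]])                     # final-layer weights, 3 classes
dist = estimate_class_distribution(W)          # -> [3/7, 4/7, 0]
local = personalize([np.array([1.0, 1.0]),
                     np.array([2.0, 2.0]),
                     np.array([3.0, 3.0])], dist)
```

A class the client has never seen tends to leave its weight row near zero, so its global model receives (near-)zero mixture weight, which is the intuition behind using weight norms as a distribution proxy.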
arXiv Detail & Related papers (2024-06-12T01:32:24Z)
- Multi-Level Additive Modeling for Structured Non-IID Federated Learning [54.53672323071204]
We train models organized in a multi-level structure, called Multi-level Additive Models (MAM), for better knowledge sharing across heterogeneous clients.
In federated MAM (FeMAM), each client is assigned to at most one model per level and its personalized prediction sums up the outputs of models assigned to it across all levels.
Experiments show that FeMAM surpasses existing clustered FL and personalized FL methods in various non-IID settings.
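The additive prediction rule stated above is simple enough to show directly. The following is a toy sketch, not FeMAM's implementation: the three example "models" and their coefficients are hypothetical, and real levels would hold trained networks rather than scalar functions. What it demonstrates is the stated rule: at most one assigned model per level, and the personalized prediction is the sum of the assigned models' outputs.

```python
from typing import Callable, Optional, Sequence

def femam_predict(x: float,
                  assigned: Sequence[Optional[Callable[[float], float]]]) -> float:
    """`assigned` holds, for one client, the single model it is assigned
    at each level (None if the client has no model at that level); the
    personalized prediction sums the assigned models' outputs."""
    return sum(model(x) for model in assigned if model is not None)

# Hypothetical three-level structure: a global model, a cluster-level
# model, and a client-specific model, each an additive correction.
global_model  = lambda x: 1.0 * x
cluster_model = lambda x: 0.5 * x
client_model  = lambda x: -0.2 * x

pred = femam_predict(2.0, [global_model, cluster_model, client_model])  # 2.6
```

Skipping a level (passing `None`) simply drops that term from the sum, which is how a structure like this can interpolate between a single shared model and fully personalized ones.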
arXiv Detail & Related papers (2024-05-26T07:54:53Z)
- MAP: Model Aggregation and Personalization in Federated Learning with Incomplete Classes [49.22075916259368]
In many real-world applications, data samples are distributed across local devices.
In this paper, we focus on a special kind of non-I.I.D. scenario where clients own incomplete sets of classes.
Our proposed algorithm, named MAP, simultaneously achieves the aggregation and personalization goals in FL.
arXiv Detail & Related papers (2024-04-14T12:22:42Z)
- Efficient Model Personalization in Federated Learning via Client-Specific Prompt Generation [38.42808389088285]
Federated learning (FL) emerges as a decentralized learning framework which trains models from multiple distributed clients without sharing their data to preserve privacy.
We propose a novel personalized FL framework of client-specific Prompt Generation (pFedPG).
pFedPG learns to deploy a personalized prompt generator at the server for producing client-specific visual prompts that efficiently adapt frozen backbones to local data distributions.
arXiv Detail & Related papers (2023-08-29T15:03:05Z)
- PFL-GAN: When Client Heterogeneity Meets Generative Models in Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios: we first learn the similarity among clients and then develop a weighted collaborative data aggregation.
Empirical results from rigorous experiments on several well-known datasets demonstrate the effectiveness of PFL-GAN.
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose FedSampling, a novel data-uniform sampling strategy for federated learning.
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
- Group Personalized Federated Learning [15.09115201646396]
Federated learning (FL) can help promote data privacy by training a shared model in a decentralized manner on the physical devices of clients.
In this paper, we present the group personalization approach for applications of FL.
arXiv Detail & Related papers (2022-10-04T19:20:19Z)
- Parameterized Knowledge Transfer for Personalized Federated Learning [11.223753730705374]
We propose a novel training framework to employ personalized models for different clients.
It is demonstrated that the proposed framework is the first federated learning paradigm that realizes personalized model training.
arXiv Detail & Related papers (2021-11-04T13:41:45Z)
- Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.