Federated Domain Generalization via Prompt Learning and Aggregation
- URL: http://arxiv.org/abs/2411.10063v1
- Date: Fri, 15 Nov 2024 09:26:00 GMT
- Title: Federated Domain Generalization via Prompt Learning and Aggregation
- Authors: Shuai Gong, Chaoran Cui, Chunyun Zhang, Wenna Wang, Xiushan Nie, Lei Zhu
- Abstract summary: Federated domain generalization (FedDG) aims to improve the global model generalization in unseen domains.
A common strategy in existing FedDG studies involves sharing domain-specific knowledge among clients.
We introduce prompt learning to adapt pre-trained vision-language models (VLMs) in the FedDG scenario.
- Score: 20.933631678895765
- Abstract: Federated domain generalization (FedDG) aims to improve the global model generalization in unseen domains by addressing data heterogeneity under privacy-preserving constraints. A common strategy in existing FedDG studies involves sharing domain-specific knowledge among clients, such as spectrum information, class prototypes, and data styles. However, this knowledge is extracted directly from local client samples, and sharing such sensitive information poses a potential risk of data leakage, which might not fully meet the requirements of FedDG. In this paper, we introduce prompt learning to adapt pre-trained vision-language models (VLMs) in the FedDG scenario, and leverage locally learned prompts as a more secure bridge to facilitate knowledge transfer among clients. Specifically, we propose a novel FedDG framework through Prompt Learning and AggregatioN (PLAN), which comprises two training stages to collaboratively generate local prompts and global prompts at each federated round. First, each client performs both text and visual prompt learning using their own data, with local prompts indirectly synchronized by regarding the global prompts as a common reference. Second, all domain-specific local prompts are exchanged among clients and selectively aggregated into the global prompts using lightweight attention-based aggregators. The global prompts are finally applied to adapt VLMs to unseen target domains. As our PLAN framework requires training only a limited number of prompts and lightweight aggregators, it offers notable advantages in computational and communication efficiency for FedDG. Extensive experiments demonstrate the superior generalization ability of PLAN across four benchmark datasets.
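The second stage of PLAN, where local prompts are selectively fused into global prompts, can be pictured with a short sketch. The code below is not the authors' implementation: the dot-product scoring, the temperature, and all names (`aggregate_prompts`, `softmax`) are illustrative assumptions about how a lightweight attention-based aggregator might weight domain-specific local prompts against the current global prompt.

```python
# Hypothetical sketch of attention-based prompt aggregation (not the PLAN authors' code).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_prompts(local_prompts, query, temperature=1.0):
    """Selectively fuse domain-specific local prompts into a global prompt.

    local_prompts: (num_clients, prompt_len, dim) prompts uploaded by clients.
    query:         (prompt_len, dim) current global prompt acting as the query.
    Returns the updated global prompt of shape (prompt_len, dim).
    """
    # Score each client's prompt against the global query (scaled dot-product):
    # scores[i] = mean similarity between client i's prompt tokens and the query.
    scores = np.einsum('cld,ld->c', local_prompts, query) / (
        local_prompts.shape[1] * np.sqrt(local_prompts.shape[2]))
    weights = softmax(scores / temperature)          # (num_clients,)
    # Attention-weighted combination of local prompts -> new global prompt.
    return np.einsum('c,cld->ld', weights, local_prompts)

# Toy usage: 4 clients, prompt length 8, embedding dim 16.
rng = np.random.default_rng(0)
local_prompts = rng.normal(size=(4, 8, 16))
global_prompt = rng.normal(size=(8, 16))
print(aggregate_prompts(local_prompts, global_prompt).shape)  # (8, 16)
```

In this reading, clients whose prompts already agree with the global reference receive more weight, which is one plausible way to realize "selective" aggregation.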
Related papers
- FedCCL: Federated Dual-Clustered Feature Contrast Under Domain Heterogeneity [43.71967577443732]
Federated learning (FL) facilitates a privacy-preserving neural network training paradigm through collaboration between edge clients and a central server.
Recent research is limited to simply using averaged signals as a form of regularization, focusing on only one aspect of the non-IID challenges.
We propose a dual-clustered feature contrast-based FL framework with dual focuses.
arXiv Detail & Related papers (2024-04-14T13:56:30Z)
- Global and Local Prompts Cooperation via Optimal Transport for Federated Learning [13.652593797756774]
We present Federated Prompts Cooperation via Optimal Transport (FedOTP), which introduces efficient collaborative prompt learning strategies to capture diverse category traits on a per-client basis.
Specifically, for each client, we learn a global prompt to extract consensus knowledge among clients, and a local prompt to capture client-specific category characteristics.
Unbalanced Optimal Transport is then employed to align local visual features with these prompts, striking a balance between global consensus and local personalization.
arXiv Detail & Related papers (2024-02-29T11:43:04Z)
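As a rough illustration of the transport-plan idea in FedOTP above, here is a minimal balanced entropic Sinkhorn solver. FedOTP itself uses unbalanced OT with relaxed marginals, so this simplified sketch (the function names and the cosine cost are assumptions) only shows how visual features might be softly assigned to prompt tokens.

```python
# Simplified *balanced* Sinkhorn sketch; FedOTP uses unbalanced OT, so treat
# this only as an illustration of aligning features with prompts via a plan.
import numpy as np

def sinkhorn(cost, eps=0.1, n_iters=200):
    """Entropic OT between uniform marginals over features and prompt tokens.

    cost: (m, n) pairwise cost between m visual features and n prompt tokens.
    Returns the (m, n) transport plan.
    """
    m, n = cost.shape
    a, b = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones(m)
    for _ in range(n_iters):         # alternating marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy usage: align 6 feature vectors with 4 prompt tokens.
rng = np.random.default_rng(1)
feats = rng.normal(size=(6, 32))
prompts = rng.normal(size=(4, 32))
sim = (feats @ prompts.T) / (np.linalg.norm(feats, axis=1, keepdims=True)
                             * np.linalg.norm(prompts, axis=1))
plan = sinkhorn(1.0 - sim)           # cost = 1 - cosine similarity
print(plan.shape, plan.sum())        # (6, 4), total mass approx. 1.0
```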
- Federated Active Learning for Target Domain Generalisation [20.582521330618768]
We introduce FEDALV, composed of Active Learning (AL) and Federated Domain Generalisation (FDG).
FDG enables an image classification model trained on limited source-domain client data to generalise to an unseen target domain without sharing images.
FEDALV matches the target accuracy of full training while sampling as little as 5% of the source clients' data.
arXiv Detail & Related papers (2023-12-04T14:50:23Z)
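FEDALV's actual acquisition function is more involved than shown here; the sketch below only illustrates budgeted uncertainty sampling, with the 5% budget taken from the summary above. All names are hypothetical.

```python
# Generic uncertainty-based selection under a fixed budget; an assumed
# simplification of FEDALV's acquisition step, not its actual criterion.
import numpy as np

def select_active_batch(probs, budget_frac=0.05):
    """Pick the most uncertain unlabeled samples under a fixed budget.

    probs: (num_samples, num_classes) predicted class probabilities.
    Returns indices of the top `budget_frac` fraction by predictive entropy.
    """
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    k = max(1, int(budget_frac * len(probs)))
    return np.argsort(-entropy)[:k]

rng = np.random.default_rng(2)
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
picked = select_active_batch(probs)   # 50 indices = 5% of 1000
print(len(picked))
```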
- Unlocking the Potential of Prompt-Tuning in Bridging Generalized and Personalized Federated Learning [49.72857433721424]
Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve state-of-the-art performance with improved efficiency in various computer vision tasks.
We present a novel algorithm, SGPT, that integrates Generalized FL (GFL) and Personalized FL (PFL) approaches by employing a unique combination of both shared and group-specific prompts.
arXiv Detail & Related papers (2023-10-27T17:22:09Z)
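A minimal sketch of SGPT's combination of shared and group-specific prompts, assuming (hypothetically) that each client is routed to a group by similarity between a client descriptor and learned group keys; SGPT's actual grouping and selection mechanisms differ in detail.

```python
# Hypothetical sketch: pick a group-specific prompt for a client and
# concatenate it with the shared prompt. Not SGPT's actual implementation.
import numpy as np

def build_client_prompt(shared, group_prompts, group_keys, client_desc):
    """shared:        (Ls, d) prompt used by every client.
    group_prompts: (G, Lg, d) one prompt per group.
    group_keys:    (G, d) learned keys identifying each group.
    client_desc:   (d,) descriptor of the client's data (e.g. mean feature).
    """
    # Route the client to the most similar group key.
    g = int(np.argmax(group_keys @ client_desc))
    # Prepend the shared prompt to the selected group-specific prompt.
    return np.concatenate([shared, group_prompts[g]], axis=0)

rng = np.random.default_rng(3)
prompt = build_client_prompt(
    shared=rng.normal(size=(4, 16)),
    group_prompts=rng.normal(size=(3, 4, 16)),
    group_keys=rng.normal(size=(3, 16)),
    client_desc=rng.normal(size=16),
)
print(prompt.shape)  # (8, 16): shared prompt followed by the group prompt
```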
- FedLPA: One-shot Federated Learning with Layer-Wise Posterior Aggregation [7.052566906745796]
FedLPA is a layer-wise posterior aggregation method for federated learning.
We show that FedLPA significantly improves learning performance over state-of-the-art methods across several metrics.
arXiv Detail & Related papers (2023-09-30T10:51:27Z)
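One-shot layer-wise aggregation can be pictured as fusing per-layer Gaussian posteriors from all clients. The precision-weighted product below is the textbook Gaussian fusion rule, used here only as an assumed stand-in for FedLPA's actual aggregation procedure.

```python
# Textbook product-of-Gaussians fusion per layer; an assumed stand-in for
# FedLPA's aggregation, not its actual algorithm.
import numpy as np

def aggregate_layer(means, variances):
    """Fuse clients' diagonal Gaussian posteriors for one layer.

    means, variances: per-client arrays of shape (num_params,).
    The fused precision is the sum of client precisions; the fused mean is
    the precision-weighted average of client means.
    """
    precisions = [1.0 / v for v in variances]
    total_prec = sum(precisions)
    fused_mean = sum(p * m for p, m in zip(precisions, means)) / total_prec
    return fused_mean, 1.0 / total_prec

rng = np.random.default_rng(4)
means = [rng.normal(size=10) for _ in range(5)]          # 5 clients, 10 params
variances = [rng.uniform(0.5, 2.0, size=10) for _ in range(5)]
mu, var = aggregate_layer(means, variances)
print(mu.shape, var.shape)  # (10,) (10,)
```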
- Personalized Federated Learning via Amortized Bayesian Meta-Learning [21.126405589760367]
We introduce a new perspective on personalized federated learning through Amortized Bayesian Meta-Learning.
Specifically, we propose a novel algorithm called FedABML, which employs hierarchical variational inference across clients.
Our theoretical analysis provides an upper bound on the average generalization error and guarantees the generalization performance on unseen data.
arXiv Detail & Related papers (2023-07-05T11:58:58Z)
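The hierarchical variational idea in FedABML can be illustrated with the reparameterization trick: each client samples personalized parameters from a Gaussian posterior regularized toward a shared global prior. The toy sketch below assumes a diagonal Gaussian posterior; the paper's actual inference network is not described in the summary above.

```python
# Toy reparameterization sketch; assumes a diagonal Gaussian posterior,
# which is an illustration rather than FedABML's actual model.
import numpy as np

def sample_personalized_params(mu, log_var, rng):
    """Draw client parameters theta = mu + sigma * eps, with eps ~ N(0, I)."""
    return mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)

def kl_to_prior(mu, log_var, prior_mu, prior_log_var):
    """KL(q || p) between diagonal Gaussians; pulls clients toward the prior."""
    var, pvar = np.exp(log_var), np.exp(prior_log_var)
    return 0.5 * np.sum(prior_log_var - log_var
                        + (var + (mu - prior_mu) ** 2) / pvar - 1.0)

rng = np.random.default_rng(5)
prior_mu, prior_log_var = np.zeros(8), np.zeros(8)
client_mu, client_log_var = 0.1 * rng.normal(size=8), np.full(8, -1.0)
theta = sample_personalized_params(client_mu, client_log_var, rng)
print(theta.shape, kl_to_prior(client_mu, client_log_var, prior_mu, prior_log_var))
```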
- Federated Generalized Category Discovery [68.35420359523329]
Generalized category discovery (GCD) aims at grouping unlabeled samples from known and unknown classes.
To meet the recent decentralization trend in the community, we introduce a practical yet challenging task, namely Federated GCD (Fed-GCD).
The goal of Fed-GCD is to train a generic GCD model through client collaboration under privacy-preserving constraints.
arXiv Detail & Related papers (2023-05-23T14:27:41Z)
- Knowledge-Aware Federated Active Learning with Non-IID Data [75.98707107158175]
We propose a federated active learning paradigm to efficiently learn a global model with a limited annotation budget.
The main challenge faced by federated active learning is the mismatch between the active sampling goal of the global model on the server and that of the local clients.
We propose Knowledge-Aware Federated Active Learning (KAFAL), which consists of Knowledge-Specialized Active Sampling (KSAS) and Knowledge-Compensatory Federated Update (KCFU).
arXiv Detail & Related papers (2022-11-24T13:08:43Z)
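The server/client mismatch that KAFAL targets can be scored, in simplified form, as the disagreement between the local (specialized) and global models' predictions; a per-sample KL divergence is one natural proxy. The sketch below makes that assumption and is not KAFAL's exact criterion.

```python
# Per-sample KL disagreement between local and global predictions; an
# assumed proxy for KSAS-style scoring, not KAFAL's actual algorithm.
import numpy as np

def disagreement_scores(local_probs, global_probs, eps=1e-12):
    """KL(local || global) per sample: large where the specialized local
    model and the global model disagree most."""
    return np.sum(local_probs * (np.log(local_probs + eps)
                                 - np.log(global_probs + eps)), axis=1)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(6)
local_p = softmax(rng.normal(size=(200, 10)))
global_p = softmax(rng.normal(size=(200, 10)))
top = np.argsort(-disagreement_scores(local_p, global_p))[:20]  # query 20 samples
print(top.shape)
```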
- Federated and Generalized Person Re-identification through Domain and Feature Hallucinating [88.77196261300699]
We study the problem of federated domain generalization (FedDG) for person re-identification (re-ID).
We propose a novel method, called "Domain and Feature Hallucinating (DFH)", to produce diverse features for learning generalized local and global models.
Our method achieves state-of-the-art performance for FedDG on four large-scale re-ID benchmarks.
arXiv Detail & Related papers (2022-03-05T09:15:13Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates of the low-dimensional local parameters for every update of the shared representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
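The alternating scheme described in the entry above (many cheap local-head updates per update of the shared representation) can be sketched for a linear model. The function below is a hypothetical illustration, with the server averaging the returned representation gradients across clients.

```python
# Hypothetical linear-regression sketch of alternating updates: many head
# steps per client for each single update of the shared representation B.
import numpy as np

def client_round(B, w, X, y, head_steps=20, lr=0.05):
    """B: (d, k) shared representation; w: (k,) this client's local head;
    X: (n, d), y: (n,) local data. Returns the refit head and a gradient
    for B from the loss 0.5 * ||X B w - y||^2 / n."""
    Z = X @ B                                   # low-dimensional features
    for _ in range(head_steps):                 # many cheap head updates
        w = w - lr * Z.T @ (Z @ w - y) / len(y)
    residual = X @ B @ w - y
    grad_B = X.T @ np.outer(residual, w) / len(y)   # one representation grad
    return w, grad_B

rng = np.random.default_rng(7)
d, k, n = 20, 3, 100
B = rng.normal(size=(d, k)) / np.sqrt(d)
clients = [(rng.normal(size=(n, d)), rng.normal(size=n)) for _ in range(4)]
heads = [np.zeros(k) for _ in clients]
for _ in range(10):                             # federated rounds
    grads = []
    for i, (X, y) in enumerate(clients):
        heads[i], g = client_round(B, heads[i], X, y)
        grads.append(g)
    B -= 0.05 * np.mean(grads, axis=0)          # server averages and steps
print(B.shape)
```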