FedGCA: Global Consistent Augmentation Based Single-Source Federated Domain Generalization
- URL: http://arxiv.org/abs/2409.14671v1
- Date: Mon, 23 Sep 2024 02:24:46 GMT
- Title: FedGCA: Global Consistent Augmentation Based Single-Source Federated Domain Generalization
- Authors: Yuan Liu, Shu Wang, Zhe Qu, Xingyu Li, Shichao Kan, Jianxin Wang
- Abstract summary: Federated Domain Generalization (FedDG) aims to train a global model that generalizes to unseen domains using multi-domain training samples.
Clients in federated learning networks are often confined to a single, non-IID domain due to inherent sampling and temporal limitations.
We introduce the Federated Global Consistent Augmentation (FedGCA) method, which incorporates a style-complement module to augment data samples with diverse domain styles.
- Score: 29.989092118578103
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Domain Generalization (FedDG) aims to train a global model that generalizes to unseen domains using multi-domain training samples. However, clients in federated learning networks are often confined to a single, non-IID domain due to inherent sampling and temporal limitations. The lack of cross-domain interaction and the in-domain divergence impede the learning of domain-common features and limit the effectiveness of existing FedDG methods; we refer to this setting as the single-source FedDG (sFedDG) problem. To address this, we introduce the Federated Global Consistent Augmentation (FedGCA) method, which incorporates a style-complement module to augment data samples with diverse domain styles. To ensure the effective integration of augmented samples, FedGCA employs both global-guided semantic consistency and class consistency, mitigating inconsistencies from local semantics within individual clients and from classes across multiple clients. Extensive experiments demonstrate the superiority of FedGCA.
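The abstract describes FedGCA only at a high level. The following is a minimal, hypothetical PyTorch-style sketch of the idea: a style-complement step that perturbs per-channel feature statistics to synthesize diverse domain styles, combined with a global-guided semantic consistency term and a class consistency term in the client update. All function names, the AdaIN-style perturbation, the KL-based loss forms, and the weights are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the sFedDG client update described in the abstract.
# Assumptions: style_complement, local_update, lam_sem, lam_cls are made-up
# names; the loss forms are one plausible instantiation of the described idea.
import torch
import torch.nn.functional as F


def style_complement(x: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Perturb per-channel statistics of (B, C, H, W) inputs to mimic unseen styles."""
    mu = x.mean(dim=(2, 3), keepdim=True)
    sigma = x.std(dim=(2, 3), keepdim=True) + 1e-6
    # Mix each sample's statistics with those of a randomly shuffled sample.
    idx = torch.randperm(x.size(0))
    mu_new = alpha * mu + (1 - alpha) * mu[idx]
    sigma_new = alpha * sigma + (1 - alpha) * sigma[idx]
    return (x - mu) / sigma * sigma_new + mu_new


def local_update(local_model, global_model, batch, labels,
                 lam_sem: float = 1.0, lam_cls: float = 1.0) -> torch.Tensor:
    """One client step: supervised loss plus two consistency terms."""
    aug = style_complement(batch)
    logits = local_model(batch)
    logits_aug = local_model(aug)
    loss_sup = F.cross_entropy(logits, labels)

    # Global-guided semantic consistency: keep local predictions on the
    # style-augmented view close to the frozen global model's predictions.
    with torch.no_grad():
        global_probs = F.softmax(global_model(batch), dim=1)
    loss_sem = F.kl_div(F.log_softmax(logits_aug, dim=1), global_probs,
                        reduction="batchmean")

    # Class consistency: original and augmented views of the same sample
    # should agree on the predicted class distribution.
    loss_cls = F.kl_div(F.log_softmax(logits_aug, dim=1),
                        F.softmax(logits, dim=1).detach(),
                        reduction="batchmean")

    return loss_sup + lam_sem * loss_sem + lam_cls * loss_cls
```

In an actual federated round, the server would aggregate the locally updated models (e.g., FedAvg-style); that step is omitted from the sketch.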
Related papers
- Learning with Alignments: Tackling the Inter- and Intra-domain Shifts for Cross-multidomain Facial Expression Recognition [16.864390181629044]
We propose a novel Learning with Alignments CMFER framework, named LA-CMFER, to handle both inter- and intra-domain shifts.
Based on this, LA-CMFER presents a dual-level inter-domain alignment method to force the model to prioritize hard-to-align samples in knowledge transfer.
To address the intra-domain shifts, LA-CMFER introduces a multi-view intra-domain alignment method with a multi-view consistency constraint.
arXiv Detail & Related papers (2024-07-08T07:43:06Z)
- Federated Unsupervised Domain Generalization using Global and Local Alignment of Gradients [20.38994974049825]
We first theoretically establish a connection between domain shift and alignment of gradients in unsupervised learning.
We show that aligning the gradients at both client and server levels can facilitate the generalization of the model to new (target) domains.
We propose a novel method named FedGaLA, which performs gradient alignment at the client level to encourage clients to learn domain-invariant features.
arXiv Detail & Related papers (2024-05-25T17:12:54Z)
- Role Prompting Guided Domain Adaptation with General Capability Preserve for Large Language Models [55.51408151807268]
When tailored to specific domains, Large Language Models (LLMs) tend to experience catastrophic forgetting.
Crafting a versatile model for multiple domains simultaneously often results in a decline in overall performance.
We present the RolE Prompting Guided Multi-Domain Adaptation (REGA) strategy.
arXiv Detail & Related papers (2024-03-05T08:22:41Z)
- Hypernetwork-Driven Model Fusion for Federated Domain Generalization [26.492360039272942]
Federated Learning (FL) faces significant challenges with domain shifts in heterogeneous data.
We propose a robust framework, coined as hypernetwork-based Federated Fusion (hFedF), using hypernetworks for non-linear aggregation.
Our method employs client-specific embeddings and gradient alignment techniques to manage domain generalization effectively.
arXiv Detail & Related papers (2024-02-10T15:42:03Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds state-of-the-art performance without the need for domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
- Federated and Generalized Person Re-identification through Domain and Feature Hallucinating [88.77196261300699]
We study the problem of federated domain generalization (FedDG) for person re-identification (re-ID).
We propose a novel method, called "Domain and Feature Hallucinating (DFH)", to produce diverse features for learning generalized local and global models.
Our method achieves the state-of-the-art performance for FedDG on four large-scale re-ID benchmarks.
arXiv Detail & Related papers (2022-03-05T09:15:13Z)
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- HCDG: A Hierarchical Consistency Framework for Domain Generalization on Medical Image Segmentation [33.623948922908184]
We present a novel Hierarchical Consistency framework for Domain Generalization (HCDG).
For the Extrinsic Consistency, we leverage the knowledge across multiple source domains to enforce data-level consistency.
For the Intrinsic Consistency, we perform task-level consistency for the same instance under the dual-task scenario.
arXiv Detail & Related papers (2021-09-13T07:07:23Z)
- Domain Consistency Regularization for Unsupervised Multi-source Domain Adaptive Classification [57.92800886719651]
Deep learning-based multi-source unsupervised domain adaptation (MUDA) has been actively studied in recent years.
Domain shift in MUDA exists not only between the source and target domains but also among multiple source domains.
We propose an end-to-end trainable network that exploits domain Consistency Regularization for unsupervised Multi-source domain Adaptive classification.
arXiv Detail & Related papers (2021-06-16T07:29:27Z)
- Disentanglement-based Cross-Domain Feature Augmentation for Effective Unsupervised Domain Adaptive Person Re-identification [87.72851934197936]
Unsupervised domain adaptive (UDA) person re-identification (ReID) aims to transfer the knowledge from the labeled source domain to the unlabeled target domain for person matching.
One challenge is how to generate target domain samples with reliable labels for training.
We propose a Disentanglement-based Cross-Domain Feature Augmentation strategy.
arXiv Detail & Related papers (2021-03-25T15:28:41Z)