Federated Domain Generalization with Data-free On-server Gradient Matching
- URL: http://arxiv.org/abs/2501.14653v1
- Date: Fri, 24 Jan 2025 17:20:22 GMT
- Title: Federated Domain Generalization with Data-free On-server Gradient Matching
- Authors: Trong-Binh Nguyen, Minh-Duong Nguyen, Jinsun Park, Quoc-Viet Pham, Won Joo Hwang
- Abstract summary: Domain Generalization (DG) aims to learn from multiple known source domains a model that can generalize well to unknown target domains.
In this paper, we introduce a novel approach, dubbed Federated Learning via On-server Matching Gradient (FedOMG), which can efficiently leverage domain information from distributed domains.
- Score: 6.817783565501387
- License:
- Abstract: Domain Generalization (DG) aims to learn from multiple known source domains a model that can generalize well to unknown target domains. One of the key approaches in DG is training an encoder which generates domain-invariant representations. However, this approach is not applicable in Federated Domain Generalization (FDG), where data from various domains are distributed across different clients. In this paper, we introduce a novel approach, dubbed Federated Learning via On-server Matching Gradient (FedOMG), which can efficiently leverage domain information from distributed domains. Specifically, we utilize the local gradients as information about the distributed models to find an invariant gradient direction across all domains through gradient inner product maximization. The advantages are two-fold: 1) FedOMG can aggregate the characteristics of distributed models on the centralized server without incurring any additional communication cost, and 2) FedOMG is orthogonal to many existing FL/FDG methods, allowing for additional performance improvements by being seamlessly integrated with them. Extensive experimental evaluations on various settings demonstrate the robustness of FedOMG compared to other FL/FDG baselines. Our method outperforms recent SOTA baselines on four FL benchmark datasets (MNIST, EMNIST, CIFAR-10, and CIFAR-100), and three FDG benchmark datasets (PACS, VLCS, and OfficeHome).
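Read literally, the aggregation step described above admits a small server-side sketch: collect the clients' (pseudo-)gradients, then search for convex mixing weights whose aggregate direction has a large inner product with the clients' gradients. The NumPy sketch below is only an illustration of that idea under stated assumptions; the function name, the projected-ascent loop, and the normalized objective are illustrative choices, not the authors' FedOMG implementation.

```python
import numpy as np

def find_invariant_direction(client_grads, steps=200, lr=0.5):
    """Illustrative server-side search: pick convex mixing weights w so that the
    aggregated direction d = sum_i w_i * g_i has a large normalized inner product
    with the average client gradient (a simplified stand-in for gradient
    inner-product maximization, not the exact FedOMG algorithm)."""
    G = np.stack(client_grads)          # (num_clients, num_params)
    s = G.mean(axis=0)                  # average client gradient
    n = G.shape[0]
    w = np.full(n, 1.0 / n)             # start from uniform (FedAvg-like) weights

    for _ in range(steps):
        d = w @ G                       # candidate server update direction
        norm = np.linalg.norm(d) + 1e-12
        # gradient of J(w) = <d, s> / ||d|| with respect to w
        grad_d = s / norm - (d @ s) * d / norm**3
        w = w + lr * (G @ grad_d)
        # project back onto the probability simplex (clip-and-renormalize heuristic)
        w = np.clip(w, 0.0, None)
        w /= w.sum() + 1e-12
    return w @ G, w

# usage: the server flattens one pseudo-gradient per client, then
# direction, weights = find_invariant_direction([g1, g2, g3])
```

Because such weights are computed solely from gradients the clients already upload for ordinary aggregation, a scheme of this kind is consistent with the abstract's claim of incurring no additional communication cost.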
Related papers
- Hypernetwork-Driven Model Fusion for Federated Domain Generalization [26.492360039272942]
Federated Learning (FL) faces significant challenges with domain shifts in heterogeneous data.
We propose a robust framework, coined as hypernetwork-based Federated Fusion (hFedF), using hypernetworks for non-linear aggregation.
Our method employs client-specific embeddings and gradient alignment techniques to manage domain generalization effectively.
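As a rough illustration of what "hypernetworks with client-specific embeddings" can look like, the toy module below maps a learned per-client embedding to a flat parameter vector through a small MLP; the layer sizes and structure are assumptions, since the summary does not specify hFedF's architecture.

```python
import torch
import torch.nn as nn

class ClientHypernetwork(nn.Module):
    """Toy hypernetwork: a learned per-client embedding is mapped non-linearly
    to the flat parameter vector of a small target model. Sizes and layers are
    illustrative assumptions, not the hFedF design."""
    def __init__(self, num_clients: int, embed_dim: int, target_param_count: int):
        super().__init__()
        self.embeddings = nn.Embedding(num_clients, embed_dim)
        self.generator = nn.Sequential(
            nn.Linear(embed_dim, 128),
            nn.ReLU(),
            nn.Linear(128, target_param_count),
        )

    def forward(self, client_id: torch.Tensor) -> torch.Tensor:
        # non-linear mapping from client identity to model weights
        return self.generator(self.embeddings(client_id))

# usage: weights for client 3 of 10, for a target model with 5000 parameters
# hnet = ClientHypernetwork(num_clients=10, embed_dim=16, target_param_count=5000)
# flat_params = hnet(torch.tensor([3]))
```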
arXiv Detail & Related papers (2024-02-10T15:42:03Z) - StableFDG: Style and Attention Based Learning for Federated Domain Generalization [36.173582743028625]
We propose a style and attention based learning strategy for accomplishing federated domain generalization.
Style-based learning enables each client to explore novel styles beyond the original source domains in its local dataset.
Our second contribution is an attention-based feature highlighter, which captures the similarities between the features of data samples in the same class.
arXiv Detail & Related papers (2023-11-01T02:17:01Z) - FIXED: Frustratingly Easy Domain Generalization with Mixup [53.782029033068675]
Domain generalization (DG) aims to learn a generalizable model from multiple training domains such that it can perform well on unseen target domains.
A popular strategy is to augment training data to benefit generalization through methods such as Mixup (Zhang et al., 2018).
We propose a simple yet effective enhancement for Mixup-based DG, namely domain-invariant Feature mIXup (FIX).
Our approach significantly outperforms nine state-of-the-art related methods, beating the best performing baseline by 6.5% on average in terms of test accuracy.
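For readers unfamiliar with the mixing step, here is a generic feature-level Mixup sketch in PyTorch; it shows only the vanilla interpolation of features and labels, while FIX's additional domain-invariance machinery is not detailed in the summary above and is therefore omitted.

```python
import torch

def feature_mixup(feat_a, feat_b, labels_a, labels_b, alpha=0.2):
    """Mixup applied to intermediate features rather than raw inputs: a generic
    sketch of feature-level mixing, not the exact FIX/FIXED recipe."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    mixed_feat = lam * feat_a + (1.0 - lam) * feat_b
    # the loss on mixed features is weighted by lam against each label set
    return mixed_feat, labels_a, labels_b, lam

# usage in a training step (classifier `clf` and `criterion` are assumed):
# mixed, ya, yb, lam = feature_mixup(f1, f2, y1, y2)
# loss = lam * criterion(clf(mixed), ya) + (1 - lam) * criterion(clf(mixed), yb)
```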
arXiv Detail & Related papers (2022-11-07T09:38:34Z) - Learning Gradient-based Mixup towards Flatter Minima for Domain Generalization [44.04047359057987]
We develop a new domain generalization algorithm named Flatness-aware Gradient-based Mixup (FGMix)
FGMix learns the similarity function towards flatter minima for better generalization.
On the DomainBed benchmark, we validate the efficacy of various designs of FGMix and demonstrate its superiority over other DG algorithms.
arXiv Detail & Related papers (2022-09-29T13:01:14Z) - Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds state-of-the-art performance without the need for domain supervision.
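The prototype ingredient mentioned above is simple to state in code: a prototype is the centroid of a class's embeddings. The snippet below sketches only that computation; COMEN's relational modeling on top of the prototypes is not described in the summary and is left out.

```python
import torch

def class_prototypes(embeddings: torch.Tensor, labels: torch.Tensor, num_classes: int):
    """Per-class centroids ("prototypes") of the embeddings: the basic building
    block for prototype-based relational modeling in the embedding space."""
    dim = embeddings.size(1)
    protos = torch.zeros(num_classes, dim, device=embeddings.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():                      # skip classes absent from the batch
            protos[c] = embeddings[mask].mean(dim=0)
    return protos
```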
arXiv Detail & Related papers (2022-03-24T11:54:59Z) - Federated and Generalized Person Re-identification through Domain and Feature Hallucinating [88.77196261300699]
We study the problem of federated domain generalization (FedDG) for person re-identification (re-ID).
We propose a novel method, called "Domain and Feature Hallucinating (DFH)", to produce diverse features for learning generalized local and global models.
Our method achieves the state-of-the-art performance for FedDG on four large-scale re-ID benchmarks.
arXiv Detail & Related papers (2022-03-05T09:15:13Z) - Federated Learning with Domain Generalization [11.92860245410696]
Federated Learning enables a group of clients to jointly train a machine learning model with the help of a centralized server.
In practice, the model trained over multiple source domains may have poor generalization performance on unseen target domains.
We propose FedADG to equip federated learning with domain generalization capability.
arXiv Detail & Related papers (2021-11-20T01:02:36Z) - Robust Domain-Free Domain Generalization with Class-aware Alignment [4.442096198968069]
Domain-Free Domain Generalization (DFDG) is a model-agnostic method to achieve better generalization performance on the unseen test domain.
DFDG uses novel strategies to learn domain-invariant class-discriminative features.
It obtains competitive performance on both time series sensor and image classification public datasets.
arXiv Detail & Related papers (2021-02-17T17:46:06Z) - Cluster, Split, Fuse, and Update: Meta-Learning for Open Compound Domain Adaptive Semantic Segmentation [102.42638795864178]
We propose a principled meta-learning based approach to OCDA for semantic segmentation.
We cluster the target domain into multiple sub-target domains according to image styles, which are extracted in an unsupervised manner.
A meta-learner is thereafter deployed to learn to fuse sub-target domain-specific predictions, conditioned upon the style code.
We learn to update the model online via the model-agnostic meta-learning (MAML) algorithm, further improving generalization.
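As a point of reference for the online-update step, the snippet below shows one generic first-order MAML-style inner adaptation step on a small support batch; it is an illustration of the mechanism only, not the paper's full cluster-split-fuse-update pipeline.

```python
import copy
import torch

def maml_style_online_update(model, loss_fn, support_batch, inner_lr=1e-2):
    """One first-order adaptation step in the spirit of MAML's inner loop:
    clone the model, take a single gradient step on a small support batch,
    and return the adapted copy (meta-training stages omitted)."""
    adapted = copy.deepcopy(model)
    x, y = support_batch
    loss = loss_fn(adapted(x), y)
    grads = torch.autograd.grad(loss, adapted.parameters())
    with torch.no_grad():
        for p, g in zip(adapted.parameters(), grads):
            p -= inner_lr * g
    return adapted
```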
arXiv Detail & Related papers (2020-12-15T13:21:54Z) - Cross-Domain Facial Expression Recognition: A Unified Evaluation Benchmark and Adversarial Graph Learning [85.6386289476598]
We develop a novel adversarial graph representation adaptation (AGRA) framework for cross-domain holistic-local feature co-adaptation.
We conduct extensive and fair evaluations on several popular benchmarks and show that the proposed AGRA framework outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2020-08-03T15:00:31Z) - Dual Distribution Alignment Network for Generalizable Person Re-Identification [174.36157174951603]
Domain generalization (DG) serves as a promising solution to handle person Re-Identification (Re-ID).
We present a Dual Distribution Alignment Network (DDAN) which handles this challenge by selectively aligning distributions of multiple source domains.
We evaluate our DDAN on a large-scale Domain Generalization Re-ID (DG Re-ID) benchmark.
arXiv Detail & Related papers (2020-07-27T00:08:07Z)