Federated Learning with Domain Generalization
- URL: http://arxiv.org/abs/2111.10487v1
- Date: Sat, 20 Nov 2021 01:02:36 GMT
- Title: Federated Learning with Domain Generalization
- Authors: Liling Zhang, Xinyu Lei, Yichun Shi, Hongyu Huang and Chao Chen
- Abstract summary: Federated Learning enables a group of clients to jointly train a machine learning model with the help of a centralized server.
In practice, the model trained over multiple source domains may have poor generalization performance on unseen target domains.
We propose FedADG to equip federated learning with domain generalization capability.
- Score: 11.92860245410696
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) enables a group of clients to jointly train a machine
learning model with the help of a centralized server. Clients do not need to
submit their local data to the server during training, and hence the local
training data of clients is protected. In FL, distributed clients collect their
local data independently, so the dataset of each client may naturally form a
distinct source domain. In practice, the model trained over multiple source
domains may have poor generalization performance on unseen target domains. To
address this issue, we propose FedADG to equip federated learning with domain
generalization capability. FedADG employs the federated adversarial learning
approach to measure and align the distributions among different source domains
via matching each distribution to a reference distribution. The reference
distribution is adaptively generated (by accommodating all source domains) to
minimize the domain shift distance during alignment. In FedADG, the alignment
is fine-grained since each class is aligned independently. In this way, the
learned feature representation is supposed to be universal, so it can
generalize well on unseen domains. Extensive experiments on various
datasets demonstrate that FedADG outperforms most previous solutions, even
though those solutions have the additional advantage of centralized data
access. To support reproducibility, the project code is available at
https://github.com/wzml/FedADG
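To make the alignment mechanism concrete, the following is a minimal, hypothetical PyTorch sketch of class-wise adversarial alignment to a learned reference distribution, in the spirit of the abstract above. All module names, dimensions, and hyper-parameters are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
# Hypothetical sketch of FedADG-style class-wise adversarial alignment.
# Shapes and architectures are placeholders (e.g., 784 = flattened 28x28).
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, NUM_CLASSES, NOISE_DIM = 128, 10, 64

feature_extractor = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                                  nn.Linear(256, FEAT_DIM))
# The reference distribution is produced by a trainable generator fed with
# noise, so it can adapt to accommodate all source domains during alignment.
reference_generator = nn.Sequential(nn.Linear(NOISE_DIM, FEAT_DIM))
# Class-conditional discriminator: it sees a feature vector concatenated
# with a one-hot label, which makes the alignment fine-grained (per class).
discriminator = nn.Sequential(nn.Linear(FEAT_DIM + NUM_CLASSES, 64),
                              nn.ReLU(), nn.Linear(64, 1))

def discriminator_loss(x, y):
    """Train the discriminator to separate reference features from
    locally extracted features of the same class."""
    onehot = F.one_hot(y, NUM_CLASSES).float()
    ref = reference_generator(torch.randn(len(x), NOISE_DIM))
    local = feature_extractor(x).detach()
    d_ref = discriminator(torch.cat([ref, onehot], dim=1))
    d_local = discriminator(torch.cat([local, onehot], dim=1))
    return (F.binary_cross_entropy_with_logits(d_ref, torch.ones_like(d_ref))
            + F.binary_cross_entropy_with_logits(d_local, torch.zeros_like(d_local)))

def alignment_loss(x, y):
    """Train the extractor so its features fool the discriminator,
    pulling each class of the local distribution toward the reference."""
    onehot = F.one_hot(y, NUM_CLASSES).float()
    local = feature_extractor(x)
    d_local = discriminator(torch.cat([local, onehot], dim=1))
    return F.binary_cross_entropy_with_logits(d_local, torch.ones_like(d_local))
```

In a federated round, each client would optimize these losses locally alongside its classification loss and upload only model parameters, so raw data never leaves the client.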
Related papers
- FISC: Federated Domain Generalization via Interpolative Style Transfer and Contrastive Learning [5.584498171854557]
Federated Learning (FL) shows promise in preserving privacy and enabling collaborative learning.
We introduce FISC, a novel FL domain generalization paradigm that handles more complex domain distributions across clients.
Our method achieves accuracy improvements ranging from 3.64% to 57.22% on unseen domains.
arXiv Detail & Related papers (2024-10-30T00:50:23Z)
- FACT: Federated Adversarial Cross Training [0.0]
Federated Adversarial Cross Training (FACT) uses implicit domain differences between source clients to identify domain shifts in the target domain.
We empirically show that FACT outperforms state-of-the-art federated, non-federated and source-free domain adaptation models.
arXiv Detail & Related papers (2023-06-01T12:25:43Z)
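One plausible reading of the "implicit domain differences" in the FACT entry above is the disagreement between models trained on different source clients when evaluated on unlabeled target data. The sketch below illustrates that reading only; it is not the paper's actual training procedure.

```python
# Hypothetical proxy for domain shift: disagreement between two
# source-client models on unlabeled target data. Large discrepancy
# suggests target samples lying off both source distributions.
import torch

def prediction_discrepancy(model_a, model_b, target_x):
    """Mean L1 distance between the two clients' softmax outputs."""
    with torch.no_grad():
        p_a = torch.softmax(model_a(target_x), dim=1)
        p_b = torch.softmax(model_b(target_x), dim=1)
    return (p_a - p_b).abs().sum(dim=1).mean()
```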
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, and each group is handled with its own tailored training objective.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
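The memory bank-based MMD loss in the DaC entry above builds on the standard kernel MMD. A generic RBF-kernel estimator is sketched below; the memory-bank bookkeeping is omitted, and the single fixed bandwidth is an arbitrary choice.

```python
# Generic (biased, V-statistic) RBF-kernel MMD^2 between two feature
# batches: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]. DaC evaluates such a
# loss against class centroids kept in a memory bank.
import torch

def rbf_mmd2(x, y, sigma=1.0):
    def kernel(a, b):
        dist2 = torch.cdist(a, b).pow(2)        # pairwise squared distances
        return torch.exp(-dist2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()
```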
- Learning Across Domains and Devices: Style-Driven Source-Free Domain Adaptation in Clustered Federated Learning [32.098954477227046]
We propose a novel task in which the clients' data is unlabeled and the server accesses a labeled source dataset for pre-training only.
Our experiments show that our algorithm efficiently tackles the new task, outperforming existing approaches.
arXiv Detail & Related papers (2022-10-05T15:23:52Z)
- Efficient Distribution Similarity Identification in Clustered Federated Learning via Principal Angles Between Client Data Subspaces [59.33965805898736]
Clustered federated learning has been shown to produce promising results by grouping clients into clusters.
Existing clustered FL algorithms essentially try to group together clients whose data distributions are similar, but they can only infer this similarity indirectly during training.
arXiv Detail & Related papers (2022-09-21T17:37:54Z)
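The geometry behind the clustering entry above can be sketched directly: each client summarizes its data by a low-rank subspace, and clients are compared via the principal angles between those subspaces, whose cosines are the singular values of the product of the two bases. The rank r below is a placeholder assumption, and the clustering step itself is omitted.

```python
# Sketch: compare clients by principal angles between the subspaces
# spanned by their data. Assumes r <= min(n_samples, n_features).
import torch

def subspace_basis(data, r=5):
    """Orthonormal d x r basis of the top-r principal directions of an
    n x d matrix of one client's (flattened) samples."""
    U, S, Vh = torch.linalg.svd(data, full_matrices=False)
    return Vh[:r].T                       # top-r right singular vectors

def principal_angles(basis_a, basis_b):
    """Principal angles (radians) between two subspaces."""
    s = torch.linalg.svdvals(basis_a.T @ basis_b)
    return torch.acos(s.clamp(max=1.0))   # guard tiny numerical overshoot

# Clients would only share the small d x r basis with the server, which
# can then group clients whose pairwise principal angles are small.
```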
- FedILC: Weighted Geometric Mean and Invariant Gradient Covariance for Federated Learning on Non-IID Data [69.0785021613868]
Federated learning is a distributed machine learning approach that enables a shared server model to learn by aggregating parameter updates computed locally on the training data of spatially distributed client silos.
We propose the Federated Invariant Learning Consistency (FedILC) approach, which leverages the gradient covariance and the geometric mean of Hessians to capture both inter-silo and intra-silo consistencies.
This is relevant to various fields such as medical healthcare, computer vision, and the Internet of Things (IoT).
arXiv Detail & Related papers (2022-05-19T03:32:03Z)
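The weighted geometric mean in the FedILC entry above can be illustrated with an element-wise geometric mean of per-silo gradients that keeps only sign-consistent components, in the spirit of invariant-learning-consistency aggregation. The weighting and masking rule below are assumptions for illustration, not the paper's exact aggregation.

```python
# Illustrative element-wise weighted geometric mean of per-silo gradients:
# a component survives only if all silos agree on its sign, favoring
# update directions that are consistent across silos.
import torch

def weighted_geometric_mean_grads(grads, weights):
    """grads: list of K same-shaped gradient tensors (one per silo);
    weights: length-K tensor summing to 1."""
    stacked = torch.stack(grads)                    # K x <param shape>
    signs = torch.sign(stacked)
    agree = (signs == signs[0]).all(dim=0).float()  # 1 where signs agree
    w = weights.view(-1, *([1] * (stacked.dim() - 1)))
    log_mag = torch.log(stacked.abs() + 1e-12)      # geometric mean via logs
    geo = torch.exp((w * log_mag).sum(dim=0))
    return agree * signs[0] * geo
```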
- Federated and Generalized Person Re-identification through Domain and Feature Hallucinating [88.77196261300699]
We study the problem of federated domain generalization (FedDG) for person re-identification (re-ID).
We propose a novel method, called "Domain and Feature Hallucinating (DFH)", to produce diverse features for learning generalized local and global models.
Our method achieves state-of-the-art performance for FedDG on four large-scale re-ID benchmarks.
arXiv Detail & Related papers (2022-03-05T09:15:13Z)
- Cluster-driven Graph Federated Learning over Multiple Domains [25.51716405561116]
Federated Learning (FL) deals with learning a central model (i.e., at the server) in privacy-constrained scenarios.
Here we propose a novel Cluster-driven Graph Federated Learning approach (FedCG).
arXiv Detail & Related papers (2021-04-29T19:31:19Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Domain-Adaptive Few-Shot Learning [124.51420562201407]
We propose a novel domain-adversarial prototypical network (DAPN) model for domain-adaptive few-shot learning.
Our solution is to explicitly enhance the source/target per-class separation before domain-adaptive feature embedding learning.
arXiv Detail & Related papers (2020-03-19T08:31:14Z)
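As a generic illustration of the "per-class separation" idea in the last entry, a margin loss that pushes class prototypes (per-class mean features) apart could look as follows. This construction is hypothetical and is not DAPN's actual objective.

```python
# Hypothetical per-class separation loss: hinge on pairwise distances
# between class prototypes. Assumes every class appears in the batch.
import torch
import torch.nn.functional as F

def prototype_separation_loss(features, labels, num_classes, margin=1.0):
    protos = torch.stack([features[labels == c].mean(dim=0)
                          for c in range(num_classes)])
    dists = torch.cdist(protos, protos)                    # pairwise distances
    off_diag = dists[~torch.eye(num_classes, dtype=torch.bool)]
    return F.relu(margin - off_diag).mean()                # penalize close pairs
```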
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.