FACT: Federated Adversarial Cross Training
- URL: http://arxiv.org/abs/2306.00607v2
- Date: Fri, 28 Jul 2023 10:57:29 GMT
- Title: FACT: Federated Adversarial Cross Training
- Authors: Stefan Schrod, Jonas Lippl, Andreas Schäfer, Michael Altenbuchinger
- Abstract summary: Federated Adversarial Cross Training (FACT) uses the implicit domain differences between source clients to identify domain shifts in the target domain.
We empirically show that FACT outperforms state-of-the-art federated, non-federated and source-free domain adaptation models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) facilitates distributed model development to
aggregate multiple confidential data sources. The information transfer among
clients can be compromised by distributional differences, i.e., by non-i.i.d.
data. A particularly challenging scenario is the federated model adaptation to
a target client without access to annotated data. We propose Federated
Adversarial Cross Training (FACT), which uses the implicit domain differences
between source clients to identify domain shifts in the target domain. In each
round of FL, FACT cross initializes a pair of source clients to generate domain
specialized representations which are then used as a direct adversary to learn
a domain invariant data representation. We empirically show that FACT
outperforms state-of-the-art federated, non-federated and source-free domain
adaptation models on three popular multi-source-single-target benchmarks, and
state-of-the-art Unsupervised Domain Adaptation (UDA) models on
single-source-single-target experiments. We further study FACT's behavior with
respect to communication restrictions and the number of participating clients.
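The abstract describes the mechanism but not its implementation, so below is a minimal, illustrative sketch of one FACT-style round in PyTorch. All names (Net, local_source_update, fact_round), the MCD-style L1 discrepancy loss on target data, and this particular reading of "cross initialization" are assumptions made for illustration; they are not taken from the authors' code.

```python
# Hedged sketch of one FACT-style federated round, based only on the abstract.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    """Toy model: feature extractor plus task classifier."""

    def __init__(self, in_dim=32, feat_dim=16, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x))


def local_source_update(model, x, y, steps=5, lr=1e-2):
    """Supervised training on one labeled source client."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return model


def fact_round(prev_a, prev_b, data_a, data_b, target_x):
    """One illustrative FACT-style round.

    Cross initialization (assumed reading of the abstract): client A continues
    from the model client B produced in the previous round and vice versa, so
    each resulting model is specialized towards one source domain.
    """
    model_a = local_source_update(copy.deepcopy(prev_b), *data_a)
    model_b = local_source_update(copy.deepcopy(prev_a), *data_b)

    # On the unlabeled target client, the two domain-specialized classifiers
    # act as the direct adversary: the feature extractor is trained to shrink
    # their output discrepancy on target data (an MCD-style L1 proxy; the
    # exact loss is not given in the abstract).
    feat = copy.deepcopy(model_a.features)
    opt = torch.optim.SGD(feat.parameters(), lr=1e-2)
    for _ in range(5):
        opt.zero_grad()
        z = feat(target_x)
        p_a = F.softmax(model_a.classifier(z), dim=1)
        p_b = F.softmax(model_b.classifier(z), dim=1)
        (p_a - p_b).abs().mean().backward()
        opt.step()
    return model_a, model_b, feat


if __name__ == "__main__":
    torch.manual_seed(0)
    init = Net()
    data_a = (torch.randn(64, 32), torch.randint(0, 4, (64,)))
    data_b = (torch.randn(64, 32) + 1.0, torch.randint(0, 4, (64,)))
    target_x = torch.randn(64, 32) - 1.0
    fact_round(init, init, data_a, data_b, target_x)
```

The sketch only mirrors the high-level loop stated in the abstract (cross-initialized source updates, then adversarial alignment on the target client); how source pairs are selected and how the models are aggregated across rounds is not specified there.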
Related papers
- FISC: Federated Domain Generalization via Interpolative Style Transfer and Contrastive Learning [5.584498171854557]
Federated Learning (FL) shows promise in preserving privacy and enabling collaborative learning.
We introduce FISC, a novel FL domain generalization paradigm that handles more complex domain distributions across clients.
Our method achieves accuracy improvements ranging from 3.64% to 57.22% on unseen domains.
arXiv Detail & Related papers (2024-10-30T00:50:23Z) - FedCCRL: Federated Domain Generalization with Cross-Client Representation Learning [4.703379311088474]
Domain Generalization (DG) aims to train models that can effectively generalize to unseen domains.
In Federated Learning (FL), where clients collaboratively train a model without directly sharing their data, most existing DG algorithms are not directly applicable to the FL setting.
We propose FedCCRL, a lightweight federated domain generalization method that significantly improves the model's generalization ability while preserving privacy.
arXiv Detail & Related papers (2024-10-15T04:44:21Z) - Feature Diversification and Adaptation for Federated Domain Generalization [27.646565383214227]
In real-world applications, local clients often operate within their limited domains, leading to a 'domain shift' across clients.
We introduce the concept of federated feature diversification, which helps local models learn client-invariant representations while preserving privacy.
Our resultant global model shows robust performance on unseen test domain data.
arXiv Detail & Related papers (2024-07-11T07:45:10Z) - Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z) - FedDC: Federated Learning with Non-IID Data via Local Drift Decoupling and Correction [48.85303253333453]
Federated learning (FL) allows multiple clients to collectively train a high-performance global model without sharing their private data.
We propose a novel federated learning algorithm with local drift decoupling and correction (FedDC)
Our FedDC only introduces lightweight modifications in the local training phase, in which each client utilizes an auxiliary local drift variable to track the gap between the local model parameters and the global model parameters (a rough sketch follows the list below).
Experiment results and analysis demonstrate that FedDC yields faster convergence and better performance on various image classification tasks.
arXiv Detail & Related papers (2022-03-22T14:06:26Z) - Federated and Generalized Person Re-identification through Domain and Feature Hallucinating [88.77196261300699]
We study the problem of federated domain generalization (FedDG) for person re-identification (re-ID)
We propose a novel method, called "Domain and Feature Hallucinating (DFH)", to produce diverse features for learning generalized local and global models.
Our method achieves the state-of-the-art performance for FedDG on four large-scale re-ID benchmarks.
arXiv Detail & Related papers (2022-03-05T09:15:13Z) - Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z) - Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles at the feature level.
Our method produces state-of-the-art results on the C-Driving dataset.
arXiv Detail & Related papers (2021-06-07T08:38:41Z) - Federated Unsupervised Representation Learning [56.715917111878106]
We formulate a new problem in federated learning called Federated Unsupervised Representation Learning (FURL) to learn a common representation model without supervision.
FedCA is composed of two key modules: a dictionary module, which aggregates the representations of samples from each client and shares them with all clients to keep the representation space consistent, and an alignment module, which aligns each client's representations with a base model trained on public data.
arXiv Detail & Related papers (2020-10-18T13:28:30Z)
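The FedDC entry above describes an auxiliary local drift variable that tracks the gap between local and global parameters; the following is a rough, assumption-laden sketch of how such a corrected local step and drift update could look. The penalty form, the coefficient mu, and all names are illustrative guesses based on that one-paragraph summary, not the authors' algorithm.

```python
# Hedged sketch of drift decoupling and correction, assumed from the summary.
import torch


def corrected_local_step(local_w, global_w, drift, grad, lr=0.1, mu=0.1):
    """One local SGD step with a drift penalty (assumed form): pull the
    drift-corrected local parameters towards the global parameters."""
    penalty_grad = mu * (local_w + drift - global_w)
    return local_w - lr * (grad + penalty_grad)


def update_drift(local_w, global_w, drift):
    """After local training, the drift variable absorbs the remaining gap,
    decoupling it from the parameters sent back to the server (assumed)."""
    return drift + (local_w - global_w)


if __name__ == "__main__":
    w_global = torch.zeros(4)
    w_local = torch.zeros(4)
    drift = torch.zeros(4)
    for _ in range(3):                 # a few corrected local steps
        grad = torch.randn(4)          # stand-in for a task gradient
        w_local = corrected_local_step(w_local, w_global, drift, grad)
    drift = update_drift(w_local, w_global, drift)
```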