Multi-Source Domain Adaptation Based on Federated Knowledge Alignment
- URL: http://arxiv.org/abs/2203.11635v1
- Date: Tue, 22 Mar 2022 11:42:25 GMT
- Title: Multi-Source Domain Adaptation Based on Federated Knowledge Alignment
- Authors: Yuwei Sun, Ng Chong, Ochiai Hideya
- Abstract summary: Federated Learning (FL) facilitates distributed model learning to protect users' privacy.
We propose Federated Knowledge Alignment (FedKA) that aligns features from different clients and those of the target task.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) facilitates distributed model learning to protect
users' privacy. In the absence of labels for a new user's data, the knowledge
transfer in FL allows a learned global model to adapt to the new samples
quickly. The multi-source domain adaptation in FL aims to improve the model's
generality in a target domain by learning domain-invariant features from
different clients. In this paper, we propose Federated Knowledge Alignment
(FedKA) that aligns features from different clients and those of the target
task. We identify two types of negative transfer arising in multi-source domain
adaptation of FL and demonstrate how FedKA can alleviate such negative
transfers with the help of a global feature disentangler enhanced by embedding
matching. To further facilitate representation learning of the target task, we
devise a federated voting mechanism to provide labels for samples from the
target domain via a consensus from querying local models and fine-tune the
global model with these labeled samples. Extensive experiments, including an
ablation study, on an image classification task of Digit-Five and a text
sentiment classification task of Amazon Review, show that FedKA can augment
existing FL algorithms to improve the generality of the learned model when
tackling a new task.
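The federated voting mechanism described in the abstract can be sketched as follows: each local model votes on a label for an unlabeled target sample, and a sample is pseudo-labeled only when the winning label reaches a consensus threshold. The function name, the `predict` interface, and the agreement threshold are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter

def federated_vote(local_models, target_samples, min_agreement=0.5):
    """Assign pseudo-labels to unlabeled target-domain samples by
    majority consensus across local (client) models.

    A sample is kept only if the most-voted label is predicted by at
    least `min_agreement` of the clients; otherwise it is discarded.
    """
    pseudo_labeled = []
    for x in target_samples:
        # Tally one vote per client model for this sample.
        votes = Counter(model.predict(x) for model in local_models)
        label, count = votes.most_common(1)[0]
        if count / len(local_models) >= min_agreement:
            pseudo_labeled.append((x, label))
    return pseudo_labeled
```

The pseudo-labeled pairs returned here would then be used to fine-tune the global model on the target task, as the abstract describes.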
Related papers
- Personalized Federated Learning via Feature Distribution Adaptation [3.410799378893257]
Federated learning (FL) is a distributed learning framework that leverages commonalities between distributed client datasets to train a global model.
Personalized federated learning (PFL) seeks to address this by learning individual models tailored to each client.
We propose an algorithm, pFedFDA, that efficiently generates personalized models by adapting global generative classifiers to their local feature distributions.
arXiv Detail & Related papers (2024-11-01T03:03:52Z)
- FedDr+: Stabilizing Dot-regression with Global Feature Distillation for Federated Learning [27.782676760198697]
Federated Learning (FL) has emerged as a pivotal framework for the development of effective global models.
A key challenge in FL is client drift, where data heterogeneity impedes the aggregation of scattered knowledge.
We introduce a novel algorithm named FedDr+, which empowers local model alignment using dot-regression loss.
arXiv Detail & Related papers (2024-06-04T14:34:13Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Adaptive Global-Local Representation Learning and Selection for Cross-Domain Facial Expression Recognition [54.334773598942775]
Domain shift poses a significant challenge in Cross-Domain Facial Expression Recognition (CD-FER).
We propose an Adaptive Global-Local Representation Learning and Selection framework.
arXiv Detail & Related papers (2024-01-20T02:21:41Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, named FedCSD, a Class prototype Similarity Distillation in a federated framework to align the local and global models.
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- FedSoup: Improving Generalization and Personalization in Federated Learning via Selective Model Interpolation [32.36334319329364]
Cross-silo federated learning (FL) enables the development of machine learning models on datasets distributed across data centers.
Recent research has found that current FL algorithms face a trade-off between local and global performance when confronted with distribution shifts.
We propose a novel federated model soup method to optimize the trade-off between local and global performance.
arXiv Detail & Related papers (2023-07-20T00:07:29Z)
- Personalized Federated Learning with Local Attention [5.018560254008613]
Federated Learning (FL) aims to learn a single global model that enables the central server to help the model training in local clients without accessing their local data.
A key challenge of FL is heterogeneous label distributions and feature shift, which can significantly degrade the learned models.
We propose a simple yet effective algorithm, namely personalized Federated learning with Local Attention (pFedLA).
Two modules are proposed in pFedLA.
arXiv Detail & Related papers (2023-04-02T20:10:32Z)
- Federated and Generalized Person Re-identification through Domain and Feature Hallucinating [88.77196261300699]
We study the problem of federated domain generalization (FedDG) for person re-identification (re-ID).
We propose a novel method, called "Domain and Feature Hallucinating (DFH)", to produce diverse features for learning generalized local and global models.
Our method achieves the state-of-the-art performance for FedDG on four large-scale re-ID benchmarks.
arXiv Detail & Related papers (2022-03-05T09:15:13Z)
- Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation Method for Semantic Segmentation [97.8552697905657]
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
arXiv Detail & Related papers (2020-04-02T03:25:05Z)
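Several of the papers above (e.g., FedCSD) align local and global models by reducing the gap between their logits. A minimal sketch of the generic distillation term such methods build on is shown below, assuming a temperature-softened KL divergence between the global model's (teacher) and local model's (student) predictions; this is a generic logit-distillation loss, not FedCSD's class-prototype formulation.

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over a list of logits,
    optionally softened by a temperature > 1."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(local_logits, global_logits, temperature=2.0):
    """KL divergence KL(p_global || p_local) between temperature-softened
    predictions; minimizing it pulls the local (student) model's
    predictions toward the global (teacher) model's."""
    p = softmax(global_logits, temperature)  # teacher distribution
    q = softmax(local_logits, temperature)   # student distribution
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The loss is zero when local and global logits agree and grows as their predictions diverge, which is the drift signal these methods penalize.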
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.