Eliminating Domain Bias for Federated Learning in Representation Space
- URL: http://arxiv.org/abs/2311.14975v1
- Date: Sat, 25 Nov 2023 09:22:34 GMT
- Title: Eliminating Domain Bias for Federated Learning in Representation Space
- Authors: Jianqing Zhang, Yang Hua, Jian Cao, Hao Wang, Tao Song, Zhengui Xue,
Ruhui Ma, and Haibing Guan
- Abstract summary: We propose a general framework Domain Bias Eliminator (DBE) for federated learning.
Our theoretical analysis reveals that DBE can promote bi-directional knowledge transfer between server and client.
The DBE-equipped FL method can outperform ten state-of-the-art personalized FL methods by a large margin.
- Score: 31.52707182599217
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated learning (FL) has recently become popular for its
privacy-preserving and collaborative learning abilities. However, under
statistically heterogeneous scenarios, we observe that biased data domains on
clients cause a representation bias phenomenon and further degenerate generic
representations during local training, i.e., the representation degeneration
phenomenon. To address these issues, we propose Domain Bias Eliminator (DBE), a
general framework for FL. Our theoretical analysis reveals that DBE promotes
bi-directional knowledge transfer between server and client by reducing the
domain discrepancy between them in representation space. Moreover, extensive
experiments on four datasets show that DBE greatly improves the generalization
and personalization abilities of existing FL methods. The DBE-equipped FL
method outperforms ten state-of-the-art personalized FL methods by a large
margin. Our code is publicly available at https://github.com/TsingZ0/DBE.
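The abstract only states that DBE reduces the server-client domain discrepancy in representation space; the sketch below illustrates one plausible form of that idea, not the paper's actual implementation. It assumes a trainable per-client bias vector that absorbs local domain bias and a mean-alignment penalty pulling the remaining representations toward a global consensus mean; all names (`client_step`, `kappa`) are hypothetical.

```python
import numpy as np

def client_step(features, global_mean, bias, kappa=0.1):
    """One conceptual client-side adjustment (hypothetical names, not DBE's code).

    features:    (n, d) local representations from the shared feature extractor
    global_mean: (d,)   consensus representation mean received from the server
    bias:        (d,)   trainable per-client vector that absorbs domain bias
    """
    # the client keeps the bias locally and removes it from its representations
    debiased = features - bias
    local_mean = debiased.mean(axis=0)
    # mean-discrepancy penalty: pull debiased local representations toward
    # the server-side consensus, shrinking the domain gap in feature space
    mr_penalty = kappa * float(np.sum((local_mean - global_mean) ** 2))
    return debiased, mr_penalty
```

In this sketch the penalty would be added to the client's task loss, so the bias vector learns to capture what is domain-specific while the shared extractor stays generic.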
Related papers
- Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning? [50.03434441234569]
Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing.
While various algorithms have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention.
arXiv Detail & Related papers (2024-09-05T19:00:18Z)
- FedSIS: Federated Split Learning with Intermediate Representation Sampling for Privacy-preserving Generalized Face Presentation Attack Detection [4.1897081000881045]
Lack of generalization to unseen domains/attacks is the Achilles heel of most face presentation attack detection (FacePAD) algorithms.
In this work, a novel framework called Federated Split learning with Intermediate representation Sampling (FedSIS) is introduced for privacy-preserving domain generalization.
arXiv Detail & Related papers (2023-08-20T11:49:12Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose FedCSD, a new algorithm that performs class-prototype similarity distillation in a federated framework to align the local and global models.
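The summary above describes aligning local and global models at the logit level; the snippet below is a generic temperature-scaled logit-distillation term of the kind such methods build on, not the exact FedCSD objective (which additionally uses class prototypes). All names are illustrative.

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled, numerically stable softmax over the last axis."""
    z = z / t
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distill_loss(local_logits, global_logits, t=2.0):
    """KL(global || local): the frozen global model acts as teacher,
    penalizing the local model when its logits drift during local training."""
    p = softmax(global_logits, t)  # teacher distribution
    q = softmax(local_logits, t)   # student distribution
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1)
    return float(np.mean(kl))
```

Adding such a term to the local cross-entropy loss is one standard way to counteract the logit drift the paper observes.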
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- FedWon: Triumphing Multi-domain Federated Learning Without Normalization [50.49210227068574]
Federated learning (FL) enhances data privacy with collaborative in-situ training on decentralized clients.
However, FL encounters challenges due to non-independent and identically distributed (non-i.i.d.) data.
We propose a novel method called Federated learning Without normalizations (FedWon) to address the multi-domain problem in FL.
arXiv Detail & Related papers (2023-06-09T13:18:50Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Federated Learning with Intermediate Representation Regularization [14.01585596739954]
Federated learning (FL) enables remote clients to collaboratively train a model without exposing their private data.
Previous studies accomplish this by regularizing the distance between the representations learned by the local and global models.
We introduce FedIntR, which provides a more fine-grained regularization by integrating the representations of intermediate layers into the local training process.
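The FedIntR summary says the method integrates intermediate-layer representations into local training; a minimal sketch of that general shape is a weighted sum of layer-wise distances between local and global activations. The function below is an assumption about the form, not FedIntR's actual regularizer (the paper computes its own layer weights).

```python
import numpy as np

def intermediate_reg(local_feats, global_feats, weights=None):
    """Weighted sum of mean squared distances between intermediate-layer
    activations of the local and global models (hypothetical form).

    local_feats / global_feats: lists of same-shaped arrays, one per layer.
    """
    if weights is None:
        # default: weight every layer equally
        weights = [1.0 / len(local_feats)] * len(local_feats)
    total = 0.0
    for w, h_local, h_global in zip(weights, local_feats, global_feats):
        total += w * float(np.mean((h_local - h_global) ** 2))
    return total
```

Regularizing every layer, rather than only the final representation, gives the finer-grained control the summary refers to.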
arXiv Detail & Related papers (2022-10-28T01:43:55Z)
- FedDAR: Federated Domain-Aware Representation Learning [14.174833360938806]
Cross-silo Federated learning (FL) has become a promising tool in machine learning applications for healthcare.
We propose a novel method, FedDAR, which learns a domain shared representation and domain-wise personalized prediction heads.
arXiv Detail & Related papers (2022-09-08T19:18:59Z)
- FL Games: A federated learning framework for distribution shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL Games, a game-theoretic framework for federated learning for learning causal features that are invariant across clients.
arXiv Detail & Related papers (2022-05-23T07:51:45Z)
- Federated Learning with Domain Generalization [11.92860245410696]
Federated Learning enables a group of clients to jointly train a machine learning model with the help of a centralized server.
In practice, the model trained over multiple source domains may have poor generalization performance on unseen target domains.
We propose FedADG to equip federated learning with domain generalization capability.
arXiv Detail & Related papers (2021-11-20T01:02:36Z)
- Multi-Center Federated Learning [62.32725938999433]
Federated learning (FL) can protect data privacy in distributed learning.
It merely collects local gradients from users without access to their data.
We propose a novel multi-center aggregation mechanism.
arXiv Detail & Related papers (2021-08-19T12:20:31Z)
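The multi-center entry describes aggregating client updates into several centers instead of one global model. A minimal sketch of that idea, assuming a k-means-style alternation between assigning clients to their nearest center and re-averaging each center (the paper's own mechanism may differ):

```python
import numpy as np

def multi_center_aggregate(client_ws, centers, iters=5):
    """Cluster flattened client model vectors around multiple centers.

    client_ws: (n_clients, dim) array of flattened client parameters
    centers:   (k, dim) initial center models
    Alternates nearest-center assignment with per-center averaging.
    """
    centers = centers.copy()
    for _ in range(iters):
        # distance from every client vector to every center
        d = np.linalg.norm(client_ws[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for k in range(len(centers)):
            members = client_ws[assign == k]
            if len(members):  # keep a center unchanged if it has no members
                centers[k] = members.mean(axis=0)
    return centers, assign
```

Each center then serves as the global model for its cluster of statistically similar clients, rather than forcing one model to fit all of them.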
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.