Federated Learning with Domain Shift Eraser
- URL: http://arxiv.org/abs/2503.13063v1
- Date: Mon, 17 Mar 2025 11:10:31 GMT
- Title: Federated Learning with Domain Shift Eraser
- Authors: Zheng Wang, Zihui Wang, Zheng Wang, Xiaoliang Fan, Cheng Wang
- Abstract summary: Federated learning (FL) is emerging as a promising technique for collaborative learning without local data leaving clients' devices. We propose a novel FL framework, Federated Domain Shift Eraser (FDSE), to improve model performance.
- Score: 16.685557256108265
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is emerging as a promising technique for collaborative learning without local data leaving clients' devices. However, clients' data originating from diverse domains may degrade model performance due to domain shifts, preventing the model from learning a consistent representation space. In this paper, we propose a novel FL framework, Federated Domain Shift Eraser (FDSE), to improve model performance by erasing each client's domain skew differently while enhancing their consensus. First, we formulate the model's forward pass as an iterative deskewing process that alternately extracts and then deskews features. This is achieved efficiently by decomposing each original layer in the neural network into a Domain-agnostic Feature Extractor (DFE) and a Domain-specific Skew Eraser (DSE). Then, a regularization term is applied to ensure the effectiveness of feature deskewing by pulling local statistics of DSE outputs toward globally consistent ones. Finally, the DFE modules are fairly aggregated and broadcast to all clients to maximize their consensus, while the DSE modules are personalized for each client via similarity-aware aggregation to erase their domain skew differently. Comprehensive experiments on three datasets confirm the advantages of our method in terms of accuracy, efficiency, and generalizability.
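As a rough illustration of the mechanism described in the abstract, here is a minimal PyTorch sketch of the DFE/DSE decomposition, the statistics regularizer, and similarity-aware aggregation. The module names, the 1x1 DSE, and the softmax similarity weighting are our assumptions, not the authors' released code.

```python
# Hedged sketch of FDSE's ideas: each original layer splits into a shared
# Domain-agnostic Feature Extractor (DFE) and a per-client Domain-specific
# Skew Eraser (DSE); details here are illustrative assumptions.
import torch
import torch.nn as nn


class DecomposedLayer(nn.Module):
    """One original conv layer split into a shared DFE and a per-client DSE."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # DFE: shared across clients, aggregated fairly on the server.
        self.dfe = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # DSE: kept per client (a lightweight 1x1 conv in this sketch).
        self.dse = nn.Conv2d(out_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.dse(self.dfe(x)))  # extract, then deskew


def stat_consistency_loss(feats, global_mean, global_var):
    """Regularizer pulling local statistics of DSE outputs toward global ones."""
    mu = feats.mean(dim=(0, 2, 3))
    var = feats.var(dim=(0, 2, 3))
    return ((mu - global_mean) ** 2 + (var - global_var) ** 2).sum()


def similarity_aggregate(dse_states: list, sims: torch.Tensor) -> dict:
    """Personalize one client's DSE by similarity-weighted averaging of peers'."""
    w = torch.softmax(sims, dim=0)
    return {k: sum(w[i] * s[k] for i, s in enumerate(dse_states))
            for k in dse_states[0]}
```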
Related papers
- Byzantine Resilient Federated Multi-Task Representation Learning [1.6114012813668932]
We propose BR-MTRL, a Byzantine-resilient multi-task representation learning framework that handles faulty or malicious agents.
Our approach leverages representation learning through a shared neural network model, where all clients share fixed layers, except for a client-specific final layer.
arXiv Detail & Related papers (2025-03-24T23:26:28Z)
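A minimal sketch of BR-MTRL's shared-layers / client-specific-final-layer split described above; the architecture and the `shared_state` helper are hypothetical, not the paper's code.

```python
# Illustrative split: clients upload only the shared representation layers;
# the final layer stays local to each client (architecture is an assumption).
import torch.nn as nn


def make_client_model(in_dim: int = 784, feat_dim: int = 64,
                      n_classes: int = 10) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(in_dim, feat_dim), nn.ReLU(),  # shared representation layers
        nn.Linear(feat_dim, n_classes),          # client-specific final layer
    )


def shared_state(model: nn.Sequential) -> dict:
    """Parameters uploaded to the server: everything but the final layer."""
    last = f"{len(model) - 1}."
    return {k: v for k, v in model.state_dict().items()
            if not k.startswith(last)}
```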
- Let Synthetic Data Shine: Domain Reassembly and Soft-Fusion for Single Domain Generalization [68.41367635546183]
Single Domain Generalization aims to train models with consistent performance across diverse scenarios using data from a single source.
We propose Discriminative Domain Reassembly and Soft-Fusion (DRSF), a training framework leveraging synthetic data to improve model generalization.
arXiv Detail & Related papers (2025-03-17T18:08:03Z)
- FISC: Federated Domain Generalization via Interpolative Style Transfer and Contrastive Learning [5.584498171854557]
Federated Learning (FL) shows promise in preserving privacy and enabling collaborative learning.
We introduce FISC, a novel FL domain generalization paradigm that handles more complex domain distributions across clients.
Our method achieves accuracy improvements ranging from 3.64% to 57.22% on unseen domains.
arXiv Detail & Related papers (2024-10-30T00:50:23Z)
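One plausible, feature-level reading of FISC's interpolative style transfer: renormalize features toward a style interpolated between the local style and cross-client statistics. The interpolation coefficient and the source of the cross-client statistics are our assumptions.

```python
# AdaIN-style re-styling toward an interpolated cross-client style
# (a sketch under assumed inputs, not the paper's exact formulation).
import torch


def interpolate_style(feat: torch.Tensor, other_mean: torch.Tensor,
                      other_std: torch.Tensor, lam: float = 0.5,
                      eps: float = 1e-5) -> torch.Tensor:
    mu = feat.mean(dim=(2, 3), keepdim=True)          # local style: mean
    std = feat.std(dim=(2, 3), keepdim=True) + eps    # local style: std
    mix_mu = lam * mu + (1 - lam) * other_mean        # interpolated style
    mix_std = lam * std + (1 - lam) * other_std
    return (feat - mu) / std * mix_std + mix_mu       # re-normalize features
```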
- Feature Diversification and Adaptation for Federated Domain Generalization [27.646565383214227]
In real-world applications, local clients often operate within their limited domains, leading to a "domain shift" across clients.
We introduce the concept of federated feature diversification, which helps local models learn client-invariant representations while preserving privacy.
Our resultant global model shows robust performance on unseen test domain data.
arXiv Detail & Related papers (2024-07-11T07:45:10Z)
- StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose StyDeSty, which explicitly accounts for the alignment of the source and pseudo domains in the process of data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z)
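A hedged sketch of the source/pseudo-domain alignment idea above: train on both the source view and an augmented (pseudo-domain) view while pulling their features together. The `features`/`head` interface and the MSE alignment loss are our assumptions; the paper's full min-max objective may differ.

```python
# Minimal alignment step for augmentation-based single-DG (assumed interface).
import torch.nn.functional as F


def alignment_step(model, x, x_aug, y):
    """Task loss on both views plus a pull of pseudo features toward source."""
    f_src = model.features(x)
    f_pseudo = model.features(x_aug)
    task = (F.cross_entropy(model.head(f_src), y)
            + F.cross_entropy(model.head(f_pseudo), y))
    align = F.mse_loss(f_pseudo, f_src.detach())
    return task + align
```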
- Model-Contrastive Federated Domain Adaptation [3.9435648520559177]
Federated domain adaptation (FDA) aims to collaboratively transfer knowledge from source clients (domains) to a related but different target client.
We propose a model-based method named FDAC, aiming to address Federated Domain Adaptation based on Contrastive learning and the Vision Transformer (ViT).
To the best of our knowledge, FDAC is the first attempt to learn transferable representations by manipulating the latent architecture of ViT under the federated setting.
arXiv Detail & Related papers (2023-05-07T23:48:03Z)
- Meta-DMoE: Adapting to Domain Shift by Meta-Distillation from Mixture-of-Experts [33.21435044949033]
Most existing methods perform training on multiple source domains using a single model.
We propose a novel framework for unsupervised test-time adaptation, which is formulated as a knowledge distillation process.
arXiv Detail & Related papers (2022-10-08T02:28:10Z)
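A minimal sketch of test-time adaptation as knowledge distillation from a mixture of source-domain experts; plain averaging stands in for the paper's meta-learned aggregation, and the temperature is our choice.

```python
# KL-distill the aggregated expert prediction into a student on unlabeled
# test data (aggregation by averaging is a simplification).
import torch
import torch.nn.functional as F


def distill_step(student, experts, x, tau: float = 2.0) -> torch.Tensor:
    """One distillation step on a batch of test inputs x."""
    with torch.no_grad():
        teacher_logits = torch.stack([e(x) for e in experts]).mean(dim=0)
    t = F.softmax(teacher_logits / tau, dim=-1)
    s = F.log_softmax(student(x) / tau, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * tau * tau
```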
- FedILC: Weighted Geometric Mean and Invariant Gradient Covariance for Federated Learning on Non-IID Data [69.0785021613868]
Federated learning is a distributed machine learning approach in which a shared server model learns by aggregating parameter updates computed locally on the training data of spatially distributed client silos.
We propose the Federated Invariant Learning Consistency (FedILC) approach, which leverages the gradient covariance and the geometric mean of Hessians to capture both inter-silo and intra-silo consistencies.
This is relevant to various fields such as healthcare, computer vision, and the Internet of Things (IoT).
arXiv Detail & Related papers (2022-05-19T03:32:03Z)
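One way to realize an element-wise weighted geometric mean over per-silo gradients, with a sign-agreement mask in the spirit of invariant learning consistency; the weighting and masking details are our simplifications, not the paper's exact rule.

```python
# Weighted geometric mean of per-silo gradients; elements where silos
# disagree on sign are zeroed out (an assumed, simplified mask).
import torch


def weighted_geometric_mean(grads: list, weights: list) -> torch.Tensor:
    """grads: per-silo gradient tensors of equal shape; weights sum to 1."""
    g = torch.stack(grads)                                  # (n_silos, ...)
    w = torch.tensor(weights).view(-1, *([1] * (g.dim() - 1)))
    agree = torch.sign(g).sum(dim=0).abs() == g.shape[0]    # all signs match
    log_mag = (w * torch.log(g.abs() + 1e-12)).sum(dim=0)   # weighted log-mean
    return torch.exp(log_mag) * torch.sign(g[0]) * agree    # zero elsewhere
```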
- Federated and Generalized Person Re-identification through Domain and Feature Hallucinating [88.77196261300699]
We study the problem of federated domain generalization (FedDG) for person re-identification (re-ID).
We propose a novel method, called "Domain and Feature Hallucinating (DFH)", to produce diverse features for learning generalized local and global models.
Our method achieves the state-of-the-art performance for FedDG on four large-scale re-ID benchmarks.
arXiv Detail & Related papers (2022-03-05T09:15:13Z)
- Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles at the feature level.
Our method produces state-of-the-art results on the C-Driving dataset.
arXiv Detail & Related papers (2021-06-07T08:38:41Z)
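A hedged sketch of a cross-patch style swap: per-patch feature statistics are shuffled and re-applied AdaIN-style to diversify styles. The patch count and the shuffling scheme are assumptions.

```python
# Swap per-patch style statistics across patches of a feature map
# (h and w assumed divisible by p; scheme is an illustrative assumption).
import torch


def cross_patch_style_swap(feat: torch.Tensor, p: int = 4,
                           eps: float = 1e-5) -> torch.Tensor:
    b, c, h, w = feat.shape
    ph, pw = h // p, w // p
    x = feat.unfold(2, ph, ph).unfold(3, pw, pw)  # (b, c, p, p, ph, pw)
    x = x.reshape(b, c, p * p, ph * pw)
    mu = x.mean(dim=-1, keepdim=True)             # per-patch style stats
    std = x.std(dim=-1, keepdim=True) + eps
    perm = torch.randperm(p * p)                  # shuffle styles across patches
    x = (x - mu) / std * std[:, :, perm] + mu[:, :, perm]
    x = x.reshape(b, c, p, p, ph, pw).permute(0, 1, 2, 4, 3, 5)
    return x.reshape(b, c, h, w)                  # reassemble the feature map
```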
- Cross-Domain Facial Expression Recognition: A Unified Evaluation Benchmark and Adversarial Graph Learning [85.6386289476598]
We develop a novel adversarial graph representation adaptation (AGRA) framework for cross-domain holistic-local feature co-adaptation.
We conduct extensive and fair evaluations on several popular benchmarks and show that the proposed AGRA framework outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2020-08-03T15:00:31Z)