Federated Active Learning for Target Domain Generalisation
- URL: http://arxiv.org/abs/2312.02247v1
- Date: Mon, 4 Dec 2023 14:50:23 GMT
- Title: Federated Active Learning for Target Domain Generalisation
- Authors: Razvan Caramalau, Binod Bhattarai, Danail Stoyanov
- Abstract summary: We introduce FEDALV, which combines Active Learning (AL) with Federated Domain Generalisation (FDG).
FDG enables an image classification model trained on limited source-domain client data to generalise to an unseen target domain without sharing images.
FEDALV matches the target accuracy of full training while sampling as little as 5% of the source clients' data.
- Score: 20.582521330618768
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we introduce an Active Learning framework in Federated Learning for Target Domain Generalisation, harnessing the strengths of both learning paradigms. Our framework, FEDALV, composed of Active Learning (AL) and Federated Domain Generalisation (FDG), enables an image classification model trained on limited source-domain client data to generalise to an unseen target domain without sharing images. To this end, our FDG method, FEDA, consists of two optimisation updates during training, one at the client level and another at the server level. For the clients, the introduced losses aim to reduce feature complexity and to condition alignment, while at the server, the regularisation limits free-energy biases between the source and target outputs of the global model. The remaining component of FEDALV is AL with variable budgets, which queries the server to retrieve and sample the most informative local data for the targeted client. We performed multiple experiments on FDG with and without AL and compared against both conventional FDG baselines and Federated Active Learning baselines. Our extensive quantitative experiments demonstrate that our method surpasses multiple contemporary methods in both accuracy and efficiency. FEDALV matches the target accuracy of full training while sampling as little as 5% of the source clients' data.
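No code accompanies the abstract, so the following is a minimal PyTorch sketch of the pieces it describes: a client loss with a feature-complexity penalty, a server-side free-energy gap regulariser, and an entropy-based variable-budget AL query. All function names, the L2 feature penalty, the logsumexp free energy, and the entropy criterion are assumptions made for illustration, not FEDALV's actual losses.

```python
import torch
import torch.nn.functional as F

def client_update(model, x, y, lr=1e-2, lam=1e-3):
    """One local step: cross-entropy plus an L2 penalty on features, a
    stand-in for the abstract's 'reduce feature complexity' loss."""
    feats = model[0](x)                      # backbone features
    logits = model[1](feats)                 # classifier head
    loss = F.cross_entropy(logits, y) + lam * feats.pow(2).mean()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad
            p.grad = None
    return loss.item()

def free_energy(logits, T=1.0):
    """Free energy of a softmax classifier: -T * logsumexp(logits / T)."""
    return -T * torch.logsumexp(logits / T, dim=1)

def server_regulariser(model, x_src, x_proxy):
    """Penalise the free-energy gap between source data and a target proxy,
    loosely matching 'limits free energy biases between source and target'."""
    fe_src = free_energy(model(x_src)).mean()
    fe_tgt = free_energy(model(x_proxy)).mean()
    return (fe_src - fe_tgt).abs()

def active_query(model, x_pool, budget):
    """Variable-budget AL query: return the 'budget' most-uncertain pool
    samples by predictive entropy, one plausible 'most informative' rule."""
    with torch.no_grad():
        p = F.softmax(model(x_pool), dim=1)
        entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=1)
    return entropy.topk(budget).indices

# Toy usage with random tensors standing in for one client's images.
model = torch.nn.Sequential(torch.nn.Linear(32, 16), torch.nn.Linear(16, 4))
x, y = torch.randn(64, 32), torch.randint(0, 4, (64,))
client_update(model, x, y)
reg = server_regulariser(model, x, torch.randn(64, 32))
picked = active_query(model, torch.randn(256, 32), budget=int(0.05 * 256))
print(reg.item(), picked.shape)  # ~5% of the pool, echoing the 5% above
```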
Related papers
- Feature Diversification and Adaptation for Federated Domain Generalization [27.646565383214227]
In real-world applications, local clients often operate within their limited domains, leading to a 'domain shift' across clients.
We introduce the concept of federated feature diversification, which helps local models learn client-invariant representations while preserving privacy.
Our resultant global model shows robust performance on unseen test domain data.
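The summary names the mechanism but not its form. A minimal sketch follows, assuming 'feature diversification' means re-normalising local feature maps with a blend of local and server-aggregated channel statistics (an AdaIN-style reading); the blend weight alpha and all names are hypothetical.

```python
import torch

def diversify_features(feats, global_mean, global_std, alpha=0.5):
    """Hypothetical feature-statistics mixing: re-normalise a client's
    feature maps with a blend of local and globally shared channel
    statistics, so the local model sees styles beyond its own domain.

    feats: (N, C, H, W) activations from a local backbone.
    global_mean / global_std: (C,) statistics aggregated on the server
    (scalars only, so no raw images or features leave the client)."""
    mu = feats.mean(dim=(0, 2, 3))              # local channel means
    sigma = feats.std(dim=(0, 2, 3)) + 1e-5     # local channel stds
    mix_mu = alpha * mu + (1 - alpha) * global_mean
    mix_sigma = alpha * sigma + (1 - alpha) * global_std
    normed = (feats - mu[None, :, None, None]) / sigma[None, :, None, None]
    return normed * mix_sigma[None, :, None, None] + mix_mu[None, :, None, None]

feats = torch.randn(8, 16, 4, 4)
out = diversify_features(feats, torch.zeros(16), torch.ones(16))
print(out.shape)  # torch.Size([8, 16, 4, 4])
```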
arXiv Detail & Related papers (2024-07-11T07:45:10Z)
- SFedCA: Credit Assignment-Based Active Client Selection Strategy for Spiking Federated Learning [15.256986486372407]
Spiking federated learning allows resource-constrained devices to train collaboratively at low power consumption without exchanging local data.
Existing spiking federated learning methods employ a random selection approach for client aggregation, assuming unbiased client participation.
We propose a credit assignment-based active client selection strategy, the SFedCA, to judiciously aggregate clients that contribute to the global sample distribution balance.
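SFedCA's credits come from spiking-network dynamics, which are not reproduced here; the sketch below keeps only the selection skeleton from the summary, scoring each client by how much its data would rebalance the aggregated label histogram. The imbalance measure and histogram inputs are assumptions.

```python
import numpy as np

def imbalance(hist):
    """Deviation of a label histogram from uniform (0 = perfectly balanced)."""
    p = hist / hist.sum()
    return np.abs(p - 1.0 / len(p)).sum()

def select_clients(client_label_hists, k):
    """Hypothetical credit-based active selection: a client earns credit for
    how much its data would push the aggregated label distribution toward
    balance; the k highest-credit clients are aggregated this round."""
    total = np.sum(client_label_hists, axis=0).astype(float)
    credits = [imbalance(total) - imbalance(total + h)
               for h in client_label_hists]
    return np.argsort(credits)[::-1][:k]   # indices of the k highest credits

hists = np.array([[90, 5, 5], [10, 45, 45], [34, 33, 33]], dtype=float)
print(select_clients(hists, k=2))  # clients whose data best rebalance labels
```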
arXiv Detail & Related papers (2024-06-18T01:56:22Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
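The cited paper's exact formulation is not given in the summary; below is the standard weak-to-strong consistency pattern as one concrete instance of consistency regularisation, with the two augmented views simulated by noise of different strength.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_weak, x_strong):
    """Generic consistency regularisation: predictions on a strongly
    augmented view are pulled toward (detached) predictions on a weakly
    augmented view of the same inputs."""
    with torch.no_grad():
        target = F.softmax(model(x_weak), dim=1)        # pseudo-target
    log_pred = F.log_softmax(model(x_strong), dim=1)
    return F.kl_div(log_pred, target, reduction="batchmean")

model = torch.nn.Linear(32, 4)                          # toy 'adapted' model
x = torch.randn(16, 32)
loss = consistency_loss(model, x + 0.01 * torch.randn_like(x),
                        x + 0.30 * torch.randn_like(x))
print(loss.item())
```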
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Divide and Adapt: Active Domain Adaptation via Customized Learning [56.79144758380419]
We present Divide-and-Adapt (DiaNA), a new ADA framework that partitions the target instances into four categories with stratified transferable properties.
With a novel data subdivision protocol based on uncertainty and domainness, DiaNA can accurately recognize the most gainful samples.
Thanks to the "divide-and-adapt" spirit, DiaNA can handle data with large variations of domain gap.
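Only the four-way split by uncertainty and 'domainness' is taken from the summary; the thresholds, score definitions, and category names in this sketch are hypothetical.

```python
import numpy as np

def subdivide(uncertainty, domainness, tau_u=0.5, tau_d=0.5):
    """Hypothetical version of DiaNA's subdivision: bin each target sample
    by (low/high uncertainty) x (source-like/target-specific domainness)
    into four categories with different transferable properties."""
    u_hi = uncertainty >= tau_u
    d_hi = domainness >= tau_d
    return np.where(
        u_hi,
        np.where(d_hi, "uncertain-target-specific", "uncertain-source-like"),
        np.where(d_hi, "confident-target-specific", "confident-source-like"))

u = np.array([0.1, 0.9, 0.2, 0.8])   # e.g. predictive entropy, normalised
d = np.array([0.2, 0.1, 0.9, 0.95])  # e.g. a domain-discriminator score
print(subdivide(u, d))
```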
arXiv Detail & Related papers (2023-07-21T14:37:17Z)
- Personalized Federated Learning via Amortized Bayesian Meta-Learning [21.126405589760367]
We introduce a new perspective on personalized federated learning through Amortized Bayesian Meta-Learning.
Specifically, we propose a novel algorithm called FedABML, which employs hierarchical variational inference across clients.
Our theoretical analysis provides an upper bound on the average generalization error and guarantees the generalization performance on unseen data.
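FedABML's hierarchical inference is richer than the summary states; the sketch below shows only the generic NLL + KL(q || global prior) shape such amortised Bayesian objectives take, with a Gaussian mean-field posterior over the weights of a toy linear model.

```python
import torch
import torch.nn.functional as F

def local_objective(x, y, q_mu, q_logvar, prior_mu, prior_logvar):
    """Sample weights from the client's Gaussian posterior, score the local
    data, and regularise the posterior toward the server's global prior."""
    std = (0.5 * q_logvar).exp()
    w = q_mu + std * torch.randn_like(std)     # reparameterised weight sample
    logits = x @ w.view(x.size(1), -1)         # toy linear model
    nll = F.cross_entropy(logits, y)
    kl = 0.5 * ((q_logvar - prior_logvar).exp()
                + (q_mu - prior_mu).pow(2) / prior_logvar.exp()
                - 1 + prior_logvar - q_logvar).sum()
    return nll + kl / x.size(0)

d, c = 8, 3
q_mu = torch.zeros(d * c, requires_grad=True)
q_logvar = torch.zeros(d * c, requires_grad=True)
loss = local_objective(torch.randn(32, d), torch.randint(0, c, (32,)),
                       q_mu, q_logvar, torch.zeros(d * c), torch.zeros(d * c))
loss.backward()
print(loss.item())
```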
arXiv Detail & Related papers (2023-07-05T11:58:58Z)
- FACT: Federated Adversarial Cross Training [0.0]
Federated Adversarial Cross Training (FACT) uses implicit domain differences between source clients to identify domain shifts in the target domain.
We empirically show that FACT outperforms state-of-the-art federated, non-federated and source-free domain adaptation models.
arXiv Detail & Related papers (2023-06-01T12:25:43Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
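The MMD loss is the one concrete ingredient the summary names; here is a self-contained RBF-kernel MMD, with plain tensors standing in for the memory-bank and target-specific features (the bank's bookkeeping is omitted).

```python
import torch

def rbf_mmd(a, b, sigma=1.0):
    """Squared Maximum Mean Discrepancy with an RBF kernel: small when the
    two feature sets are drawn from similar distributions."""
    def k(x, y):
        d = torch.cdist(x, y).pow(2)
        return torch.exp(-d / (2 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

bank = torch.randn(128, 16)     # stand-in for memory-bank (source-like) features
batch = torch.randn(32, 16)     # stand-in for target-specific batch features
print(rbf_mmd(bank, batch).item())
```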
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- FLIS: Clustered Federated Learning via Inference Similarity for Non-IID Data Distribution [7.924081556869144]
We present a new algorithm, FLIS, which groups the client population into clusters with jointly trainable data distributions.
We present experimental results to demonstrate the benefits of FLIS over the state-of-the-art benchmarks on CIFAR-100/10, SVHN, and FMNIST datasets.
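FLIS's exact similarity measure is not given in the summary; the sketch below assumes each client's model is evaluated on a small shared probe set and clients are hierarchically clustered by the cosine similarity of those predictions.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_clients(client_probs, t=0.5):
    """Group clients whose models classify a shared probe set similarly.
    client_probs[i]: (probe_size, classes) softmax outputs of client i's
    model; the flattening and the distance threshold t are assumptions."""
    flat = np.stack([p.reshape(-1) for p in client_probs])
    return fcluster(linkage(flat, method="average", metric="cosine"),
                    t=t, criterion="distance")

rng = np.random.default_rng(0)
a = rng.dirichlet(np.ones(4), size=(3, 20))        # 3 clients, diffuse outputs
b = rng.dirichlet(np.full(4, 0.2), size=(2, 20))   # 2 clients, peaky outputs
print(cluster_clients(list(a) + list(b)))          # cluster label per client
```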
arXiv Detail & Related papers (2022-08-20T22:10:48Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles at the feature level.
Our method produces state-of-the-art results on the C-Driving dataset.
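A minimal sketch of the patch-statistics idea, assuming 'style' means per-patch channel mean/std exchanged across samples in a batch; where and how the real CPSS swaps styles may differ.

```python
import torch

def cross_patch_style_swap(feats, patch=2):
    """Split each feature map into patch x patch cells, treat per-patch
    channel mean/std as 'style', and re-normalise every patch with the
    statistics of the matching patch from another randomly chosen sample."""
    n, c, h, w = feats.shape
    ph, pw = h // patch, w // patch
    x = feats.reshape(n, c, patch, ph, patch, pw)
    mu = x.mean(dim=(3, 5), keepdim=True)        # per-patch channel means
    sd = x.std(dim=(3, 5), keepdim=True) + 1e-5  # per-patch channel stds
    perm = torch.randperm(n)                     # donor sample per batch item
    out = (x - mu) / sd * sd[perm] + mu[perm]    # adopt the donor's style
    return out.reshape(n, c, h, w)

f = torch.randn(4, 8, 8, 8)
print(cross_patch_style_swap(f).shape)  # torch.Size([4, 8, 8, 8])
```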
arXiv Detail & Related papers (2021-06-07T08:38:41Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
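Fed-ZDAC/Fed-ZDAS details are not reproduced here; the sketch below shows one common inversion-style way to realise zero-shot augmentation: optimise random inputs until the shared model confidently assigns them to an under-represented class, then treat them as extra training data.

```python
import torch
import torch.nn.functional as F

def zero_shot_augment(model, target_class, n=8, dim=32, steps=100, lr=0.1):
    """With no real data for an under-represented class, synthesise inputs
    the shared model labels as that class (a small L2 prior keeps the
    inputs bounded). All hyperparameters here are illustrative."""
    for p in model.parameters():
        p.requires_grad_(False)               # only the inputs are optimised
    x = torch.randn(n, dim, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    y = torch.full((n,), target_class)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y) + 1e-3 * x.pow(2).mean()
        loss.backward()
        opt.step()
    return x.detach()

model = torch.nn.Linear(32, 4)        # stand-in for the aggregated global model
synth = zero_shot_augment(model, target_class=2)
print(synth.shape)                    # torch.Size([8, 32]) synthetic samples
```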
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.