Federated Learning from Only Unlabeled Data with
Class-Conditional-Sharing Clients
- URL: http://arxiv.org/abs/2204.03304v1
- Date: Thu, 7 Apr 2022 09:12:00 GMT
- Title: Federated Learning from Only Unlabeled Data with
Class-Conditional-Sharing Clients
- Authors: Nan Lu, Zhao Wang, Xiaoxiao Li, Gang Niu, Qi Dou, Masashi Sugiyama
- Abstract summary: Supervised federated learning (FL) enables multiple clients to share the trained model without sharing their labeled data.
We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients.
- Score: 98.22390453672499
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised federated learning (FL) enables multiple clients to share the
trained model without sharing their labeled data. However, potential clients
might even be reluctant to label their own data, which could limit the
applicability of FL in practice. In this paper, we show the possibility of
unsupervised FL whose model is still a classifier for predicting class labels,
if the class-prior probabilities are shifted while the class-conditional
distributions are shared among the unlabeled data owned by the clients. We
propose federation of unsupervised learning (FedUL), where the unlabeled data
are transformed into surrogate labeled data for each of the clients, a modified
model is trained by supervised FL, and the wanted model is recovered from the
modified model. FedUL is a very general solution to unsupervised FL: it is
compatible with many supervised FL methods, and the recovery of the wanted
model can be theoretically guaranteed as if the data have been labeled.
Experiments on benchmark and real-world datasets demonstrate the effectiveness
of FedUL. Code is available at https://github.com/lunanbit/FedUL.
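The pipeline in the abstract (transform, train, recover) can be pictured as wrapping the wanted classifier in a fixed label-transition layer. Below is a minimal PyTorch sketch of that construction, assuming each client knows the class priors of its unlabeled sets so that the transition matrix Q (columns summing to one) is computable; the names and parametrization are illustrative, not the paper's exact recipe.
```python
import torch
import torch.nn as nn

class SurrogateWrapper(nn.Module):
    """Wrap the wanted classifier f with a fixed label-transition layer.
    Q[k, j] = p(surrogate label k | true class j) is built from the known
    class priors of the client's unlabeled sets; columns sum to one."""

    def __init__(self, f: nn.Module, Q: torch.Tensor):
        super().__init__()
        self.f = f                    # wanted model: x -> class logits
        self.register_buffer("Q", Q)  # fixed transition, never trained

    def forward(self, x):
        p_y = torch.softmax(self.f(x), dim=1)  # estimated p(y | x)
        p_bar = p_y @ self.Q.t()               # estimated p(surrogate | x)
        return torch.log(p_bar + 1e-12)        # log-probs for nn.NLLLoss

# Train the wrapper with ordinary supervised FL (e.g., FedAvg) on
# (x, set_index) pairs; afterwards recover the wanted classifier by
# keeping wrapper.f and discarding the transition layer.
```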
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Language-Guided Transformer for Federated Multi-Label Classification [32.26913287627532]
Federated Learning (FL) enables multiple users to collaboratively train a robust model in a privacy-preserving manner without sharing their private data.
Most existing FL approaches consider only traditional single-label image classification and ignore the impact of transferring the task to multi-label image classification.
We propose a novel FL framework of Language-Guided Transformer (FedLGT) to tackle this challenging task, which aims to exploit and transfer knowledge across different clients for learning a robust global model.
arXiv Detail & Related papers (2023-12-12T11:03:51Z)
- Exploiting Label Skews in Federated Learning with Model Concatenation [39.38427550571378]
Federated Learning (FL) has emerged as a promising solution to perform deep learning on different data owners without exchanging raw data.
Among different non-IID types, label skews have been challenging and common in image classification and other tasks.
We propose FedConcat, a simple and effective approach that concatenates these local models as the base of the global model.
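A hedged sketch of the concatenation idea, assuming the local models' frozen feature extractors share an output dimension feat_dim (FedConcat's clustering and training procedure is more involved):
```python
import torch
import torch.nn as nn

class ConcatModel(nn.Module):
    """Freeze the feature extractors of several locally trained models,
    concatenate their features, and train a fresh classifier on top."""

    def __init__(self, encoders, feat_dim, num_classes):
        super().__init__()
        self.encoders = nn.ModuleList(encoders)
        for p in self.encoders.parameters():
            p.requires_grad = False  # keep the local knowledge intact
        self.head = nn.Linear(feat_dim * len(encoders), num_classes)

    def forward(self, x):
        feats = [enc(x) for enc in self.encoders]  # one view per local model
        return self.head(torch.cat(feats, dim=1))
```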
arXiv Detail & Related papers (2023-12-11T10:44:52Z)
- Adaptive Test-Time Personalization for Federated Learning [51.25437606915392]
We introduce a novel setting called test-time personalized federated learning (TTPFL).
In TTPFL, clients locally adapt a global model in an unsupervised way, without relying on any labeled data at test time.
We propose a novel algorithm called ATP to adaptively learn the adaptation rates for each module in the model from distribution shifts among source domains.
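As a rough illustration of module-wise adaptation, here is a hedged sketch in which the per-module rates are assumed already learned from source domains and entropy minimization stands in for the unsupervised objective:
```python
import torch

def test_time_adapt(model, x, rates, steps=1):
    """Adapt `model` on an unlabeled test batch `x` via entropy
    minimization, scaling each parameter's update by the adaptation
    rate learned for its top-level module."""
    for _ in range(steps):
        probs = torch.softmax(model(x), dim=1)
        entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1).mean()
        grads = torch.autograd.grad(entropy, list(model.parameters()))
        with torch.no_grad():
            for (name, p), g in zip(model.named_parameters(), grads):
                p -= rates[name.split(".")[0]] * g  # module-wise step size
    return model
```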
arXiv Detail & Related papers (2023-10-28T20:42:47Z)
- PFL-GAN: When Client Heterogeneity Meets Generative Models in Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation.
Empirical results from rigorous experimentation on several well-known datasets demonstrate the effectiveness of PFL-GAN.
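A hedged sketch of the similarity-weighted aggregation step, assuming a precomputed client-similarity matrix (how PFL-GAN derives similarity from its shared GANs is not shown):
```python
import numpy as np

def weighted_aggregate(client_weights, similarity):
    """Return one personalized parameter vector per client: row i is a
    similarity-weighted average of all clients' flattened parameters."""
    sim = similarity / similarity.sum(axis=1, keepdims=True)  # row-normalize
    stacked = np.stack(client_weights)  # (n_clients, n_params)
    return sim @ stacked
```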
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm named FedCSD, a class prototype similarity distillation method in a federated framework, to align the local and global models.
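A hedged sketch of logit-level alignment, using a plain temperature-scaled KL term as an illustrative stand-in for the paper's class-prototype-weighted distillation loss:
```python
import torch.nn.functional as F

def csd_loss(local_logits, global_logits, T=2.0):
    """Pull the local model's softened predictions toward the (detached)
    global model's, mitigating client drift at the logit level."""
    p_global = F.softmax(global_logits.detach() / T, dim=1)
    log_p_local = F.log_softmax(local_logits / T, dim=1)
    return F.kl_div(log_p_local, p_global, reduction="batchmean") * T * T
```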
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- Towards Unbiased Training in Federated Open-world Semi-supervised Learning [15.08153616709326]
We propose a novel Federated open-world Semi-Supervised Learning (FedoSSL) framework, which can solve the key challenge in distributed and open-world settings.
We adopt an uncertainty-aware suppressed loss to alleviate the biased training between locally unseen and globally unseen classes.
The proposed FedoSSL can be easily adapted to state-of-the-art FL methods, which is also validated via extensive experiments on benchmarks and real-world datasets.
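A hedged sketch of an uncertainty-suppressed loss; the 1/(1 + u) weighting is an illustrative choice, not the paper's exact formulation:
```python
def suppressed_loss(losses, uncertainty):
    """Down-weight per-sample losses (tensors) by the model's uncertainty
    so that confidently wrong pseudo-labels on unseen classes dominate less."""
    return (losses / (1.0 + uncertainty)).mean()
```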
arXiv Detail & Related papers (2023-05-01T11:12:37Z)
- Federated Learning on Heterogeneous and Long-Tailed Data via Classifier Re-Training with Federated Features [24.679535905451758]
Federated learning (FL) provides a privacy-preserving solution for distributed machine learning tasks.
One challenging problem that severely damages the performance of FL models is the co-occurrence of data heterogeneity and long-tail distribution.
We propose a novel privacy-preserving FL method for heterogeneous and long-tailed data via Classifier Re-training with Federated Features (CReFF).
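A hedged sketch of the classifier re-training stage, assuming the federated features (synthesized in CReFF via gradient matching, omitted here) are given as class-balanced tensors:
```python
import torch
import torch.nn as nn

def retrain_classifier(fed_features, labels, num_classes, epochs=100, lr=0.1):
    """Fit a new balanced classifier head on class-balanced feature
    vectors, keeping the shared feature extractor fixed."""
    head = nn.Linear(fed_features.shape[1], num_classes)
    opt = torch.optim.SGD(head.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(head(fed_features), labels)
        loss.backward()
        opt.step()
    return head
```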
arXiv Detail & Related papers (2022-04-28T10:35:11Z)
- Multi-Center Federated Learning [62.32725938999433]
Federated learning (FL) can protect data privacy in distributed learning.
It merely collects local gradients from users without access to their data.
We propose a novel multi-center aggregation mechanism.
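A hedged sketch of multi-center aggregation, assuming cluster assignments are given and every center has at least one member (the paper optimizes assignments jointly with the models):
```python
import numpy as np

def multi_center_aggregate(client_weights, assignment, num_centers):
    """Average each cluster's member models into its own center model."""
    stacked = np.stack(client_weights)  # (n_clients, n_params)
    assignment = np.asarray(assignment)
    return [stacked[assignment == k].mean(axis=0) for k in range(num_centers)]
```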
arXiv Detail & Related papers (2021-08-19T12:20:31Z)
- FOCUS: Dealing with Label Quality Disparity in Federated Learning [25.650278226178298]
We propose Federated Opportunistic Computing for Ubiquitous Systems (FOCUS) to address the challenge of label quality disparity across clients.
FOCUS quantifies the credibility of the client local data without directly observing them.
It effectively identifies clients with noisy labels and reduces their impact on the model performance.
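A hedged sketch of turning per-client credibility scores (e.g., a mutual cross-entropy between client and server models) into aggregation weights; the exponential mapping and alpha are illustrative:
```python
import numpy as np

def credibility_weights(mutual_ce, alpha=1.0):
    """Map per-client mutual cross-entropy scores to aggregation weights:
    the larger a client's disagreement score, the smaller its weight."""
    scores = np.exp(-alpha * np.asarray(mutual_ce))
    return scores / scores.sum()
```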
arXiv Detail & Related papers (2020-01-29T09:31:01Z)