FUNAvg: Federated Uncertainty Weighted Averaging for Datasets with Diverse Labels
- URL: http://arxiv.org/abs/2407.07488v1
- Date: Wed, 10 Jul 2024 09:23:55 GMT
- Title: FUNAvg: Federated Uncertainty Weighted Averaging for Datasets with Diverse Labels
- Authors: Malte Tölle, Fernando Navarro, Sebastian Eble, Ivo Wolf, Bjoern Menze, Sandy Engelhardt
- Abstract summary: We propose to learn a joint backbone in a federated manner.
We observe that the different segmentation heads, although trained only on the individual client's labels, also learn information about the other labels not present at the respective site.
With our method, which we refer to as FUNAvg, we are on average on par with models trained and tested on the same dataset.
- Score: 37.20677220716839
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning is a popular paradigm for training a joint model in a distributed, privacy-preserving environment. However, partial annotations pose an obstacle: the categories of labels are heterogeneous across clients. We propose to learn a joint backbone in a federated manner, while each site receives its own multi-label segmentation head. Using Bayesian techniques, we observe that the different segmentation heads, although trained only on the individual client's labels, also learn information about the other labels not present at the respective site. This information is encoded in their predictive uncertainty. To obtain a final prediction, we leverage this uncertainty and perform a weighted average over the ensemble of distributed segmentation heads, which allows us to segment "locally unknown" structures. With our method, which we refer to as FUNAvg, we are on average on par with models trained and tested on the same dataset. The code is publicly available at https://github.com/Cardio-AI/FUNAvg.
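As an illustration of the fusion step described in the abstract, here is a minimal sketch of uncertainty-weighted averaging over an ensemble of segmentation heads. It assumes each head yields per-voxel class probabilities plus a per-voxel predictive-uncertainty map (e.g., the entropy of Monte Carlo dropout samples); the function name and the inverse-uncertainty weighting are illustrative choices, not taken from the paper's released code (see the GitHub repository above for the actual implementation).

```python
import torch

def uncertainty_weighted_average(probs, uncertainties, eps=1e-8):
    """Fuse an ensemble of segmentation heads by predictive uncertainty.

    probs:         (H, C, *spatial) class probabilities, one slice per head.
    uncertainties: (H, *spatial) per-voxel predictive uncertainty per head,
                   e.g. the entropy of Monte Carlo dropout samples.
    Returns:       (C, *spatial) fused class probabilities.
    """
    # Lower uncertainty -> larger weight; normalise over the head axis
    # so the weights at each voxel sum to one.
    weights = 1.0 / (uncertainties + eps)              # (H, *spatial)
    weights = weights / weights.sum(dim=0, keepdim=True)
    # Broadcast the weights over the class axis and average the heads.
    return (probs * weights.unsqueeze(1)).sum(dim=0)   # (C, *spatial)

# Toy usage: 3 client heads, 4 classes, a 2x2 image.
torch.manual_seed(0)
probs = torch.softmax(torch.randn(3, 4, 2, 2), dim=1)
uncertainty = torch.rand(3, 2, 2)  # stand-in for real uncertainty maps
segmentation = uncertainty_weighted_average(probs, uncertainty).argmax(dim=0)
print(segmentation)
```

The design intuition is that a head which is highly uncertain at a voxel, typically because the corresponding label was never annotated at its site, contributes little to the fused prediction there, so the heads that do know the structure dominate.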
Related papers
- Federated Learning with Label-Masking Distillation [33.80340338038264]
Federated learning provides a privacy-preserving way to collaboratively train models on data distributed over multiple local clients.
Because user behavior differs from client to client, the label distributions of different clients are significantly different.
We propose a label-masking distillation approach, termed FedLMD, that facilitates federated learning by perceiving the various label distributions of each client.
arXiv Detail & Related papers (2024-09-20T00:46:04Z)
- Federated Learning with Only Positive Labels by Exploring Label Correlations [78.59613150221597]
Federated learning aims to collaboratively learn a model by using the data from multiple users under privacy constraints.
In this paper, we study the multi-label classification problem under the federated learning setting.
We propose a novel and generic method termed Federated Averaging by exploring Label Correlations (FedALC).
arXiv Detail & Related papers (2024-04-24T02:22:50Z)
- Memory Consistency Guided Divide-and-Conquer Learning for Generalized Category Discovery [56.172872410834664]
Generalized category discovery (GCD) aims at addressing a more realistic and challenging setting of semi-supervised learning.
We propose a Memory Consistency guided Divide-and-conquer Learning framework (MCDL).
Our method outperforms state-of-the-art models by a large margin on both seen and unseen classes in generic image recognition.
arXiv Detail & Related papers (2024-01-24T09:39:45Z)
- Rethinking Semi-Supervised Federated Learning: How to co-train fully-labeled and fully-unlabeled client imaging data [6.322831694506287]
We propose Isolated Federated Learning (IsoFed), a learning scheme specifically designed for semi-supervised federated learning (SSFL) that circumvents the problem of naively averaging supervised and semi-supervised models together.
In particular, the training approach consists of two parts: (a) isolated aggregation of labeled and unlabeled client models, and (b) local self-supervised pretraining of isolated global models in all clients (part (a) is sketched after this entry).
arXiv Detail & Related papers (2023-10-28T20:41:41Z)
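The isolated aggregation in part (a) can be made concrete with a short sketch: labeled-client and unlabeled-client models are averaged within their own groups rather than into a single global model. This is a minimal illustration assuming plain FedAvg-style parameter averaging within each group; the function names are hypothetical and not taken from the paper's code.

```python
import torch

def fedavg(state_dicts, weights=None):
    """Weighted parameter average over a list of model state_dicts."""
    n = len(state_dicts)
    if weights is None:
        weights = [1.0 / n] * n  # uniform averaging by default
    return {
        key: sum(w * sd[key] for w, sd in zip(weights, state_dicts))
        for key in state_dicts[0]
    }

def isolated_aggregation(client_models, is_labeled):
    """Aggregate labeled and unlabeled client models separately,
    instead of averaging everything into one global model."""
    labeled = [m for m, lab in zip(client_models, is_labeled) if lab]
    unlabeled = [m for m, lab in zip(client_models, is_labeled) if not lab]
    return fedavg(labeled), fedavg(unlabeled)

# Toy usage: four one-parameter "models", two labeled, two unlabeled.
models = [{"w": torch.full((2,), float(i))} for i in range(4)]
lab_avg, unlab_avg = isolated_aggregation(models, [True, True, False, False])
print(lab_avg["w"], unlab_avg["w"])  # tensor([0.5, 0.5]) tensor([2.5, 2.5])
```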
- JointMatch: A Unified Approach for Diverse and Collaborative Pseudo-Labeling to Semi-Supervised Text Classification [65.268245109828]
Semi-supervised text classification (SSTC) has gained increasing attention due to its ability to leverage unlabeled data.
Existing approaches based on pseudo-labeling suffer from the issues of pseudo-label bias and error accumulation.
We propose JointMatch, a holistic approach for SSTC that addresses these challenges by unifying ideas from recent semi-supervised learning methods.
arXiv Detail & Related papers (2023-10-23T05:43:35Z)
- FedIL: Federated Incremental Learning from Decentralized Unlabeled Data with Convergence Analysis [23.70951896315126]
This work considers a server that holds a small labeled dataset and aims to use the unlabeled data on multiple clients for semi-supervised learning.
We propose a new framework with a generalized model, Federated Incremental Learning (FedIL), to address how to utilize the labeled data on the server and the unlabeled data on the clients separately.
arXiv Detail & Related papers (2023-02-23T07:12:12Z)
- Navigating Alignment for Non-identical Client Class Sets: A Label Name-Anchored Federated Learning Framework [26.902679793955972]
We propose FedAlign, a novel framework that aligns latent spaces across clients from both the label and the data perspective.
From the label perspective, we leverage expressive natural-language class names as a common ground for label encoders to anchor class representations.
From the data perspective, we regard the global class representations as anchors and leverage data points that are sufficiently close to or far from the anchors of locally unaware classes to align the data encoders across clients.
arXiv Detail & Related papers (2023-01-01T23:17:30Z)
- SemiFed: Semi-supervised Federated Learning with Consistency and Pseudo-Labeling [14.737638416823772]
Federated learning enables multiple clients, such as mobile phones and organizations, to collaboratively learn a shared model for prediction.
In this work, we focus on a new scenario for cross-silo federated learning, where data samples of each client are partially labeled.
We propose a new framework dubbed SemiFed that unifies two dominant approaches for semi-supervised learning: consistency regularization and pseudo-labeling (a generic sketch of this combination follows this entry).
arXiv Detail & Related papers (2021-08-21T01:14:27Z)
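Since SemiFed combines consistency regularization with pseudo-labeling, a generic sketch of that combination may help: confident pseudo-labels predicted on a weakly augmented view supervise the prediction on a strongly augmented view. This is a standard FixMatch-style formulation given for illustration only; SemiFed's actual client/server protocol and thresholding details may differ, and `semi_supervised_loss` is a hypothetical name.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, x_weak, x_strong, threshold=0.95):
    """Pseudo-labeling with consistency regularization on unlabeled data.

    x_weak / x_strong: weakly / strongly augmented views of the same batch.
    Only pseudo-labels above the confidence threshold contribute."""
    with torch.no_grad():
        probs = torch.softmax(model(x_weak), dim=1)
        confidence, pseudo_labels = probs.max(dim=1)
        mask = (confidence >= threshold).float()
    # Consistency: the strong view must match the confident pseudo-labels.
    logits_strong = model(x_strong)
    per_sample = F.cross_entropy(logits_strong, pseudo_labels, reduction="none")
    return (mask * per_sample).mean()

# Toy usage with a linear "model" on 8 unlabeled samples, 5 classes.
model = torch.nn.Linear(16, 5)
x = torch.randn(8, 16)
loss = semi_supervised_loss(model, x, x + 0.1 * torch.randn_like(x))
print(loss.item())
```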
- Federated Unsupervised Representation Learning [56.715917111878106]
We formulate a new problem in federated learning called Federated Unsupervised Representation Learning (FURL): learning a common representation model without supervision.
The proposed FedCA is composed of two key modules: a dictionary module, which aggregates representations of samples from each client and shares them with all clients to keep the representation space consistent, and an alignment module, which aligns each client's representations with those of a base model trained on public data.
arXiv Detail & Related papers (2020-10-18T13:28:30Z)
- Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning [78.88007892742438]
We study two essential scenarios of Federated Semi-Supervised Learning (FSSL) based on the location of the labeled data.
We propose a novel method to tackle these problems, which we refer to as Federated Matching (FedMatch).
arXiv Detail & Related papers (2020-06-22T09:43:41Z)