FedMLP: Federated Multi-Label Medical Image Classification under Task Heterogeneity
- URL: http://arxiv.org/abs/2406.18995v1
- Date: Thu, 27 Jun 2024 08:36:43 GMT
- Title: FedMLP: Federated Multi-Label Medical Image Classification under Task Heterogeneity
- Authors: Zhaobin Sun, Nannan Wu, Junjie Shi, Li Yu, Xin Yang, Kwang-Ting Cheng, Zengqiang Yan
- Abstract summary: Cross-silo federated learning (FL) enables decentralized organizations to collaboratively train models while preserving data privacy.
We propose a two-stage method, FedMLP, to combat missing classes from two aspects: pseudo-label tagging and global knowledge learning.
Experiments on two publicly available medical datasets validate the superiority of FedMLP over state-of-the-art federated semi-supervised and noisy-label learning approaches.
- Score: 30.49607763632271
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-silo federated learning (FL) enables decentralized organizations to collaboratively train models while preserving data privacy and has made significant progress in medical image classification. One common assumption is task homogeneity, where each client has access to all classes during training. However, in clinical practice, given a multi-label classification task, constrained by the level of medical knowledge and the prevalence of diseases, each institution may diagnose only partial categories, resulting in task heterogeneity. How to pursue effective multi-label medical image classification under task heterogeneity is under-explored. In this paper, we first formulate such a realistic label missing setting in the multi-label FL domain and propose a two-stage method FedMLP to combat class missing from two aspects: pseudo label tagging and global knowledge learning. The former utilizes a warmed-up model to generate class prototypes and select samples with high confidence to supplement missing labels, while the latter uses a global model as a teacher for consistency regularization to prevent forgetting missing class knowledge. Experiments on two publicly available medical datasets validate the superiority of FedMLP over state-of-the-art federated semi-supervised and noisy-label learning approaches under task heterogeneity. Code is available at https://github.com/szbonaldo/FedMLP.
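The sketch below is a rough illustration of the two components described in the abstract, not the authors' released implementation: prototype-based pseudo-label tagging for classes a client never annotates, and consistency regularization against the frozen global teacher. Function names, the cosine-similarity threshold `tau`, and the loss weight `lam` are illustrative assumptions.

```python
# Hypothetical sketch of the two FedMLP components described in the abstract:
# (a) prototype-based pseudo-label tagging for missing classes, and
# (b) consistency regularization against the frozen global teacher model.
# Names, threshold, and loss weight are illustrative assumptions.
import torch
import torch.nn.functional as F


def tag_missing_labels(features, labels, known_mask, prototypes, tau=0.9):
    """Supplement labels for classes a client never annotates.

    features:   (N, D) embeddings from the warmed-up model
    labels:     (N, C) partial multi-label targets (0/1)
    known_mask: (C,) bool, True where the client annotates that class
    prototypes: (C, D) class prototypes from the warmed-up model
    """
    sims = F.cosine_similarity(
        features.unsqueeze(1), prototypes.unsqueeze(0), dim=-1)  # (N, C)
    confident_pos = sims > tau                 # high-confidence positives
    pseudo = labels.clone().float()
    missing = ~known_mask                      # classes this client cannot diagnose
    pseudo[:, missing] = confident_pos[:, missing].float()
    return pseudo


def local_loss(student_logits, teacher_logits, targets, lam=1.0):
    """Supervised BCE on (pseudo-completed) targets plus a consistency term
    toward the global teacher, which discourages forgetting locally missing
    class knowledge."""
    sup = F.binary_cross_entropy_with_logits(student_logits, targets)
    consist = F.mse_loss(torch.sigmoid(student_logits),
                         torch.sigmoid(teacher_logits).detach())
    return sup + lam * consist
```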
Related papers
- Rethinking Semi-Supervised Federated Learning: How to co-train fully-labeled and fully-unlabeled client imaging data [6.322831694506287]
Isolated Federated Learning (IsoFed) is a learning scheme specifically designed for semi-supervised federated learning (SSFL).
It circumvents the problem by avoiding simple averaging of supervised and semi-supervised client models together.
In particular, our training approach consists of two parts - (a) isolated aggregation of labeled and unlabeled client models, and (b) local self-supervised pretraining of isolated global models in all clients.
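As a hedged illustration of the isolated-aggregation idea in (a), the sketch below federated-averages labeled-client and unlabeled-client models within their own groups rather than together; the sample-count weighting and function names are assumptions, not taken from the paper.

```python
# Illustrative sketch of "isolated aggregation": labeled and unlabeled client
# models are averaged within their own group instead of into one global model.
# Weighting by sample count is an assumption.
import torch


def fedavg(state_dicts, weights):
    """Weighted average of client state_dicts (weights sum to 1)."""
    avg = {k: torch.zeros_like(v, dtype=torch.float32)
           for k, v in state_dicts[0].items()}
    for sd, w in zip(state_dicts, weights):
        for k in avg:
            avg[k] += w * sd[k].float()
    return avg


def isolated_aggregation(labeled_clients, unlabeled_clients):
    """Each entry is (state_dict, num_samples). Returns one isolated global
    model per group; these can then be pretrained locally on all clients."""
    def group_avg(group):
        total = sum(n for _, n in group)
        return fedavg([sd for sd, _ in group], [n / total for _, n in group])
    return group_avg(labeled_clients), group_avg(unlabeled_clients)
```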
arXiv Detail & Related papers (2023-10-28T20:41:41Z)
- Scale Federated Learning for Label Set Mismatch in Medical Image Classification [4.344828846048128]
Federated learning (FL) has been introduced to the healthcare domain as a decentralized learning paradigm.
Most previous studies have assumed that every client holds an identical label set.
We propose the framework FedLSM to solve the problem of Label Set Mismatch.
arXiv Detail & Related papers (2023-04-14T05:32:01Z)
- Learning Discriminative Representation via Metric Learning for Imbalanced Medical Image Classification [52.94051907952536]
We propose embedding metric learning into the first stage of the two-stage framework, specifically to help the feature extractor learn more discriminative feature representations.
Experiments mainly on three medical image datasets show that the proposed approach consistently outperforms existing one-stage and two-stage approaches.
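A generic way to realize this is to add a metric-learning term to the stage-one objective. The sketch below uses a standard triplet margin loss as a stand-in, which may differ from the paper's exact formulation; the class names and weight `alpha` are assumptions.

```python
# Generic example of folding a metric-learning term (a standard triplet
# margin loss here) into first-stage training so the feature extractor is
# pushed toward more discriminative embeddings.
import torch
import torch.nn as nn


class StageOneLoss(nn.Module):
    def __init__(self, margin=0.3, alpha=1.0):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()
        self.triplet = nn.TripletMarginLoss(margin=margin)
        self.alpha = alpha

    def forward(self, logits, embeddings, labels, anchor_idx, pos_idx, neg_idx):
        # classification loss + metric loss on pre-mined triplets
        cls = self.ce(logits, labels)
        metric = self.triplet(embeddings[anchor_idx],
                              embeddings[pos_idx],
                              embeddings[neg_idx])
        return cls + self.alpha * metric
```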
arXiv Detail & Related papers (2022-07-14T14:57:01Z)
- Test-time Adaptation with Calibration of Medical Image Classification Nets for Label Distribution Shift [24.988087560120366]
We propose the first method to tackle label shift for medical image classification.
Our method effectively adapts a model learned from a single training label distribution to an arbitrary unknown test label distribution.
We validate our method on two important medical image classification tasks including liver fibrosis staging and COVID-19 severity prediction.
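One common recipe for this setting, shown below purely as an assumed illustration and not necessarily the paper's calibration scheme, is to estimate the unknown test label prior from unlabeled predictions (an EM-style estimate in the spirit of Saerens et al.) and re-weight the classifier's probabilities by the prior ratio.

```python
# Generic prior-correction sketch for label distribution shift: rescale class
# probabilities by the ratio of estimated test priors to training priors.
import numpy as np


def prior_corrected_probs(probs, train_prior, test_prior, eps=1e-8):
    """probs: (N, C) softmax outputs from the source-trained model."""
    w = np.asarray(test_prior) / (np.asarray(train_prior) + eps)
    adjusted = probs * w                        # re-weight by prior ratio
    return adjusted / adjusted.sum(axis=1, keepdims=True)


def estimate_test_prior(probs, train_prior, n_iter=50):
    """EM-style estimate of the unknown test label distribution using only
    unlabeled test predictions."""
    prior = np.asarray(train_prior, dtype=float)
    for _ in range(n_iter):
        post = prior_corrected_probs(probs, train_prior, prior)
        prior = post.mean(axis=0)
    return prior
```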
arXiv Detail & Related papers (2022-07-02T07:55:23Z)
- Learning Underrepresented Classes from Decentralized Partially Labeled Medical Images [11.500033811355062]
Using decentralized data for federated training is one promising emerging research direction for alleviating data scarcity in the medical domain.
In this paper, we consider a practical yet under-explored problem, where underrepresented classes have only a few labeled instances available.
We show that standard federated learning approaches fail to learn robust multi-label classifiers with extreme class imbalance.
arXiv Detail & Related papers (2022-06-30T15:28:18Z)
- ACPL: Anti-curriculum Pseudo-labelling for Semi-supervised Medical Image Classification [22.5935068122522]
We propose a new SSL algorithm called anti-curriculum pseudo-labelling (ACPL).
ACPL introduces novel techniques to select informative unlabelled samples, improving training balance and allowing the model to work for both multi-label and multi-class problems.
Our method outperforms previous SOTA SSL methods on both datasets.
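A simplified stand-in for the anti-curriculum selection step is sketched below: unlabelled samples are ranked by an informativeness proxy (low prediction confidence here; ACPL's actual measure is density-based) and the most informative ones are pseudo-labelled first, the opposite of confidence-based curricula. The selection ratio and threshold are assumptions.

```python
# Simplified stand-in for anti-curriculum sample selection in multi-label SSL.
import torch


def select_anti_curriculum(probs, ratio=0.2):
    """probs: (N, C) sigmoid outputs for unlabelled multi-label samples.
    Returns indices of the least confident (most informative) samples."""
    # distance of each prediction from a confident 0/1 decision (high = easy)
    confidence = torch.abs(probs - 0.5).mean(dim=1)
    k = max(1, int(ratio * probs.size(0)))
    return torch.topk(confidence, k, largest=False).indices


def pseudo_labels(probs, idx, threshold=0.5):
    """Binarize predictions of the selected samples into pseudo-labels."""
    return (probs[idx] > threshold).float()
```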
arXiv Detail & Related papers (2021-11-25T05:31:52Z)
- Federated Semi-supervised Medical Image Classification via Inter-client Relation Matching [58.26619456972598]
Federated learning (FL) has emerged with increasing popularity as a way for distributed medical institutions to collaboratively train deep networks.
This paper studies a practical yet challenging FL problem, named Federated Semi-supervised Learning (FSSL).
We present a novel approach for this problem, which improves over traditional consistency regularization mechanism with a new inter-client relation matching scheme.
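The sketch below only illustrates the flavor of relation matching under assumed constructions (the paper's exact scheme differs): a class-relation matrix implied by an unlabeled client's soft predictions is pulled toward a reference matrix estimated from labeled clients, alongside the usual consistency term.

```python
# Rough illustration of matching inter-class relation structure between
# labeled and unlabeled clients. This only sketches the flavor of the scheme.
import torch
import torch.nn.functional as F


def class_relation_matrix(soft_preds):
    """soft_preds: (N, C) probabilities; returns a (C, C) cosine-similarity
    matrix between per-class prediction patterns."""
    per_class = F.normalize(soft_preds.t(), dim=1)      # (C, N)
    return per_class @ per_class.t()


def relation_matching_loss(unlabeled_preds, labeled_relation):
    """Penalize disagreement between the unlabeled client's relation matrix
    and the reference relation matrix estimated on labeled clients."""
    return F.mse_loss(class_relation_matrix(unlabeled_preds),
                      labeled_relation.detach())
```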
arXiv Detail & Related papers (2021-06-16T07:58:00Z)
- Joint Visual and Temporal Consistency for Unsupervised Domain Adaptive Person Re-Identification [64.37745443119942]
This paper jointly enforces visual and temporal consistency in the combination of a local one-hot classification and a global multi-class classification.
Experimental results on three large-scale ReID datasets demonstrate the superiority of the proposed method in both unsupervised and unsupervised domain adaptive ReID tasks.
arXiv Detail & Related papers (2020-07-21T14:31:27Z)
- Collaborative Unsupervised Domain Adaptation for Medical Image Diagnosis [102.40869566439514]
We seek to exploit rich labeled data from relevant domains to help the learning in the target task via Unsupervised Domain Adaptation (UDA).
Unlike most UDA methods that rely on clean labeled data or assume samples are equally transferable, we innovatively propose a Collaborative Unsupervised Domain Adaptation algorithm.
We theoretically analyze the generalization performance of the proposed method, and also empirically evaluate it on both medical and general images.
arXiv Detail & Related papers (2020-07-05T11:49:17Z)
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
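The consistency-under-perturbation part can be sketched as a mean-teacher-style loss between a student and an EMA teacher on two perturbed views, as below; the paper's additional relation-driven term over sample similarities is omitted, and the momentum value is an assumption.

```python
# Minimal consistency-under-perturbation sketch (mean-teacher style): the
# unlabeled loss encourages a student and an EMA teacher to agree on two
# perturbed views of the same image.
import torch
import torch.nn.functional as F


@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    """Exponential moving average update of the teacher's parameters."""
    for tp, sp in zip(teacher.parameters(), student.parameters()):
        tp.mul_(momentum).add_(sp, alpha=1.0 - momentum)


def consistency_loss(student, teacher, view_a, view_b):
    """Unsupervised loss: student and teacher should agree across perturbations."""
    student_out = torch.sigmoid(student(view_a))
    with torch.no_grad():
        teacher_out = torch.sigmoid(teacher(view_b))
    return F.mse_loss(student_out, teacher_out)
```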
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
- Unsupervised Person Re-identification via Multi-label Classification [55.65870468861157]
This paper formulates unsupervised person ReID as a multi-label classification task to progressively seek true labels.
Our method starts by assigning each person image with a single-class label, then evolves to multi-label classification by leveraging the updated ReID model for label prediction.
To boost the ReID model training efficiency in multi-label classification, we propose the memory-based multi-label classification loss (MMCL).
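A highly simplified take on a memory-based multi-label loss is sketched below under assumed hyperparameters: similarity scores between a query feature and a memory bank of per-image features serve as multi-label logits trained with BCE; the actual MMCL adds hard-negative mining and other details not shown here.

```python
# Highly simplified memory-based multi-label classification loss sketch.
import torch
import torch.nn.functional as F


class MemoryMultiLabelLoss(torch.nn.Module):
    def __init__(self, num_images, feat_dim, momentum=0.5, scale=10.0):
        super().__init__()
        self.register_buffer("memory", torch.zeros(num_images, feat_dim))
        self.momentum = momentum
        self.scale = scale

    def forward(self, feats, indices, multi_labels):
        """feats: (B, D) L2-normalized features; indices: (B,) image ids;
        multi_labels: (B, num_images) 0/1 targets over memory entries."""
        logits = self.scale * feats @ self.memory.t()          # (B, num_images)
        loss = F.binary_cross_entropy_with_logits(logits, multi_labels)
        with torch.no_grad():                                   # update memory bank
            self.memory[indices] = F.normalize(
                self.momentum * self.memory[indices]
                + (1 - self.momentum) * feats, dim=1)
        return loss
```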
arXiv Detail & Related papers (2020-04-20T12:13:43Z)