Labeled-to-Unlabeled Distribution Alignment for Partially-Supervised Multi-Organ Medical Image Segmentation
- URL: http://arxiv.org/abs/2409.03228v1
- Date: Thu, 5 Sep 2024 03:55:37 GMT
- Title: Labeled-to-Unlabeled Distribution Alignment for Partially-Supervised Multi-Organ Medical Image Segmentation
- Authors: Xixi Jiang, Dong Zhang, Xiang Li, Kangyi Liu, Kwang-Ting Cheng, Xin Yang
- Abstract summary: Partially-supervised multi-organ medical image segmentation aims to develop a unified semantic segmentation model from multiple partially-labeled datasets.
We propose a labeled-to-unlabeled distribution alignment framework that aligns feature distributions and enhances discriminative capability.
Our method outperforms the state-of-the-art partially-supervised methods by a considerable margin.
- Score: 30.953837550398884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Partially-supervised multi-organ medical image segmentation aims to develop a unified semantic segmentation model by utilizing multiple partially-labeled datasets, with each dataset providing labels for a single class of organs. However, the limited availability of labeled foreground organs and the absence of supervision to distinguish unlabeled foreground organs from the background pose a significant challenge, which leads to a distribution mismatch between labeled and unlabeled pixels. Although existing pseudo-labeling methods can be employed to learn from both labeled and unlabeled pixels, they are prone to performance degradation in this task, as they rely on the assumption that labeled and unlabeled pixels have the same distribution. In this paper, to address the problem of distribution mismatch, we propose a labeled-to-unlabeled distribution alignment (LTUDA) framework that aligns feature distributions and enhances discriminative capability. Specifically, we introduce a cross-set data augmentation strategy, which performs region-level mixing between labeled and unlabeled organs to reduce distribution discrepancy and enrich the training set. Besides, we propose a prototype-based distribution alignment method that implicitly reduces intra-class variation and increases the separation between the unlabeled foreground and background. This can be achieved by encouraging consistency between the outputs of two prototype classifiers and a linear classifier. Extensive experimental results on the AbdomenCT-1K dataset and a union of four benchmark datasets (including LiTS, MSD-Spleen, KiTS, and NIH82) demonstrate that our method outperforms the state-of-the-art partially-supervised methods by a considerable margin, and even surpasses the fully-supervised methods. The source code is publicly available at https://github.com/xjiangmed/LTUDA.
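As a rough illustration of the two ideas described in the abstract, the PyTorch sketch below shows (i) CutMix-style region-level mixing between a labeled and an unlabeled image and (ii) a consistency term between a linear classifier and two prototype classifiers. Function names, tensor shapes, and the mean-squared consistency measure are illustrative assumptions, not the authors' implementation (see the linked repository for that).
```python
# Minimal sketch of cross-set region-level mixing (CutMix-style) between a
# labeled and an unlabeled image. Assumes 2D tensors: images (B, C, H, W),
# (pseudo-)label maps (B, H, W). Names and shapes are illustrative only.
import torch


def rand_box(h, w, ratio=0.5):
    """Sample a random rectangular region covering roughly `ratio` of each side."""
    cut_h, cut_w = int(h * ratio), int(w * ratio)
    y = torch.randint(0, h - cut_h + 1, (1,)).item()
    x = torch.randint(0, w - cut_w + 1, (1,)).item()
    return y, x, cut_h, cut_w


def cross_set_mix(img_l, lbl_l, img_u, pseudo_u):
    """Paste a region of the labeled image into the unlabeled image and mix
    the corresponding label maps the same way (reducing the labeled/unlabeled
    distribution gap at the input level)."""
    _, _, h, w = img_l.shape
    y, x, ch, cw = rand_box(h, w)
    mixed_img = img_u.clone()
    mixed_lbl = pseudo_u.clone()
    mixed_img[:, :, y:y + ch, x:x + cw] = img_l[:, :, y:y + ch, x:x + cw]
    mixed_lbl[:, y:y + ch, x:x + cw] = lbl_l[:, y:y + ch, x:x + cw]
    return mixed_img, mixed_lbl


def consistency_loss(p_linear, p_proto1, p_proto2):
    """Encourage agreement between a linear classifier's softmax output and
    two prototype classifiers' outputs (all of shape (B, K, H, W)). MSE is
    used here only as a placeholder consistency measure."""
    return (torch.mean((p_linear - p_proto1) ** 2)
            + torch.mean((p_linear - p_proto2) ** 2))
```
A natural way to use these pieces is to train on the mixed image/label pairs with an ordinary supervised loss while adding the consistency term on unlabeled pixels, but the exact loss composition in LTUDA should be taken from the authors' code.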
Related papers
- Towards Modality-agnostic Label-efficient Segmentation with Entropy-Regularized Distribution Alignment [62.73503467108322]
This topic is widely studied in 3D point cloud segmentation due to the difficulty of annotating point clouds densely.
Pseudo-labels have been widely employed to facilitate training with limited ground-truth labels.
However, existing pseudo-labeling approaches can suffer heavily from the noise and variations in unlabeled data.
We propose a novel learning strategy to regularize the pseudo-labels generated for training, thus effectively narrowing the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2024-08-29T13:31:15Z)
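The snippet below is a minimal sketch of the general recipe summarized for the entropy-regularized distribution alignment paper above: penalize the gap between pseudo-labels and model predictions and keep predictions low-entropy. The specific loss form (a KL term plus an entropy term) and the weight `lam` are assumptions, not the paper's exact objective.
```python
# Sketch: align soft pseudo-labels with model predictions via KL divergence
# and penalize high-entropy predictions. Shapes: logits (N, K);
# pseudo_labels (N, K) soft probabilities (e.g. a teacher softmax, so no
# exact zeros). Loss form is an assumption.
import torch
import torch.nn.functional as F


def entropy_regularized_alignment(logits, pseudo_labels, lam=0.1):
    log_p = F.log_softmax(logits, dim=1)
    p = log_p.exp()
    # KL(pseudo || prediction): narrows the gap between pseudo-labels and predictions.
    align = F.kl_div(log_p, pseudo_labels, reduction="batchmean")
    # Entropy of predictions: encourages confident (low-entropy) outputs.
    entropy = -(p * log_p).sum(dim=1).mean()
    return align + lam * entropy
```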
- GuidedNet: Semi-Supervised Multi-Organ Segmentation via Labeled Data Guide Unlabeled Data [4.775846640214768]
Semi-supervised multi-organ medical image segmentation aids physicians in improving disease diagnosis and treatment planning.
A key concept is that voxel features from labeled and unlabeled data that lie close to each other in the feature space are more likely to belong to the same class.
We introduce a Knowledge Transfer Cross Pseudo-label Supervision (KT-CPS) strategy, which leverages the prior knowledge obtained from the labeled data to guide the training of the unlabeled data.
arXiv Detail & Related papers (2024-08-09T07:46:01Z)
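A bare-bones sketch of cross pseudo-label supervision between two segmentation networks, with a class-weight argument standing in for the prior knowledge transferred from labeled data. This is an assumed simplification of GuidedNet's KT-CPS strategy, not its actual implementation.
```python
# Sketch of cross pseudo-label supervision on unlabeled data: each network is
# trained against the other's hard pseudo-labels. `class_weights` (size K) is
# a placeholder for prior knowledge derived from labeled data, e.g. class
# frequencies. Logits: (B, K, H, W).
import torch
import torch.nn.functional as F


def cross_pseudo_supervision(logits_a, logits_b, class_weights=None):
    pseudo_a = logits_a.argmax(dim=1).detach()  # hard labels from net A
    pseudo_b = logits_b.argmax(dim=1).detach()  # hard labels from net B
    loss_a = F.cross_entropy(logits_a, pseudo_b, weight=class_weights)
    loss_b = F.cross_entropy(logits_b, pseudo_a, weight=class_weights)
    return loss_a + loss_b
```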
- Inaccurate Label Distribution Learning with Dependency Noise [52.08553913094809]
We introduce the Dependent Noise-based Inaccurate Label Distribution Learning (DN-ILDL) framework to tackle the challenges posed by noise in label distribution learning.
We show that DN-ILDL effectively addresses the ILDL problem and outperforms existing LDL methods.
arXiv Detail & Related papers (2024-05-26T07:58:07Z)
- All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2023-05-25T08:19:31Z)
- Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective [89.5370481649529]
We propose a label distribution perspective for PU learning in this paper.
Motivated by this perspective, we pursue consistency between the predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-12-06T07:38:29Z)
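The core of Dist-PU's label-distribution view can be sketched as matching the average predicted positive rate on unlabeled data to the known class prior. The code below shows only that consistency idea; the full Dist-PU objective contains further terms, and the names here are illustrative.
```python
# Sketch: label-distribution consistency for PU learning. The mean predicted
# positive probability over unlabeled samples is pushed toward the known
# class prior `pi`. This captures only the core idea, not the full objective.
import torch
import torch.nn.functional as F


def label_distribution_consistency(logits_pos, logits_unl, pi):
    p_pos = torch.sigmoid(logits_pos)   # labeled positive samples
    p_unl = torch.sigmoid(logits_unl)   # unlabeled samples
    sup = F.binary_cross_entropy(p_pos, torch.ones_like(p_pos))
    # Match the predicted label distribution on unlabeled data to the prior pi.
    dist = (p_unl.mean() - pi).abs()
    return sup + dist
```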
- Label distribution learning via label correlation grid [9.340734188957727]
We propose a Label Correlation Grid (LCG) to model the uncertainty of label relationships.
Our network learns the LCG to accurately estimate the label distribution for each instance.
arXiv Detail & Related papers (2022-10-15T03:58:15Z)
- Label Distribution Learning via Implicit Distribution Representation [12.402054374952485]
In this paper, we introduce the implicit distribution in the label distribution learning framework to characterize the uncertainty of each label value.
Specifically, we use deep implicit representation learning to construct a label distribution matrix with Gaussian prior constraints.
Each row component of the label distribution matrix is transformed into a standard label distribution form by using the self-attention algorithm.
arXiv Detail & Related papers (2022-09-28T04:13:53Z)
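As a loose illustration of the last step summarized for the implicit distribution representation paper above (transforming each row of the label distribution matrix into a standard label distribution via self-attention), here is a small PyTorch module; layer sizes and the Gaussian-prior construction are assumptions and are not taken from the paper.
```python
# Sketch: turn each row of a label matrix into a normalized label
# distribution using self-attention over the labels followed by softmax.
# Dimensions are assumptions; the Gaussian-prior step is omitted.
import torch
import torch.nn as nn


class RowDistributionHead(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.embed = nn.Linear(1, dim)                      # embed each label value
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, label_matrix):                        # (batch, num_labels)
        x = self.embed(label_matrix.unsqueeze(-1))          # (batch, num_labels, dim)
        x, _ = self.attn(x, x, x)                           # self-attention over labels
        logits = self.score(x).squeeze(-1)                  # (batch, num_labels)
        return torch.softmax(logits, dim=-1)                # each row sums to 1
```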
- Mutual- and Self- Prototype Alignment for Semi-supervised Medical Image Segmentation [5.426994893258762]
We propose a mutual- and self- prototype alignment (MSPA) framework to better utilize the unlabeled data.
Specifically, mutual-prototype alignment enhances the information interaction between labeled and unlabeled data.
Our method also outperforms seven state-of-the-art semi-supervised segmentation methods on all three datasets.
arXiv Detail & Related papers (2022-06-03T02:59:22Z)
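A compact sketch of MSPA's mutual-prototype idea: compute class prototypes from labeled features and pull each unlabeled feature toward the prototype of its predicted class. The cosine-distance loss and helper names below are illustrative assumptions; the self-prototype branch is omitted.
```python
# Sketch: class prototypes from labeled features, plus an alignment loss that
# pulls unlabeled features toward their nearest (predicted-class) prototype.
# Distance choice and names are assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F


def class_prototypes(feats_l, labels_l, num_classes):
    """feats_l: (N, D) labeled features; labels_l: (N,) integer labels."""
    protos = torch.zeros(num_classes, feats_l.size(1), device=feats_l.device)
    for k in range(num_classes):
        mask = labels_l == k
        if mask.any():
            protos[k] = feats_l[mask].mean(dim=0)
    return F.normalize(protos, dim=1)


def prototype_alignment_loss(feats_u, protos):
    """Pull each unlabeled feature toward its nearest prototype (cosine distance)."""
    feats_u = F.normalize(feats_u, dim=1)
    sim = feats_u @ protos.t()                  # cosine similarity, (M, K)
    nearest = sim.argmax(dim=1)
    return (1.0 - sim[torch.arange(feats_u.size(0)), nearest]).mean()
```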
- Distribution-Aware Semantics-Oriented Pseudo-label for Imbalanced Semi-Supervised Learning [80.05441565830726]
This paper addresses imbalanced semi-supervised learning, where heavily biased pseudo-labels can harm the model performance.
Motivated by this observation, we propose a general pseudo-labeling framework to address the bias.
We term the novel pseudo-labeling framework for imbalanced SSL as Distribution-Aware Semantics-Oriented (DASO) Pseudo-label.
arXiv Detail & Related papers (2021-06-10T11:58:25Z)
- Instance-Aware Graph Convolutional Network for Multi-Label Classification [55.131166957803345]
Graph convolutional networks (GCNs) have effectively boosted the multi-label image recognition task.
We propose an instance-aware graph convolutional neural network (IA-GCN) framework for multi-label classification.
arXiv Detail & Related papers (2020-08-19T12:49:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.