Exploiting Unlabeled Structures through Task Consistency Training for Versatile Medical Image Segmentation
- URL: http://arxiv.org/abs/2509.04732v1
- Date: Fri, 05 Sep 2025 01:04:32 GMT
- Title: Exploiting Unlabeled Structures through Task Consistency Training for Versatile Medical Image Segmentation
- Authors: Shengqian Zhu, Jiafei Wu, Xiaogang Xu, Chengrong Yu, Ying Song, Zhang Yi, Guangjun Li, Junjie Hu
- Abstract summary: We introduce a Task Consistency Training (TCT) framework to address class imbalance without requiring extra models. To avoid error propagation from low-consistency, potentially noisy data, we propose a filtering strategy to exclude such data. Experiments on eight abdominal datasets from diverse clinical sites demonstrate our approach's effectiveness.
- Score: 24.25178585285867
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Versatile medical image segmentation (VMIS) targets the segmentation of multiple classes, while obtaining full annotations for all classes is often impractical due to the time and labor required. Leveraging partially labeled datasets (PLDs) presents a promising alternative; however, current VMIS approaches face significant class imbalance due to the unequal category distribution in PLDs. Existing methods attempt to address this by generating pseudo-full labels. Nevertheless, these typically require additional models and often result in potential performance degradation from label noise. In this work, we introduce a Task Consistency Training (TCT) framework to address class imbalance without requiring extra models. TCT includes a backbone network with a main segmentation head (MSH) for multi-channel predictions and multiple auxiliary task heads (ATHs) for task-specific predictions. By enforcing a consistency constraint between the MSH and ATH predictions, TCT effectively utilizes unlabeled anatomical structures. To avoid error propagation from low-consistency, potentially noisy data, we propose a filtering strategy to exclude such data. Additionally, we introduce a unified auxiliary uncertainty-weighted loss (UAUWL) to mitigate segmentation quality declines caused by the dominance of specific tasks. Extensive experiments on eight abdominal datasets from diverse clinical sites demonstrate our approach's effectiveness.
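The abstract describes three mechanisms: a per-class consistency constraint between the main segmentation head (MSH) and the auxiliary task heads (ATHs), a filter that discards low-consistency data, and a unified auxiliary uncertainty-weighted loss (UAUWL). A minimal numpy sketch of how these pieces could fit together is shown below; the function names, the MSE consistency measure, and the Kendall-style uncertainty weighting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def task_consistency_losses(msh_logits, ath_logits):
    """Per-class consistency between the MSH's multi-channel prediction and
    each ATH's binary foreground prediction.
    msh_logits: (C+1, H, W), channel 0 = background.
    ath_logits: (C, H, W), one channel per auxiliary task head.
    Returns one consistency loss per class (here: mean squared error)."""
    msh_prob = softmax(msh_logits, axis=0)[1:]       # (C, H, W) foreground probs
    ath_prob = 1.0 / (1.0 + np.exp(-ath_logits))     # sigmoid per task head
    return ((msh_prob - ath_prob) ** 2).mean(axis=(1, 2))

def filter_low_consistency(losses, tau):
    """Exclude potentially noisy classes whose MSH/ATH predictions disagree:
    keep only those with a consistency loss below the threshold tau."""
    keep = losses < tau
    return keep, (losses[keep].mean() if keep.any() else 0.0)

def uncertainty_weighted_loss(task_losses, log_vars):
    """One plausible form of the unified auxiliary uncertainty-weighted loss,
    using homoscedastic (Kendall-style) weighting so that no single task's
    loss dominates: tasks with high learned uncertainty contribute less."""
    return float(np.sum(np.exp(-log_vars) * task_losses + log_vars))
```

In a full training loop the `log_vars` would be learnable parameters updated alongside the network, and the filtering threshold `tau` a hyperparameter; here both are shown as plain inputs for clarity.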
Related papers
- Partially Supervised Unpaired Multi-Modal Learning for Label-Efficient Medical Image Segmentation [53.723234136550055]
We term the new learning paradigm Partially Supervised Unpaired Multi-Modal Learning (PSUMML). We propose a novel Decomposed partial class adaptation with snapshot Ensembled Self-Training (DEST) framework for it. Our framework consists of a compact segmentation network with modality-specific normalization layers for learning with partially labeled unpaired multi-modal data.
arXiv Detail & Related papers (2025-03-07T07:22:42Z) - PULASki: Learning inter-rater variability using statistical distances to improve probabilistic segmentation [35.34932609930401]
This work proposes the PULASki method as a computationally efficient generative tool for biomedical image segmentation. It captures variability in expert annotations, even in small datasets. Our experiments are also the first to present a comparative study of the computationally feasible segmentation of complex geometries using 3D patches and the traditional use of 2D slices.
arXiv Detail & Related papers (2023-12-25T10:31:22Z) - Cross-head mutual Mean-Teaching for semi-supervised medical image segmentation [6.738522094694818]
Semi-supervised medical image segmentation (SSMIS) has witnessed substantial advancements by leveraging limited labeled data and abundant unlabeled data.
Existing state-of-the-art (SOTA) methods encounter challenges in accurately predicting labels for the unlabeled data.
We propose a novel Cross-head mutual mean-teaching Network (CMMT-Net) incorporating strong-weak data augmentation.
arXiv Detail & Related papers (2023-10-08T09:13:04Z) - Multi-Scale Cross Contrastive Learning for Semi-Supervised Medical Image Segmentation [14.536384387956527]
We develop a novel Multi-Scale Cross Supervised Contrastive Learning framework to segment structures in medical images.
Our approach contrasts multi-scale features based on ground-truth and cross-predicted labels, in order to extract robust feature representations.
It outperforms state-of-the-art semi-supervised methods by more than 3.0% in Dice.
arXiv Detail & Related papers (2023-06-25T16:55:32Z) - Cross-supervised Dual Classifiers for Semi-supervised Medical Image Segmentation [10.18427897663732]
Semi-supervised medical image segmentation offers a promising solution for large-scale medical image analysis.
This paper proposes a cross-supervised learning framework based on dual classifiers (DC-Net)
Experiments on the LA and Pancreas-CT datasets illustrate that DC-Net outperforms other state-of-the-art methods for semi-supervised segmentation.
arXiv Detail & Related papers (2023-05-25T16:23:39Z) - Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - PoissonSeg: Semi-Supervised Few-Shot Medical Image Segmentation via Poisson Learning [0.505645669728935]
Few-shot Semantic Segmentation (FSS) is a promising strategy for breaking the data-scarcity deadlock in deep learning.
An FSS model still requires sufficient pixel-level annotated classes for training to avoid overfitting.
We propose a novel semi-supervised FSS framework for medical image segmentation.
arXiv Detail & Related papers (2021-08-26T10:24:04Z) - A Simple Baseline for Semi-supervised Semantic Segmentation with Strong Data Augmentation [74.8791451327354]
We propose a simple yet effective semi-supervised learning framework for semantic segmentation.
A set of simple design and training techniques can collectively improve the performance of semi-supervised semantic segmentation significantly.
Our method achieves state-of-the-art results in the semi-supervised settings on the Cityscapes and Pascal VOC datasets.
arXiv Detail & Related papers (2021-04-15T06:01:39Z) - Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z) - Revisiting LSTM Networks for Semi-Supervised Text Classification via Mixed Objective Function [106.69643619725652]
We develop a training strategy that allows even a simple BiLSTM model, when trained with cross-entropy loss, to achieve competitive results.
We report state-of-the-art results for text classification task on several benchmark datasets.
arXiv Detail & Related papers (2020-09-08T21:55:22Z) - Brain Stroke Lesion Segmentation Using Consistent Perception Generative Adversarial Network [22.444373004248217]
A Consistent Perception Generative Adversarial Network (CPGAN) is proposed for semi-supervised stroke lesion segmentation.
A similarity connection module (SCM) is designed to capture the information of multi-scale features.
An assistant network is constructed to encourage the discriminator to learn meaningful feature representations.
arXiv Detail & Related papers (2020-08-30T07:42:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.