Segment Together: A Versatile Paradigm for Semi-Supervised Medical Image
Segmentation
- URL: http://arxiv.org/abs/2311.11686v1
- Date: Mon, 20 Nov 2023 11:35:52 GMT
- Authors: Qingjie Zeng, Yutong Xie, Zilin Lu, Mengkang Lu, Yicheng Wu and Yong
Xia
- Abstract summary: Annotation scarcity has become a major obstacle for training powerful deep-learning models for medical image segmentation.
We introduce a Versatile Semi-supervised framework (VerSemi) to exploit more unlabeled data for semi-supervised medical image segmentation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Annotation scarcity has become a major obstacle for training powerful
deep-learning models for medical image segmentation, restricting their
deployment in clinical scenarios. To address it, semi-supervised learning by
exploiting abundant unlabeled data is highly desirable to boost the model
training. However, most existing works still focus on limited medical tasks and
underestimate the potential of learning across diverse tasks and multiple
datasets. Therefore, in this paper, we introduce a Versatile
Semi-supervised framework (VerSemi), offering a new perspective
that integrates various tasks into a unified model with a broad label space, to
exploit more unlabeled data for semi-supervised medical image segmentation.
Specifically, we introduce a dynamic task-prompted design to segment various
targets from different datasets. Next, this unified model is used to identify
the foreground regions from all labeled data, to capture cross-dataset
semantics. Particularly, we create a synthetic task with a cutmix strategy to
augment foreground targets within the expanded label space. To effectively
utilize unlabeled data, we introduce a consistency constraint. This involves
aligning aggregated predictions from various tasks with those from the
synthetic task, further guiding the model in accurately segmenting foreground
regions during training. We evaluated our VerSemi model on four public
benchmarking datasets. Extensive experiments demonstrated that VerSemi can
consistently outperform the second-best method by a large margin (e.g., an
average 2.69% Dice gain on four datasets), setting new SOTA performance for
semi-supervised medical image segmentation. The code will be released.
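The cutmix-based synthetic task and the consistency constraint described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the pixelwise-max aggregation over task predictions, and the MSE form of the constraint are all assumptions made for the sketch.

```python
import numpy as np

def cutmix_pair(x_a, x_b, patch_frac=0.5, rng=None):
    """Paste a random square patch of x_b into x_a (a minimal CutMix).
    x_a, x_b: (H, W) arrays. Returns the mixed image and the binary mask."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = x_a.shape
    ph, pw = int(h * patch_frac), int(w * patch_frac)
    top = rng.integers(0, h - ph + 1)
    left = rng.integers(0, w - pw + 1)
    mask = np.zeros((h, w))
    mask[top:top + ph, left:left + pw] = 1.0
    # Outside the patch keep x_a; inside the patch take x_b.
    return x_a * (1 - mask) + x_b * mask, mask

def consistency_loss(task_preds, synthetic_pred):
    """Mean-squared error between the pixelwise maximum over per-task
    foreground maps and the synthetic-task map -- one plausible reading
    of aligning aggregated predictions with the synthetic task."""
    aggregated = np.max(np.stack(task_preds), axis=0)
    return float(np.mean((aggregated - synthetic_pred) ** 2))
```

Under this reading, the synthetic image mixes foreground targets from two unlabeled inputs, and the loss pushes the union of the per-task foreground predictions to agree with the prediction on the mixed image.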
Related papers
- Pseudo Label-Guided Data Fusion and Output Consistency for
Semi-Supervised Medical Image Segmentation [9.93871075239635]
We propose the PLGDF framework, which builds upon the mean teacher network for segmenting medical images with less annotation.
We propose a novel pseudo-label utilization scheme, which combines labeled and unlabeled data to augment the dataset effectively.
Our framework yields superior performance compared to six state-of-the-art semi-supervised learning methods.
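PLGDF builds on the mean teacher network, in which the teacher's weights track the student's by an exponential moving average. A minimal sketch of that standard update rule (the alpha value and the flat list-of-weights representation are illustrative):

```python
def ema_update(teacher, student, alpha=0.99):
    """Exponential moving average update of teacher weights from the
    student, per the standard mean-teacher rule: t <- a*t + (1-a)*s."""
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher, student)]
```

In practice this update is applied to every parameter tensor after each student optimization step, so the teacher provides smoothed, more stable targets for the unlabeled data.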
arXiv Detail & Related papers (2023-11-17T06:36:43Z) - AIMS: All-Inclusive Multi-Level Segmentation [93.5041381700744]
We propose a new task, All-Inclusive Multi-Level (AIMS), which segments visual regions into three levels: part, entity, and relation.
We also build a unified AIMS model through multi-dataset multi-task training to address the two major challenges of annotation inconsistency and task correlation.
arXiv Detail & Related papers (2023-05-28T16:28:49Z) - MSeg: A Composite Dataset for Multi-domain Semantic Segmentation [100.17755160696939]
We present MSeg, a composite dataset that unifies semantic segmentation datasets from different domains.
We reconcile the taxonomies and bring the pixel-level annotations into alignment by relabeling more than 220,000 object masks in more than 80,000 images.
A model trained on MSeg ranks first on the WildDash-v1 leaderboard for robust semantic segmentation, with no exposure to WildDash data during training.
arXiv Detail & Related papers (2021-12-27T16:16:35Z) - Improving Semi-Supervised and Domain-Adaptive Semantic Segmentation with
Self-Supervised Depth Estimation [94.16816278191477]
We present a framework for semi-supervised and domain-adaptive semantic segmentation.
It is enhanced by self-supervised monocular depth estimation trained only on unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset.
arXiv Detail & Related papers (2021-08-28T01:33:38Z) - Semantic Segmentation with Generative Models: Semi-Supervised Learning
and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z) - Towards Robust Partially Supervised Multi-Structure Medical Image
Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z) - Uncertainty-aware multi-view co-training for semi-supervised medical
image segmentation and domain adaptation [35.33425093398756]
Unlabeled data is much easier to acquire than well-annotated data.
We propose uncertainty-aware multi-view co-training for medical image segmentation.
Our framework is capable of efficiently utilizing unlabeled data for better performance.
arXiv Detail & Related papers (2020-06-28T22:04:54Z) - ATSO: Asynchronous Teacher-Student Optimization for Semi-Supervised
Medical Image Segmentation [99.90263375737362]
We propose ATSO, an asynchronous version of teacher-student optimization.
ATSO partitions the unlabeled data into two subsets and alternately uses one subset to fine-tune the model and updates the label on the other subset.
We evaluate ATSO on two popular medical image segmentation datasets and show its superior performance in various semi-supervised settings.
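The ATSO alternation described above can be sketched schematically. Here `fine_tune` and `pseudo_label` are hypothetical placeholders for the real training and relabeling steps; the point is only the asynchronous role swap between the two unlabeled subsets.

```python
def atso_round(model, subset_a, subset_b, fine_tune, pseudo_label):
    """One asynchronous round: fine-tune on one subset's current labels
    while refreshing the other subset's labels, then swap roles."""
    model = fine_tune(model, subset_a)        # train on subset A
    subset_b = pseudo_label(model, subset_b)  # relabel subset B with the new model
    model = fine_tune(model, subset_b)        # train on the refreshed subset B
    subset_a = pseudo_label(model, subset_a)  # relabel subset A for the next round
    return model, subset_a, subset_b
```

Because each subset is relabeled by a model that was not just trained on it, the scheme avoids the self-reinforcing label noise of synchronous self-training.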
arXiv Detail & Related papers (2020-06-24T04:05:12Z) - Semi-supervised few-shot learning for medical image segmentation [21.349705243254423]
Recent attempts to alleviate the need for large annotated datasets have developed training strategies under the few-shot learning paradigm.
We propose a novel few-shot learning framework for semantic segmentation, where unlabeled images are also made available at each episode.
We show that including unlabeled surrogate tasks in the episodic training leads to more powerful feature representations.
arXiv Detail & Related papers (2020-03-18T20:37:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.