Dual-Task Mutual Learning for Semi-Supervised Medical Image Segmentation
- URL: http://arxiv.org/abs/2103.04708v1
- Date: Mon, 8 Mar 2021 12:38:23 GMT
- Title: Dual-Task Mutual Learning for Semi-Supervised Medical Image Segmentation
- Authors: Yichi Zhang, Jicong Zhang
- Abstract summary: We propose a novel dual-task mutual learning framework for semi-supervised medical image segmentation.
Our framework can be formulated as an integration of two individual segmentation networks based on two tasks.
By jointly learning the segmentation probability maps and signed distance maps of targets, our framework can enforce the geometric shape constraint and learn more reliable information.
- Score: 12.940103904327655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of deep learning methods in medical image segmentation tasks
usually requires a large amount of labeled data. However, obtaining reliable
annotations is expensive and time-consuming. Semi-supervised learning has
attracted much attention in medical image segmentation by taking advantage
of unlabeled data, which is much easier to acquire. In this paper, we propose a
novel dual-task mutual learning framework for semi-supervised medical image
segmentation. Our framework can be formulated as an integration of two
individual segmentation networks based on two tasks: learning region-based
shape constraint and learning boundary-based surface mismatch. Different from
the one-way transfer between teacher and student networks, an ensemble of
dual-task students can learn collaboratively and implicitly explore useful
knowledge from each other during the training process. By jointly learning the
segmentation probability maps and signed distance maps of targets, our
framework can enforce the geometric shape constraint and learn more reliable
information. Experimental results demonstrate that our method achieves
performance gains by leveraging unlabeled data and outperforms the
state-of-the-art semi-supervised segmentation methods.
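As a rough illustration of the two ingredients described in the abstract, the sketch below shows how a binary mask can be converted into a signed distance map (SDM) and how two dual-task students might exchange predictions as consistency targets on unlabeled data. This is a minimal sketch assuming PyTorch and SciPy; the function names, the scale factor k, and the exact form of the consistency terms are illustrative assumptions, not the paper's released implementation.

```python
# Illustrative sketch only (not the authors' code): SDM targets and a
# mutual-learning consistency term between two dual-task students.
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt


def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Discrete SDM of a binary mask: negative inside the object, positive
    outside (regression target for the boundary-based task)."""
    if mask.sum() == 0:
        return np.zeros_like(mask, dtype=np.float32)
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(1 - mask)
    return (outside - inside).astype(np.float32)


def mutual_consistency_loss(seg_a, sdm_a, seg_b, sdm_b, k: float = 5.0):
    """Consistency loss for student A; student B's outputs act as fixed
    (detached) targets. Call again with the arguments swapped to update B.

    seg_*: per-pixel foreground probabilities in [0, 1].
    sdm_*: predicted SDMs, assumed normalized to [-1, 1] (e.g. a tanh head).
    """
    seg_target = seg_b.detach()
    sdm_target = sdm_b.detach()
    # Same-task, cross-network consistency.
    prob_term = F.mse_loss(seg_a, seg_target)
    sdm_term = F.mse_loss(sdm_a, sdm_target)
    # Cross-task consistency: tanh(k * (0.5 - p)) is a smooth SDM-like
    # surrogate of a probability map (+1 far outside, -1 far inside).
    cross_term = F.mse_loss(sdm_a, torch.tanh(k * (0.5 - seg_target)))
    return prob_term + sdm_term + cross_term
```

On labeled images, each student's two heads would additionally be trained with an ordinary supervised segmentation loss and an SDM regression loss against signed_distance_map(ground_truth_mask).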
Related papers
- Auxiliary Tasks Enhanced Dual-affinity Learning for Weakly Supervised
Semantic Segmentation [79.05949524349005]
We propose AuxSegNet+, a weakly supervised auxiliary learning framework to explore the rich information from saliency maps.
We also propose a cross-task affinity learning mechanism to learn pixel-level affinities from the saliency and segmentation feature maps.
arXiv Detail & Related papers (2024-03-02T10:03:21Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image
Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Histogram of Oriented Gradients Meet Deep Learning: A Novel Multi-task
Deep Network for Medical Image Semantic Segmentation [18.066680957993494]
We present our novel deep multi-task learning method for medical image segmentation.
We generate the pseudo-labels of an auxiliary task in an unsupervised manner.
Our method consistently improves performance compared to the counterpart method.
arXiv Detail & Related papers (2022-04-02T23:50:29Z)
- Boundary-aware Information Maximization for Self-supervised Medical
Image Segmentation [13.828282295918628]
We propose a novel unsupervised pre-training framework that avoids the drawback of contrastive learning.
Experimental results on two benchmark medical segmentation datasets reveal our method's effectiveness when few annotated images are available.
arXiv Detail & Related papers (2022-02-04T20:18:00Z)
- Semi-supervised Contrastive Learning for Label-efficient Medical Image
Segmentation [11.935891325600952]
We propose a supervised local contrastive loss that leverages limited pixel-wise annotation to force pixels with the same label to cluster together in the embedding space.
With different amounts of labeled data, our methods consistently outperform the state-of-the-art contrast-based methods and other semi-supervised learning techniques.
arXiv Detail & Related papers (2021-09-15T16:23:48Z)
- Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised
Semantic Segmentation [88.49669148290306]
We propose a novel weakly supervised multi-task framework called AuxSegNet to leverage saliency detection and multi-label image classification as auxiliary tasks.
Inspired by their similar structured semantics, we also propose to learn a cross-task global pixel-level affinity map from the saliency and segmentation representations.
The learned cross-task affinity can be used to refine saliency predictions and propagate CAM maps to provide improved pseudo labels for both tasks.
arXiv Detail & Related papers (2021-07-25T11:39:58Z)
- Towards Cross-modality Medical Image Segmentation with Online Mutual
Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge (a generic sketch of such a mutual distillation objective is given after this list).
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
- Semi-supervised Medical Image Segmentation through Dual-task Consistency [18.18484640332254]
We propose a novel dual-task deep network that jointly predicts a pixel-wise segmentation map and a geometry-aware level set representation of the target.
Our method substantially improves performance by incorporating unlabeled data.
Our framework outperforms the state-of-the-art semi-supervised medical image segmentation methods.
arXiv Detail & Related papers (2020-09-09T17:49:21Z)
- Dual-Teacher: Integrating Intra-domain and Inter-domain Teachers for
Annotation-efficient Cardiac Segmentation [65.81546955181781]
We propose a novel semi-supervised domain adaptation approach, namely Dual-Teacher.
The student model learns from unlabeled target data and labeled source data through two teacher models.
We demonstrate that our approach is able to concurrently utilize unlabeled data and cross-modality data with superior performance.
arXiv Detail & Related papers (2020-07-13T10:00:44Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
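Several of the entries above (notably the online Mutual Knowledge Distillation paper) rely on two networks teaching each other by matching their softened predictions. A generic form of such an objective is a symmetric KL divergence between the two networks' per-pixel class distributions. The sketch below assumes PyTorch; its names and temperature value are illustrative assumptions, not any paper's released code.

```python
# Generic mutual (online) distillation term between two segmentation networks.
import torch
import torch.nn.functional as F


def mutual_distillation_loss(logits_a: torch.Tensor,
                             logits_b: torch.Tensor,
                             temperature: float = 2.0) -> torch.Tensor:
    """Symmetric KL divergence between the softened per-pixel class
    distributions predicted by two networks for the same (or aligned) input.

    logits_*: raw class scores of shape (N, C, H, W).
    """
    log_p_a = F.log_softmax(logits_a / temperature, dim=1)
    log_p_b = F.log_softmax(logits_b / temperature, dim=1)
    p_a, p_b = log_p_a.exp(), log_p_b.exp()
    # F.kl_div(input, target) computes KL(target || input) with input given as
    # log-probabilities; "batchmean" sums over classes and pixels and divides
    # by the batch size.
    kl_ab = F.kl_div(log_p_a, p_b, reduction="batchmean")
    kl_ba = F.kl_div(log_p_b, p_a, reduction="batchmean")
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return (temperature ** 2) * (kl_ab + kl_ba)
```

In practice such a term is added to each network's supervised loss; depending on the scheme, gradients flow into both networks at once or into one network at a time with the other's output detached.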
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.