MS-MT: Multi-Scale Mean Teacher with Contrastive Unpaired Translation
for Cross-Modality Vestibular Schwannoma and Cochlea Segmentation
- URL: http://arxiv.org/abs/2303.15826v1
- Date: Tue, 28 Mar 2023 08:55:00 GMT
- Authors: Ziyuan Zhao, Kaixin Xu, Huai Zhe Yeo, Xulei Yang, and Cuntai Guan
- Abstract summary: Unsupervised domain adaptation (UDA) methods have achieved promising cross-modality segmentation performance.
We propose a multi-scale self-ensembling based UDA framework for automatic segmentation of two key brain structures.
Our method demonstrates promising segmentation performance, with mean Dice scores of 83.8% and 81.4% for the VS and cochlea, respectively.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain shift has been a long-standing issue for medical image segmentation.
Recently, unsupervised domain adaptation (UDA) methods have achieved promising
cross-modality segmentation performance by distilling knowledge from a
label-rich source domain to a target domain without labels. In this work, we
propose a multi-scale self-ensembling based UDA framework for automatic
segmentation of two key brain structures, i.e., the Vestibular Schwannoma (VS) and
Cochlea on high-resolution T2 images. First, a segmentation-enhanced
contrastive unpaired image translation module is designed for image-level
domain adaptation from source T1 to target T2. Next, multi-scale deep
supervision and consistency regularization are introduced to a mean teacher
network for self-ensemble learning to further close the domain gap.
Furthermore, self-training and intensity augmentation techniques are utilized
to mitigate label scarcity and boost cross-modality segmentation performance.
Our method demonstrates promising segmentation performance with a mean Dice
score of 83.8% and 81.4% and an average asymmetric surface distance (ASSD) of
0.55 mm and 0.26 mm for the VS and Cochlea, respectively, in the validation
phase of the crossMoDA 2022 challenge.
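The mean-teacher component described above keeps a teacher network whose weights are an exponential moving average (EMA) of the student's, and penalizes disagreement between the two networks' predictions at several decoder scales. A minimal NumPy sketch of those two operations follows; the function names, the MSE form of the consistency term, and the EMA decay rate are illustrative assumptions, not the authors' code:

```python
import numpy as np

def ema_update(teacher_params, student_params, alpha=0.99):
    # Teacher weights track an exponential moving average of the student's;
    # alpha is the EMA decay rate (an assumed, typical value).
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

def multiscale_consistency(student_preds, teacher_preds):
    # Consistency regularization: mean squared error between student and
    # teacher predictions, averaged over all supervised decoder scales.
    losses = [np.mean((s - t) ** 2)
              for s, t in zip(student_preds, teacher_preds)]
    return float(np.mean(losses))
```

In such a scheme, the student is updated by gradient descent on the supervised loss plus the consistency term, while the teacher is refreshed by the EMA update after every training step.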
Related papers
- A 3D Multi-Style Cross-Modality Segmentation Framework for Segmenting
Vestibular Schwannoma and Cochlea [2.2209333405427585]
The crossMoDA2023 challenge aims to segment the vestibular schwannoma and cochlea regions of unlabeled hrT2 scans by leveraging labeled ceT1 scans.
We propose a 3D multi-style cross-modality segmentation framework for the challenge, including the multi-style translation and self-training segmentation phases.
Our method produces promising results and achieves the mean DSC values of 72.78% and 80.64% on the crossMoDA2023 validation dataset.
arXiv Detail & Related papers (2023-11-20T07:29:33Z) - Self-Supervised Correction Learning for Semi-Supervised Biomedical Image
Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z) - I2F: A Unified Image-to-Feature Approach for Domain Adaptive Semantic
Segmentation [55.633859439375044]
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task that frees people from heavy annotation work.
The key idea for tackling this problem is to perform image-level and feature-level adaptation jointly.
This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation.
arXiv Detail & Related papers (2023-01-03T15:19:48Z) - Unsupervised Cross-Modality Domain Adaptation for Vestibular Schwannoma
Segmentation and Koos Grade Prediction based on Semi-Supervised Contrastive
Learning [1.5953825926551457]
An unsupervised domain adaptation framework is proposed for cross-modality vestibular schwannoma (VS) and cochlea segmentation and Koos grade prediction.
An nnU-Net model is utilized for VS and cochlea segmentation, while a semi-supervised contrastive learning pre-training approach is employed to improve model performance.
Our method received rank 4 in Task 1 with a mean Dice score of 0.8394 and rank 2 in Task 2 with a Macro-Average Mean Square Error of 0.3941.
arXiv Detail & Related papers (2022-10-09T13:12:20Z) - Enhancing Data Diversity for Self-training Based Unsupervised
Cross-modality Vestibular Schwannoma and Cochlea Segmentation [7.327638441664658]
We present an approach for VS and cochlea segmentation in an unsupervised domain adaptation setting.
We first develop a cross-site cross-modality unpaired image translation strategy to enrich the diversity of the synthesized data.
Then, we devise a rule-based offline augmentation technique to further minimize the domain gap.
Lastly, we adopt a self-configuring segmentation framework empowered by self-training to obtain the final results.
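Self-training steps like the one above typically turn the current model's predictions on unlabeled target scans into pseudo-labels, discarding low-confidence voxels before retraining. A hedged NumPy sketch of that filtering (the threshold value and the ignore-index convention are assumptions for illustration):

```python
import numpy as np

def pseudo_labels(probs, threshold=0.9):
    # probs: per-voxel class probabilities, shape (..., num_classes).
    labels = probs.argmax(axis=-1)          # hard pseudo-label per voxel
    confidence = probs.max(axis=-1)         # top-class probability
    labels[confidence < threshold] = -1     # mark uncertain voxels as ignore
    return labels
```

The segmentation network would then be retrained on the pseudo-labeled target scans, skipping voxels marked with the ignore index.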
arXiv Detail & Related papers (2022-09-23T22:26:51Z) - CrossMoDA 2021 challenge: Benchmark of Cross-Modality Domain Adaptation
techniques for Vestibular Schwannoma and Cochlea Segmentation [43.372468317829004]
Domain Adaptation (DA) has recently raised strong interest in the medical imaging community.
To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised.
CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality DA.
arXiv Detail & Related papers (2022-01-08T14:00:34Z) - Unsupervised Domain Adaptation in Semantic Segmentation Based on Pixel
Alignment and Self-Training [13.63879014979211]
Pixel alignment transfers ceT1 scans to hrT2 modality, helping to reduce domain shift in the training segmentation model.
Self-training adapts the decision boundary of the segmentation network to fit the distribution of hrT2 scans.
arXiv Detail & Related papers (2021-09-29T06:56:57Z) - DSP: Dual Soft-Paste for Unsupervised Domain Adaptive Semantic
Segmentation [97.74059510314554]
Unsupervised domain adaptation (UDA) for semantic segmentation aims to adapt a segmentation model trained on the labeled source domain to the unlabeled target domain.
Existing methods try to learn domain invariant features while suffering from large domain gaps.
We propose a novel Dual Soft-Paste (DSP) method in this paper.
arXiv Detail & Related papers (2021-07-20T16:22:40Z) - Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z) - Margin Preserving Self-paced Contrastive Learning Towards Domain
Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin preserving self-paced contrastive Learning model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z) - Unsupervised Bidirectional Cross-Modality Adaptation via Deeply
Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework, named Synergistic Image and Feature Alignment (SIFA).
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.