Unsupervised Cross-Modality Domain Adaptation for Segmenting Vestibular
Schwannoma and Cochlea with Data Augmentation and Model Ensemble
- URL: http://arxiv.org/abs/2109.12169v1
- Date: Fri, 24 Sep 2021 20:10:05 GMT
- Title: Unsupervised Cross-Modality Domain Adaptation for Segmenting Vestibular
Schwannoma and Cochlea with Data Augmentation and Model Ensemble
- Authors: Hao Li, Dewei Hu, Qibang Zhu, Kathleen E. Larson, Huahong Zhang, and
Ipek Oguz
- Abstract summary: In this paper, we propose an unsupervised learning framework to segment the vestibular schwannoma (VS) and the cochlea.
Our framework leverages information from contrast-enhanced T1-weighted (ceT1-w) MRIs and their labels, and produces segmentations for T2-weighted MRIs without any labels in the target domain.
Our method is easy to build and produces promising segmentations, with mean Dice scores of 0.7930 and 0.7432 for the VS and cochlea, respectively, on the validation set.
- Score: 4.942327155020771
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Magnetic resonance images (MRIs) are widely used to quantify the vestibular
schwannoma (VS) and the cochlea. Recently, deep learning methods have shown
state-of-the-art performance for segmenting these structures. However, training
segmentation models may require manual labels in the target domain, which are
expensive and time-consuming to obtain. To overcome this problem, domain
adaptation is an effective way to leverage information from a source domain to
obtain accurate segmentations without requiring manual labels in the target
domain. In this paper, we propose an unsupervised learning framework to segment
the VS and cochlea. Our framework leverages information from contrast-enhanced
T1-weighted (ceT1-w) MRIs and their labels, and produces segmentations for
T2-weighted MRIs without any labels in the target domain. We first apply a
generator to achieve image-to-image translation. Next, we ensemble the outputs
of different models to obtain the final segmentations. To cope with MRIs from
different sites/scanners, we apply various 'online' augmentations during
training to better capture geometric variability as well as variability in
image appearance and quality. Our method is easy to build and produces
promising segmentations, with mean Dice scores of 0.7930 and 0.7432 for the VS
and cochlea, respectively, on the validation set.
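To make the two-stage design concrete, below is a minimal sketch of the idea: translate labeled ceT1-w images into pseudo-T2 images with a generator, then train a segmentation network on the pseudo-T2 images using the source labels. The tiny architectures are stand-ins, and a pre-trained CycleGAN-style generator is an assumption; the abstract only states that a generator performs the image-to-image translation.

```python
# Minimal sketch (PyTorch): train a segmentation network on generator-translated
# pseudo-T2 images. Architectures are illustrative stand-ins, not the paper's.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for the image-to-image translation generator (hypothetical)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

generator = TinyGenerator()        # assumed pre-trained on unpaired ceT1/T2 data
seg_net = nn.Conv2d(1, 3, 1)       # stand-in for a real segmentation network
criterion = nn.CrossEntropyLoss()  # 3 classes: background, VS, cochlea
optimizer = torch.optim.Adam(seg_net.parameters(), lr=1e-4)

cet1_batch = torch.randn(2, 1, 64, 64)     # labeled source (ceT1-w) images
labels = torch.randint(0, 3, (2, 64, 64))  # their manual labels

with torch.no_grad():
    pseudo_t2 = generator(cet1_batch)      # source -> target appearance

loss = criterion(seg_net(pseudo_t2), labels)  # labels carry over unchanged
loss.backward()
optimizer.step()
```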
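The 'online' augmentations can likewise be sketched as random geometric and intensity perturbations drawn fresh for every training sample; the transforms and parameter ranges below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of 'online' augmentation: a new random geometric and intensity
# perturbation is drawn each time a sample is used. Ranges are assumptions.
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

def augment(image: np.ndarray) -> np.ndarray:
    # Geometric variability: small random rotation (for segmentation, the same
    # rotation must also be applied to the label map, with order=0).
    angle = rng.uniform(-10, 10)
    image = rotate(image, angle, reshape=False, order=1, mode="nearest")
    # Appearance/quality variability: random gamma shift and Gaussian noise.
    gamma = rng.uniform(0.8, 1.2)
    image = np.clip(image, 0, None) ** gamma
    image = image + rng.normal(0, 0.02, image.shape)
    return image.astype(np.float32)

slice_2d = rng.random((64, 64)).astype(np.float32)
augmented = augment(slice_2d)  # a different variant at every training step
```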
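Finally, a minimal sketch of the ensembling step and of the Dice score used for evaluation. Averaging per-class softmax probabilities across models is one common choice and an assumption here; the abstract only says that the outputs of different models are ensembled.

```python
# Minimal sketch: average softmax outputs across an ensemble, then score the
# fused prediction with the per-class Dice coefficient.
import torch

def ensemble_predict(models, image):
    """Average per-class probabilities over models, then take the argmax."""
    probs = torch.stack([m(image).softmax(dim=1) for m in models])
    return probs.mean(dim=0).argmax(dim=1)  # (B, H, W) label map

def dice_score(pred, target, cls):
    """Dice = 2|P & T| / (|P| + |T|) for one class label."""
    p, t = (pred == cls), (target == cls)
    inter = (p & t).sum().item()
    return 2.0 * inter / max(p.sum().item() + t.sum().item(), 1)

models = [torch.nn.Conv2d(1, 3, 1) for _ in range(3)]  # stand-in ensemble
image = torch.randn(1, 1, 64, 64)
pred = ensemble_predict(models, image)
truth = torch.randint(0, 3, (1, 64, 64))
print(dice_score(pred, truth, cls=1), dice_score(pred, truth, cls=2))
```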
Related papers
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework (DEC-Seg) for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- MS-MT: Multi-Scale Mean Teacher with Contrastive Unpaired Translation for Cross-Modality Vestibular Schwannoma and Cochlea Segmentation [11.100048696665496]
Unsupervised domain adaptation (UDA) methods have achieved promising cross-modality segmentation performance.
We propose a multi-scale self-ensembling based UDA framework for automatic segmentation of two key brain structures.
Our method demonstrates promising segmentation performance with mean Dice scores of 83.8% and 81.4%.
arXiv Detail & Related papers (2023-03-28T08:55:00Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Semi-Supervised Domain Generalization for Cardiac Magnetic Resonance Image Segmentation with High Quality Pseudo Labels [8.283424744148258]
We present a domain generalization method for semi-supervised medical segmentation.
Our main goal is to improve the quality of pseudo labels under extreme MRI analysis conditions with various domains.
Our approach consistently generates accurate segmentation results of cardiac magnetic resonance images with different respiratory motions.
arXiv Detail & Related papers (2022-09-30T12:57:41Z)
- COSMOS: Cross-Modality Unsupervised Domain Adaptation for 3D Medical Image Segmentation based on Target-aware Domain Translation and Iterative Self-Training [6.513315990156929]
We propose a self-training based unsupervised domain adaptation framework for 3D medical image segmentation named COSMOS.
Our target-aware contrast conversion network translates annotated source-domain T1 MRIs to pseudo-T2 MRIs to enable segmentation training on the target domain.
COSMOS won 1st place in the Cross-Modality Domain Adaptation (crossMoDA) challenge held in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021).
arXiv Detail & Related papers (2022-03-30T18:00:07Z)
- Unsupervised Domain Adaptation with Contrastive Learning for OCT Segmentation [49.59567529191423]
We propose a novel semi-supervised learning framework for segmentation of volumetric images from new unlabeled domains.
We jointly use supervised and contrastive learning, also introducing a contrastive pairing scheme that leverages similarity between nearby slices in 3D.
arXiv Detail & Related papers (2022-03-07T19:02:26Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- ICMSC: Intra- and Cross-modality Semantic Consistency for Unsupervised Domain Adaptation on Hip Joint Bone Segmentation [1.4148874598036136]
We propose intra- and cross-modality semantic consistency (ICMSC) for UDA.
Our proposed method achieves an average Dice of 81.61% on the acetabulum and 88.16% on the proximal femur.
Without UDA, a model trained on CT for hip joint bone segmentation does not transfer to MRI, yielding a Dice score of almost zero.
arXiv Detail & Related papers (2020-12-23T09:58:38Z)
- Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework named Synergistic Image and Feature Alignment (SIFA).
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.