Enhancing Data Diversity for Self-training Based Unsupervised
Cross-modality Vestibular Schwannoma and Cochlea Segmentation
- URL: http://arxiv.org/abs/2209.11879v1
- Date: Fri, 23 Sep 2022 22:26:51 GMT
- Title: Enhancing Data Diversity for Self-training Based Unsupervised
Cross-modality Vestibular Schwannoma and Cochlea Segmentation
- Authors: Han Liu, Yubo Fan, Benoit M. Dawant
- Abstract summary: We present an approach for VS and cochlea segmentation in an unsupervised domain adaptation setting.
We first develop a cross-site cross-modality unpaired image translation strategy to enrich the diversity of the synthesized data.
Then, we devise a rule-based offline augmentation technique to further minimize the domain gap.
Lastly, we adopt a self-configuring segmentation framework empowered by self-training to obtain the final results.
- Score: 7.327638441664658
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic segmentation of vestibular schwannoma (VS) and the cochlea from
magnetic resonance imaging (MRI) can facilitate VS treatment planning.
Unsupervised segmentation methods have shown promising results without
requiring the time-consuming and laborious manual labeling process. In this
paper, we present an approach for VS and cochlea segmentation in an
unsupervised domain adaptation setting. Specifically, we first develop a
cross-site cross-modality unpaired image translation strategy to enrich the
diversity of the synthesized data. Then, we devise a rule-based offline
augmentation technique to further minimize the domain gap. Lastly, we adopt a
self-configuring segmentation framework empowered by self-training to obtain
the final results. On the CrossMoDA 2022 validation leaderboard, our method has
achieved competitive VS and cochlea segmentation performance with mean dice
scores of 0.8178 $\pm$ 0.0803 and 0.8433 $\pm$ 0.0293, respectively.
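As a point of reference for the reported numbers, the per-structure scores above are Dice coefficients computed case by case and then averaged. Below is a minimal NumPy sketch of that evaluation; the label convention (1 for VS, 2 for cochlea) is an assumption for illustration, not taken from the challenge data.

```python
import numpy as np

def dice(pred, gt, label):
    """Dice coefficient for one structure (label value) in one case."""
    p = (pred == label)
    g = (gt == label)
    denom = p.sum() + g.sum()
    if denom == 0:            # structure absent in both masks
        return 1.0
    return 2.0 * np.logical_and(p, g).sum() / denom

def mean_dice(preds, gts, label):
    """Mean and standard deviation of the Dice score over all cases."""
    scores = np.array([dice(p, g, label) for p, g in zip(preds, gts)])
    return scores.mean(), scores.std()

# Example usage, assuming label 1 = vestibular schwannoma, label 2 = cochlea:
# vs_mean, vs_std = mean_dice(pred_masks, gt_masks, label=1)
# co_mean, co_std = mean_dice(pred_masks, gt_masks, label=2)
```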
Related papers
- MS-MT: Multi-Scale Mean Teacher with Contrastive Unpaired Translation
for Cross-Modality Vestibular Schwannoma and Cochlea Segmentation [11.100048696665496]
Unsupervised domain adaptation (UDA) methods have achieved promising cross-modality segmentation performance.
We propose a multi-scale self-ensembling based UDA framework for automatic segmentation of two key brain structures.
Our method demonstrates promising segmentation performance with mean Dice scores of 83.8% and 81.4%.
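Mean-teacher self-ensembling, as referenced in the title, keeps a teacher network whose weights are an exponential moving average of the student's. A minimal PyTorch sketch of that update is shown below; the decay value is illustrative rather than taken from the paper.

```python
import torch

@torch.no_grad()
def update_teacher(student, teacher, decay=0.99):
    """EMA update: teacher = decay * teacher + (1 - decay) * student."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)
```

The student is then trained with a consistency loss against the teacher's predictions on unlabeled target images; MS-MT applies such consistency at multiple scales.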
arXiv Detail & Related papers (2023-03-28T08:55:00Z)
- IDA: Informed Domain Adaptive Semantic Segmentation [51.12107564372869]
We propose an Informed Domain Adaptation (IDA) model, a self-training framework that mixes data based on class-level segmentation performance.
In our IDA model, the class-level performance is tracked by an expected confidence score (ECS) and we then use a dynamic schedule to determine the mixing ratio for data in different domains.
Our proposed method outperforms the state-of-the-art UDA-SS method by a margin of 1.1 mIoU in the adaptation of GTA-V to Cityscapes and 0.9 mIoU in the adaptation of SYNTHIA to Cityscapes.
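The exact ECS formula and mixing schedule are not given in the summary, so the following is only a rough sketch of the bookkeeping such a self-training framework could use: a running per-class confidence estimate and a weight that favours poorly performing classes when mixing data. The momentum update and the inverse-confidence weighting are assumptions.

```python
import torch

class ClassConfidenceTracker:
    """Tracks a running expected confidence score (ECS) per class and
    derives a sampling weight that favours poorly performing classes."""

    def __init__(self, num_classes, momentum=0.9):
        self.ecs = torch.ones(num_classes)      # start optimistic
        self.momentum = momentum

    @torch.no_grad()
    def update(self, probs, preds):
        # probs: (N, C, H, W) softmax outputs; preds: (N, H, W) argmax labels
        conf = probs.max(dim=1).values           # (N, H, W) per-pixel confidence
        for c in range(len(self.ecs)):
            mask = preds == c
            if mask.any():
                mean_conf = conf[mask].mean()
                self.ecs[c] = self.momentum * self.ecs[c] + (1 - self.momentum) * mean_conf

    def mixing_weights(self):
        # Lower-confidence classes get a larger share of the mixed samples.
        w = 1.0 - self.ecs
        return w / w.sum().clamp(min=1e-8)
```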
arXiv Detail & Related papers (2023-03-05T18:16:34Z)
- Unsupervised Cross-Modality Domain Adaptation for Vestibular Schwannoma
Segmentation and Koos Grade Prediction based on Semi-Supervised Contrastive
Learning [1.5953825926551457]
An unsupervised domain adaptation framework is proposed for cross-modality vestibular schwannoma (VS) and cochlea segmentation and Koos grade prediction.
An nnU-Net model is utilized for VS and cochlea segmentation, while a semi-supervised contrastive learning pre-training approach is employed to improve model performance.
Our method ranked 4th in Task 1 with a mean Dice score of 0.8394 and 2nd in Task 2 with a macro-averaged mean squared error of 0.3941.
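The contrastive pre-training objective is not spelled out above; a common choice in this setting is an InfoNCE-style loss between embeddings of two augmented views of the same case, sketched generically below (not the paper's exact formulation).

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss between two batches of embeddings (two augmented views
    of the same cases). z1, z2: (N, D) feature vectors."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature           # (N, N) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)      # positives on the diagonal
```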
arXiv Detail & Related papers (2022-10-09T13:12:20Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial
Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain sufficient gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
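A hedged sketch of the kind of patch-level adversarial signal described above: a fully convolutional discriminator scores the segmentation map spatially, and on unlabeled images the segmentation network is trained to make those patch scores look "real". The network interfaces and loss combination here are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def adversarial_losses(seg_net, disc, labeled_img, label, unlabeled_img):
    """Patch-level adversarial training sketch.
    disc maps a softmax segmentation map to a (N, 1, h, w) confidence map."""
    # Supervised branch on labeled data
    pred_l = seg_net(labeled_img)                        # (N, C, H, W) logits
    loss_sup = F.cross_entropy(pred_l, label)

    # Discriminator: ground-truth one-hot maps are "real", predictions are "fake"
    one_hot = F.one_hot(label, pred_l.size(1)).permute(0, 3, 1, 2).float()
    d_real = disc(one_hot)
    d_fake = disc(pred_l.softmax(dim=1).detach())
    loss_d = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))

    # Adversarial branch on unlabeled data: fool the discriminator patch-wise
    pred_u = seg_net(unlabeled_img).softmax(dim=1)
    d_u = disc(pred_u)
    loss_adv = F.binary_cross_entropy_with_logits(d_u, torch.ones_like(d_u))

    return loss_sup, loss_d, loss_adv
```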
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- Unsupervised Domain Adaptation for Vestibular Schwannoma and Cochlea
Segmentation via Semi-supervised Learning and Label Fusion [10.456308424227053]
Methods to segment the vestibular schwannoma (VS) tumors and the cochlea from magnetic resonance imaging (MRI) are critical to VS treatment planning.
In this work, we aim to tackle the VS and cochlea segmentation problem in an unsupervised domain adaptation setting.
Our proposed method leverages both the image-level domain alignment to minimize the domain divergence and semi-supervised training to further boost the performance.
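Label fusion, as named in the title, typically combines the pseudo-labels produced by several trained models; a simple majority-voting sketch is shown below as one illustrative fusion rule, not necessarily the one used in the paper.

```python
import numpy as np

def fuse_labels(label_maps):
    """Majority-vote fusion of hard label maps from several models.
    label_maps: list of integer arrays with identical shape."""
    stacked = np.stack(label_maps, axis=0)             # (M, ...) votes per model
    num_classes = int(stacked.max()) + 1
    votes = np.stack([(stacked == c).sum(axis=0) for c in range(num_classes)], axis=0)
    return votes.argmax(axis=0)                        # per-voxel winning class
```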
arXiv Detail & Related papers (2022-01-25T22:01:04Z)
- Unsupervised Cross-Modality Domain Adaptation for Segmenting Vestibular
Schwannoma and Cochlea with Data Augmentation and Model Ensemble [4.942327155020771]
In this paper, we propose an unsupervised learning framework to segment the vestibular schwannoma and the cochlea.
Our framework leverages information from contrast-enhanced T1-weighted (ceT1-w) MRIs and their labels, and produces segmentations for T2-weighted MRIs without any labels in the target domain.
Our method is easy to build and produces promising segmentations, with mean Dice scores of 0.7930 and 0.7432 for the VS and cochlea, respectively, on the validation set.
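The model ensemble mentioned in the title is most commonly realised by averaging the softmax probability maps of the individual models before taking the argmax; the following is a minimal sketch of that rule, offered as an assumption rather than the paper's exact procedure.

```python
import torch

@torch.no_grad()
def ensemble_predict(models, image):
    """Average softmax probabilities over an ensemble of segmentation models."""
    probs = torch.stack([m(image).softmax(dim=1) for m in models]).mean(dim=0)
    return probs.argmax(dim=1)      # (N, H, W) fused hard segmentation
```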
arXiv Detail & Related papers (2021-09-24T20:10:05Z)
- Cross-Modality Domain Adaptation for Vestibular Schwannoma and Cochlea
Segmentation [5.701095097774121]
We propose to tackle the VS and cochlea segmentation problem in an unsupervised domain adaptation setting.
Our proposed method leverages both the image-level domain alignment to minimize the domain divergence and semi-supervised training to further boost the performance.
Our results on the challenge validation leaderboard show that our unsupervised method achieved promising VS and cochlea segmentation performance with a mean Dice score of 0.8261 $\pm$ 0.0416; the mean Dice score for the tumor alone is 0.8302 $\pm$ 0.0772.
arXiv Detail & Related papers (2021-09-13T19:24:15Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training
for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
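Adversarial self-training of this kind relies on pseudo-labels for the target domain; a common recipe keeps only high-confidence pixels and ignores the rest, roughly as sketched below (the confidence threshold and ignore index are illustrative values).

```python
import torch

IGNORE_INDEX = 255   # pixels excluded from the self-training loss

@torch.no_grad()
def make_pseudo_labels(model, target_image, threshold=0.9):
    """Keep only confident predictions as pseudo-labels for self-training."""
    probs = model(target_image).softmax(dim=1)         # (N, C, H, W)
    conf, pseudo = probs.max(dim=1)                    # per-pixel confidence / label
    pseudo[conf < threshold] = IGNORE_INDEX            # mask out uncertain pixels
    return pseudo
```

The resulting maps are then used with a cross-entropy loss configured with ignore_index=IGNORE_INDEX so that uncertain pixels contribute no gradient.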
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
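Bidirectional image synthesis between modalities is usually built on two generators constrained by cycle consistency; the sketch below shows that constraint, with the generator names G_st and G_ts as placeholders rather than names from the paper.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G_st, G_ts, x_source, x_target):
    """L1 cycle consistency: translating to the other modality and back
    should reproduce the input image."""
    loss_s = F.l1_loss(G_ts(G_st(x_source)), x_source)   # source -> target -> source
    loss_t = F.l1_loss(G_st(G_ts(x_target)), x_target)   # target -> source -> target
    return loss_s + loss_t
```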
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Margin Preserving Self-paced Contrastive Learning Towards Domain
Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin preserving self-paced contrastive learning (MPSCL) model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
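The semantic prototypes referred to above are typically per-class averages of pixel embeddings, recomputed as the (pseudo-)labels are refined; a minimal sketch of that computation follows, with shapes and helper names chosen for illustration.

```python
import torch

def class_prototypes(features, labels, num_classes):
    """Masked average of pixel embeddings per class.
    features: (N, D, H, W) embeddings; labels: (N, H, W) integer maps."""
    N, D, H, W = features.shape
    feats = features.permute(0, 2, 3, 1).reshape(-1, D)   # (N*H*W, D)
    labs = labels.reshape(-1)
    protos = torch.zeros(num_classes, D, device=features.device)
    for c in range(num_classes):
        mask = labs == c
        if mask.any():
            protos[c] = feats[mask].mean(dim=0)
    return protos                                          # (C, D) prototypes
```

A margin preserving contrastive loss then pulls each pixel embedding toward its own class prototype and away from the others.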
arXiv Detail & Related papers (2021-03-15T15:23:10Z)
- Unsupervised Intra-domain Adaptation for Semantic Segmentation through
Self-Supervision [73.76277367528657]
Convolutional neural network-based approaches have achieved remarkable progress in semantic segmentation.
To cope with the cost of collecting pixel-level annotations, automatically annotated data generated from graphic engines are used to train segmentation models.
We propose a two-step self-supervised domain adaptation approach to minimize the inter-domain and intra-domain gap together.
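One common way this line of work exposes the intra-domain gap is to rank target images by the mean entropy of their predictions and split them into an easy (confident) and a hard subset, with the easy split then supplying pseudo-labels for adapting to the hard one. The sketch below is illustrative only; the split fraction is an assumption.

```python
import torch

@torch.no_grad()
def entropy_rank(model, target_images, easy_fraction=0.67):
    """Rank target images by mean prediction entropy and split them into an
    'easy' (confident) and a 'hard' subset for intra-domain adaptation."""
    scores = []
    for img in target_images:                        # img: (1, C_in, H, W)
        probs = model(img).softmax(dim=1)
        entropy = -(probs * probs.clamp(min=1e-8).log()).sum(dim=1).mean()
        scores.append(entropy.item())
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    cut = int(len(order) * easy_fraction)
    return order[:cut], order[cut:]                  # easy indices, hard indices
```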
arXiv Detail & Related papers (2020-04-16T15:24:11Z)