Unsupervised Domain Adaptation for Vestibular Schwannoma and Cochlea
Segmentation via Semi-supervised Learning and Label Fusion
- URL: http://arxiv.org/abs/2201.10647v1
- Date: Tue, 25 Jan 2022 22:01:04 GMT
- Title: Unsupervised Domain Adaptation for Vestibular Schwannoma and Cochlea
Segmentation via Semi-supervised Learning and Label Fusion
- Authors: Han Liu, Yubo Fan, Can Cui, Dingjie Su, Andrew McNeil, Benoit M.
Dawant
- Abstract summary: Methods to segment the vestibular schwannoma (VS) tumors and the cochlea from magnetic resonance imaging (MRI) are critical to VS treatment planning.
In this work, we aim to tackle the VS and cochlea segmentation problem in an unsupervised domain adaptation setting.
Our proposed method leverages both the image-level domain alignment to minimize the domain divergence and semi-supervised training to further boost the performance.
- Score: 10.456308424227053
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic methods to segment the vestibular schwannoma (VS) tumors and the
cochlea from magnetic resonance imaging (MRI) are critical to VS treatment
planning. Although supervised methods have achieved satisfactory performance in
VS segmentation, they require full annotations by experts, which is laborious
and time-consuming. In this work, we aim to tackle the VS and cochlea
segmentation problem in an unsupervised domain adaptation setting. Our proposed
method leverages both the image-level domain alignment to minimize the domain
divergence and semi-supervised training to further boost the performance.
Furthermore, we propose to fuse the labels predicted from multiple models via
noisy label correction. In the MICCAI 2021 crossMoDA challenge, our results on
the final evaluation leaderboard showed that our proposed method has achieved
promising segmentation performance with mean dice score of 79.9% and 82.5% and
ASSD of 1.29 mm and 0.18 mm for VS tumor and cochlea, respectively. The cochlea
ASSD achieved by our method has outperformed all other competing methods as
well as the supervised nnU-Net.
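The abstract mentions fusing labels predicted by multiple models via noisy label correction; the paper's exact correction scheme is not reproduced here, so the following is only a minimal majority-voting sketch in NumPy (the function name `fuse_labels` and tie-breaking behavior are illustrative, not from the paper):

```python
import numpy as np

def fuse_labels(predictions: list[np.ndarray]) -> np.ndarray:
    """Fuse per-voxel label maps from several models by majority vote.

    predictions: list of integer label maps with identical shape.
    Returns the most frequent label at each voxel (ties broken by
    the lowest label id, as with np.argmax).
    """
    stacked = np.stack(predictions)  # (n_models, *volume_shape)
    n_labels = int(stacked.max()) + 1
    # Count the votes each label receives per voxel, then pick the winner.
    votes = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy "model outputs" over a 2x2 slice with labels {0, 1, 2}.
a = np.array([[0, 1], [2, 2]])
b = np.array([[0, 1], [1, 2]])
c = np.array([[0, 2], [2, 2]])
fused = fuse_labels([a, b, c])  # majority label at each pixel
```

A noisy-label-correction step, as described in the paper, would further refine such a fused map rather than trust the raw vote.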
Related papers
- MS-MT: Multi-Scale Mean Teacher with Contrastive Unpaired Translation
for Cross-Modality Vestibular Schwannoma and Cochlea Segmentation [11.100048696665496]
Unsupervised domain adaptation (UDA) methods have achieved promising cross-modality segmentation performance.
We propose a multi-scale self-ensembling based UDA framework for automatic segmentation of two key brain structures.
Our method demonstrates promising segmentation performance with a mean Dice score of 83.8% and 81.4%.
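The mean-teacher scheme this framework builds on maintains a teacher network whose weights are an exponential moving average (EMA) of the student's. A minimal NumPy sketch of that update (the decay value and function name are illustrative, not taken from the paper):

```python
import numpy as np

def ema_update(teacher_w, student_w, decay=0.99):
    """Mean-teacher step: teacher <- decay * teacher + (1 - decay) * student.

    Both arguments are lists of weight arrays; returns the updated teacher.
    """
    return [decay * t + (1.0 - decay) * s for t, s in zip(teacher_w, student_w)]

# Toy single-layer example: teacher starts at zeros, student at ones.
teacher = [np.zeros(3)]
student = [np.ones(3)]
teacher = ema_update(teacher, student, decay=0.9)
# With decay 0.9 the teacher moves 10% of the way toward the student.
```

In practice the update is applied after every student optimization step, so the teacher is a smoothed ensemble of recent students.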
arXiv Detail & Related papers (2023-03-28T08:55:00Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- ADPS: Asymmetric Distillation Post-Segmentation for Image Anomaly Detection [75.68023968735523]
Knowledge Distillation-based Anomaly Detection (KDAD) methods rely on the teacher-student paradigm to detect and segment anomalous regions.
We propose an innovative approach called Asymmetric Distillation Post-Segmentation (ADPS)
Our ADPS employs an asymmetric distillation paradigm that takes distinct forms of the same image as the input of the teacher-student networks.
We show that ADPS significantly improves the Average Precision (AP) metric by 9% and 20% on the MVTec AD and KolektorSDD2 datasets, respectively.
arXiv Detail & Related papers (2022-10-19T12:04:47Z)
- Unsupervised Cross-Modality Domain Adaptation for Vestibular Schwannoma Segmentation and Koos Grade Prediction based on Semi-Supervised Contrastive Learning [1.5953825926551457]
An unsupervised domain adaptation framework is proposed for cross-modality vestibular schwannoma (VS) and cochlea segmentation and Koos grade prediction.
An nnU-Net model is utilized for VS and cochlea segmentation, while a semi-supervised contrastive learning pre-training approach is employed to improve the model performance.
Our method received rank 4 in task1 with a mean Dice score of 0.8394 and rank 2 in task2 with Macro-Average Mean Square Error of 0.3941.
arXiv Detail & Related papers (2022-10-09T13:12:20Z)
- Enhancing Data Diversity for Self-training Based Unsupervised Cross-modality Vestibular Schwannoma and Cochlea Segmentation [7.327638441664658]
We present an approach for VS and cochlea segmentation in an unsupervised domain adaptation setting.
We first develop a cross-site cross-modality unpaired image translation strategy to enrich the diversity of the synthesized data.
Then, we devise a rule-based offline augmentation technique to further minimize the domain gap.
Lastly, we adopt a self-configuring segmentation framework empowered by self-training to obtain the final results.
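Self-training of the kind adopted here typically assigns pseudo-labels to unlabeled target images from the model's confident predictions and retrains on them. A minimal confidence-thresholding sketch in NumPy (the threshold value, the `-1` ignore marker, and the function name are assumptions for illustration):

```python
import numpy as np

def select_pseudo_labels(probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """From softmax probabilities of shape (n_classes, H, W), keep only
    pixels whose top-class probability reaches the threshold; the rest
    are marked -1 (ignored when retraining)."""
    labels = probs.argmax(axis=0)       # predicted class per pixel
    confidence = probs.max(axis=0)      # probability of that class
    labels[confidence < threshold] = -1
    return labels

# Toy two-class probability maps over a 2x2 image.
probs = np.array([[[0.95, 0.6], [0.2, 0.1]],
                  [[0.05, 0.4], [0.8, 0.9]]])
pseudo = select_pseudo_labels(probs, threshold=0.9)
# Only the two high-confidence pixels keep their labels.
```

The retraining pass then treats the surviving pseudo-labels as if they were ground truth for the target domain.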
arXiv Detail & Related papers (2022-09-23T22:26:51Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- Unsupervised Cross-Modality Domain Adaptation for Segmenting Vestibular Schwannoma and Cochlea with Data Augmentation and Model Ensemble [4.942327155020771]
In this paper, we propose an unsupervised learning framework to segment the vestibular schwannoma and the cochlea.
Our framework leverages information from contrast-enhanced T1-weighted (ceT1-w) MRIs and its labels, and produces segmentations for T2-weighted MRIs without any labels in the target domain.
Our method is easy to build and produces promising segmentations, with a mean Dice score of 0.7930 and 0.7432 for VS and cochlea respectively in the validation set.
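The mean Dice scores quoted here and throughout this list measure the voxel overlap between a predicted mask and the reference segmentation. A minimal binary-mask sketch (the function name is illustrative; multi-class Dice averages this over labels):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2 * |P intersect T| / (|P| + |T|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy example: the prediction covers two pixels, the truth one of them.
p = np.array([[1, 1], [0, 0]])
t = np.array([[1, 0], [0, 0]])
score = dice_score(p, t)  # 2 * 1 / (2 + 1), i.e. about 0.667
```

ASSD (average symmetric surface distance), the other metric reported, instead measures the mean distance between the two mask surfaces in millimeters.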
arXiv Detail & Related papers (2021-09-24T20:10:05Z)
- Cross-Modality Domain Adaptation for Vestibular Schwannoma and Cochlea Segmentation [5.701095097774121]
We propose to tackle the VS and cochlea segmentation problem in an unsupervised domain adaptation setting.
Our proposed method leverages both the image-level domain alignment to minimize the domain divergence and semi-supervised training to further boost the performance.
Our results on the challenge validation leaderboard showed that our unsupervised method has achieved promising VS and cochlea segmentation performance with a mean Dice score of 0.8261 ± 0.0416; the mean Dice value for the tumor is 0.8302 ± 0.0772.
arXiv Detail & Related papers (2021-09-13T19:24:15Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- UDALM: Unsupervised Domain Adaptation through Language Modeling [79.73916345178415]
We introduce UDALM, a fine-tuning procedure, using a mixed classification and Masked Language Model loss.
Our experiments show that the performance of models trained with the mixed loss scales with the amount of available target data, and the mixed loss can be effectively used as a stopping criterion.
Our method is evaluated on twelve domain pairs of the Amazon Reviews Sentiment dataset, yielding 91.74% accuracy, which is a 1.11% absolute improvement over the state-of-the-art.
arXiv Detail & Related papers (2021-04-14T19:05:01Z)
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.