Cross-Modality Domain Adaptation for Vestibular Schwannoma and Cochlea
Segmentation
- URL: http://arxiv.org/abs/2109.06274v1
- Date: Mon, 13 Sep 2021 19:24:15 GMT
- Title: Cross-Modality Domain Adaptation for Vestibular Schwannoma and Cochlea
Segmentation
- Authors: Han Liu, Yubo Fan, Can Cui, Dingjie Su, Andrew McNeil, and Benoit
M. Dawant
- Abstract summary: We propose to tackle the VS and cochlea segmentation problem in an unsupervised domain adaptation setting.
Our proposed method leverages both image-level domain alignment, to minimize the domain divergence, and semi-supervised training to further boost performance.
Our results on the challenge validation leaderboard show that our unsupervised method achieves promising VS and cochlea segmentation performance, with a mean Dice score of 0.8261 $\pm$ 0.0416; the mean Dice score for the tumor is 0.8302 $\pm$ 0.0772.
- Score: 5.701095097774121
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic methods to segment the vestibular schwannoma (VS) tumors and the
cochlea from magnetic resonance imaging (MRI) are critical to VS treatment
planning. Although supervised methods have achieved satisfactory performance in
VS segmentation, they require full annotations by experts, which is laborious
and time-consuming. In this work, we aim to tackle the VS and cochlea
segmentation problem in an unsupervised domain adaptation setting. Our proposed
method leverages both image-level domain alignment, to minimize the domain
divergence, and semi-supervised training to further boost performance.
Furthermore, we propose to fuse the labels predicted from multiple models via
noisy label correction. Our results on the challenge validation leaderboard
show that our unsupervised method achieves promising VS and cochlea
segmentation performance, with a mean Dice score of 0.8261 $\pm$ 0.0416; the
mean Dice score for the tumor is 0.8302 $\pm$ 0.0772. This is comparable to
weakly-supervised methods.
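The fusion and evaluation steps above can be illustrated with a minimal sketch: majority voting over per-model binary masks, followed by the Dice overlap that the leaderboard metric reports. This is a generic illustration under stated assumptions (the function names and the simple voting rule are ours, not the authors' noisy-label-correction implementation):

```python
import numpy as np

def fuse_labels(predictions):
    """Fuse binary segmentations from several models by majority vote.

    predictions: list of equally shaped {0, 1} arrays, one per model.
    Returns the voxel-wise majority mask.
    """
    stacked = np.stack(predictions, axis=0)
    # A voxel is foreground when more than half of the models agree.
    return (stacked.mean(axis=0) > 0.5).astype(np.uint8)

def dice_score(pred, target, eps=1e-8):
    """Soerensen-Dice overlap between two binary masks."""
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```

For example, fusing three masks `[1,1,0,0]`, `[1,0,0,0]`, and `[1,1,1,0]` yields `[1,1,0,0]`.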
Related papers
- Rethinking Semi-Supervised Medical Image Segmentation: A
Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z) - RCPS: Rectified Contrastive Pseudo Supervision for Semi-Supervised
Medical Image Segmentation [26.933651788004475]
We propose a novel semi-supervised segmentation method named Rectified Contrastive Pseudo Supervision (RCPS).
RCPS combines a rectified pseudo supervision and voxel-level contrastive learning to improve the effectiveness of semi-supervised segmentation.
Experimental results reveal that the proposed method yields better segmentation performance compared with the state-of-the-art methods in semi-supervised medical image segmentation.
arXiv Detail & Related papers (2023-01-13T12:03:58Z) - Self-Supervised Equivariant Regularization Reconciles Multiple Instance
Learning: Joint Referable Diabetic Retinopathy Classification and Lesion
Segmentation [3.1671604920729224]
Lesion appearance is a crucial clue for medical providers to distinguish referable diabetic retinopathy (rDR) from non-referable DR.
Most existing large-scale DR datasets contain only image-level labels rather than pixel-based annotations.
This paper leverages self-supervised equivariant learning and attention-based multi-instance learning to tackle this problem.
We conduct extensive validation experiments on the Eyepacs dataset, achieving an area under the receiver operating characteristic curve (AUROC) of 0.958, outperforming current state-of-the-art algorithms.
arXiv Detail & Related papers (2022-10-12T06:26:05Z) - Enhancing Data Diversity for Self-training Based Unsupervised
Cross-modality Vestibular Schwannoma and Cochlea Segmentation [7.327638441664658]
We present an approach for VS and cochlea segmentation in an unsupervised domain adaptation setting.
We first develop a cross-site cross-modality unpaired image translation strategy to enrich the diversity of the synthesized data.
Then, we devise a rule-based offline augmentation technique to further minimize the domain gap.
Lastly, we adopt a self-configuring segmentation framework empowered by self-training to obtain the final results.
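The self-training stage shared by these pipelines can be sketched generically: predict on the unlabeled target images, keep only confident pseudo-labels, and retrain on the source labels plus the pseudo-labels. The `fit` and `predict` callables below are placeholders for a real segmentation framework, not the paper's self-configuring setup:

```python
def self_training_round(model, fit, predict, labeled, unlabeled, threshold=0.9):
    """One generic self-training round.

    labeled:   list of (image, mask) pairs from the source domain.
    unlabeled: list of target-domain images without masks.
    predict(model, image) -> (mask, confidence) and fit(model, pairs)
    are stand-ins for an actual training framework.
    """
    pseudo = []
    for image in unlabeled:
        mask, confidence = predict(model, image)
        if confidence >= threshold:  # discard low-confidence pseudo-labels
            pseudo.append((image, mask))
    # Retrain on the union of ground-truth and pseudo-labeled pairs.
    return fit(model, labeled + pseudo)
```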
arXiv Detail & Related papers (2022-09-23T22:26:51Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial
Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - Unsupervised Domain Adaptation for Vestibular Schwannoma and Cochlea
Segmentation via Semi-supervised Learning and Label Fusion [10.456308424227053]
Methods to segment the vestibular schwannoma (VS) tumors and the cochlea from magnetic resonance imaging (MRI) are critical to VS treatment planning.
In this work, we aim to tackle the VS and cochlea segmentation problem in an unsupervised domain adaptation setting.
Our proposed method leverages both image-level domain alignment, to minimize the domain divergence, and semi-supervised training to further boost performance.
arXiv Detail & Related papers (2022-01-25T22:01:04Z) - Unsupervised Cross-Modality Domain Adaptation for Segmenting Vestibular
Schwannoma and Cochlea with Data Augmentation and Model Ensemble [4.942327155020771]
In this paper, we propose an unsupervised learning framework to segment the vestibular schwannoma and the cochlea.
Our framework leverages information from contrast-enhanced T1-weighted (ceT1-w) MRIs and its labels, and produces segmentations for T2-weighted MRIs without any labels in the target domain.
Our method is easy to build and produces promising segmentations, with mean Dice scores of 0.7930 and 0.7432 for the VS and cochlea, respectively, on the validation set.
arXiv Detail & Related papers (2021-09-24T20:10:05Z) - Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z) - UDALM: Unsupervised Domain Adaptation through Language Modeling [79.73916345178415]
We introduce UDALM, a fine-tuning procedure, using a mixed classification and Masked Language Model loss.
Our experiments show that the performance of models trained with the mixed loss scales with the amount of available target data and can be effectively used as a stopping criterion.
Our method is evaluated on twelve domain pairs of the Amazon Reviews Sentiment dataset, yielding 91.74% accuracy, a 1.11% absolute improvement over the state-of-the-art.
arXiv Detail & Related papers (2021-04-14T19:05:01Z) - Improved Slice-wise Tumour Detection in Brain MRIs by Computing
Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z) - Deep Semi-supervised Knowledge Distillation for Overlapping Cervical
Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
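The Mean Teacher component of such frameworks maintains a teacher model whose weights are an exponential moving average (EMA) of the student's. A minimal sketch of that update, with plain Python lists standing in for network parameters (an illustration of the general technique, not this paper's code):

```python
def ema_update(teacher_params, student_params, decay=0.99):
    """Update teacher parameters toward the student's via an
    exponential moving average, as in mean-teacher training."""
    return [decay * t + (1.0 - decay) * s
            for t, s in zip(teacher_params, student_params)]
```

With `decay=0.9`, a teacher weight of 0.0 and a student weight of 1.0 give an updated teacher weight of 0.1.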
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.