A 3D Multi-Style Cross-Modality Segmentation Framework for Segmenting
Vestibular Schwannoma and Cochlea
- URL: http://arxiv.org/abs/2311.11578v1
- Date: Mon, 20 Nov 2023 07:29:33 GMT
- Title: A 3D Multi-Style Cross-Modality Segmentation Framework for Segmenting
Vestibular Schwannoma and Cochlea
- Authors: Yuzhou Zhuang
- Abstract summary: The crossMoDA2023 challenge aims to segment the vestibular schwannoma and cochlea regions of unlabeled hrT2 scans by leveraging labeled ceT1 scans.
We propose a 3D multi-style cross-modality segmentation framework for the challenge, including the multi-style translation and self-training segmentation phases.
Our method produces promising results, achieving mean DSC values of 72.78% and 80.64% on the crossMoDA2023 validation dataset.
- Score: 2.2209333405427585
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The crossMoDA2023 challenge aims to segment the vestibular schwannoma
(sub-divided into intra- and extra-meatal components) and cochlea regions of
unlabeled hrT2 scans by leveraging labeled ceT1 scans. In this work, we
propose a 3D multi-style cross-modality segmentation framework for the
crossMoDA2023 challenge, including the multi-style translation and
self-training segmentation phases. Considering heterogeneous distributions and
various image sizes in multi-institutional scans, we first utilize the min-max
normalization, voxel size resampling, and center cropping to obtain fixed-size
sub-volumes from ceT1 and hrT2 scans for training. Then, we perform the
multi-style image translation phase to overcome the intensity distribution
discrepancy between unpaired multi-modal scans. Specifically, we design three
different translation networks with 2D or 2.5D inputs to generate multi-style
and realistic target-like volumes from labeled ceT1 volumes. Finally, we
perform the self-training volumetric segmentation phase in the target domain,
which employs the nnU-Net framework and iterative self-training method using
pseudo-labels for training accurate segmentation models in the unlabeled target
domain. On the crossMoDA2023 validation dataset, our method produces promising
results, achieving mean DSC values of 72.78% and 80.64% and ASSD values
of 5.85 mm and 0.25 mm for the VS tumor and cochlea regions, respectively.
Moreover, for intra- and extra-meatal regions, our method achieves the DSC
values of 59.77% and 77.14%, respectively.
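To make the preprocessing concrete, below is a minimal sketch of the min-max normalization, voxel-size resampling, and center-cropping steps using SimpleITK and NumPy. The target spacing, crop size, and file name are illustrative assumptions; the abstract does not state the exact values used by the authors.

```python
import numpy as np
import SimpleITK as sitk

def resample_to_spacing(image, target_spacing=(1.0, 1.0, 1.0)):
    """Resample a SimpleITK image to a fixed voxel spacing with linear interpolation."""
    original_spacing = image.GetSpacing()
    original_size = image.GetSize()
    target_size = [int(round(osz * ospc / tspc))
                   for osz, ospc, tspc in zip(original_size, original_spacing, target_spacing)]
    return sitk.Resample(image, target_size, sitk.Transform(), sitk.sitkLinear,
                         image.GetOrigin(), target_spacing, image.GetDirection(),
                         0.0, image.GetPixelID())

def minmax_normalize(volume):
    """Scale intensities to [0, 1]."""
    vmin, vmax = volume.min(), volume.max()
    return (volume - vmin) / (vmax - vmin + 1e-8)

def center_crop(volume, crop_shape=(128, 128, 128)):
    """Extract a fixed-size sub-volume around the volume centre (zero-pads if too small)."""
    pads = [max(c - s, 0) for s, c in zip(volume.shape, crop_shape)]
    if any(pads):
        volume = np.pad(volume, [(p // 2, p - p // 2) for p in pads], mode="constant")
    starts = [(s - c) // 2 for s, c in zip(volume.shape, crop_shape)]
    return volume[tuple(slice(st, st + c) for st, c in zip(starts, crop_shape))]

# Example: load a (hypothetical) ceT1 scan, resample, normalize, and crop a training sub-volume.
img = resample_to_spacing(sitk.ReadImage("ceT1_case.nii.gz"))
vol = center_crop(minmax_normalize(sitk.GetArrayFromImage(img)))
```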
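The abstract does not give the exact 2.5D formulation used by the translation networks, but a common construction is to feed each slice together with its neighbouring slices as extra channels. A minimal sketch of such slicing, assuming axial slices and one neighbour on each side:

```python
import numpy as np

def to_25d_stacks(volume, n_adjacent=1):
    """Split a 3D volume (D, H, W) into 2.5D inputs: each axial slice plus its
    n_adjacent neighbours stacked along the channel axis (edge slices are repeated)."""
    depth = volume.shape[0]
    stacks = []
    for z in range(depth):
        idx = np.clip(np.arange(z - n_adjacent, z + n_adjacent + 1), 0, depth - 1)
        stacks.append(volume[idx])      # shape: (2 * n_adjacent + 1, H, W)
    return np.stack(stacks)             # shape: (D, 2 * n_adjacent + 1, H, W)
```

Each stack can then be passed through a slice-wise translation network and the translated centre slices reassembled into a target-like volume.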
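The self-training phase reduces to a simple loop: train on the translated, labeled target-like volumes, pseudo-label the unlabeled hrT2 scans, and retrain on the enlarged set. The sketch below is a generic outline, not the authors' nnU-Net pipeline: `train_fn` and `predict_fn` are hypothetical callables standing in for nnU-Net training and inference, and the round count and confidence filter are illustrative assumptions.

```python
def self_training(labeled_synth, unlabeled_target, train_fn, predict_fn,
                  rounds=3, conf_thresh=0.9):
    """Iteratively enlarge the training set with pseudo-labeled target-domain scans."""
    train_set = list(labeled_synth)          # target-like volumes translated from ceT1, with labels
    model = train_fn(train_set)              # initial segmentation model (e.g. an nnU-Net run)
    for _ in range(rounds):
        pseudo = []
        for scan in unlabeled_target:        # unlabeled hrT2 scans
            probs = predict_fn(model, scan)  # per-voxel class probabilities, shape (C, D, H, W)
            if probs.max(axis=0).mean() >= conf_thresh:        # keep only confident cases
                pseudo.append((scan, probs.argmax(axis=0)))    # hard pseudo-label
        model = train_fn(train_set + pseudo)                   # retrain on real + pseudo labels
    return model
```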
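For reference, the two reported metrics can be computed from binary masks as follows; this is a standard NumPy/SciPy sketch of DSC and ASSD, not the challenge's official evaluation code.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def assd(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance (ASSD) in mm between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    pred_surf = pred & ~binary_erosion(pred)
    gt_surf = gt & ~binary_erosion(gt)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    d_to_gt = distance_transform_edt(~gt_surf, sampling=spacing)
    d_to_pred = distance_transform_edt(~pred_surf, sampling=spacing)
    return (d_to_gt[pred_surf].mean() + d_to_pred[gt_surf].mean()) / 2.0
```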
Related papers
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- MS-MT: Multi-Scale Mean Teacher with Contrastive Unpaired Translation for Cross-Modality Vestibular Schwannoma and Cochlea Segmentation [11.100048696665496]
Unsupervised domain adaptation (UDA) methods have achieved promising cross-modality segmentation performance.
We propose a multi-scale self-ensembling based UDA framework for automatic segmentation of two key brain structures.
Our method demonstrates promising segmentation performance with mean Dice scores of 83.8% and 81.4%.
arXiv Detail & Related papers (2023-03-28T08:55:00Z)
- M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$^2$SNet) to address diverse segmentation tasks in medical images.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets of four different medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- An Unpaired Cross-modality Segmentation Framework Using Data Augmentation and Hybrid Convolutional Networks for Segmenting Vestibular Schwannoma and Cochlea [7.7150383247700605]
The crossMoDA challenge aims to automatically segment the vestibular schwannoma (VS) tumor and cochlea regions of unlabeled high-resolution T2 scans.
The 2022 edition extends the segmentation task by including multi-institutional scans.
We propose an unpaired cross-modality segmentation framework using data augmentation and hybrid convolutional networks.
arXiv Detail & Related papers (2022-11-28T01:15:33Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- CrossMoDA 2021 challenge: Benchmark of Cross-Modality Domain Adaptation techniques for Vestibular Schwannoma and Cochlea Segmentation [43.372468317829004]
Domain Adaptation (DA) has recently raised strong interests in the medical imaging community.
To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised.
CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality DA.
arXiv Detail & Related papers (2022-01-08T14:00:34Z)
- Unsupervised Cross-Modality Domain Adaptation for Segmenting Vestibular Schwannoma and Cochlea with Data Augmentation and Model Ensemble [4.942327155020771]
In this paper, we propose an unsupervised learning framework to segment the vestibular schwannoma and the cochlea.
Our framework leverages information from contrast-enhanced T1-weighted (ceT1-w) MRIs and their labels, and produces segmentations for T2-weighted MRIs without any labels in the target domain.
Our method is easy to build and produces promising segmentations, with mean Dice scores of 0.7930 and 0.7432 for the VS and cochlea, respectively, on the validation set.
arXiv Detail & Related papers (2021-09-24T20:10:05Z)
- Joint Semi-supervised 3D Super-Resolution and Segmentation with Mixed Adversarial Gaussian Domain Adaptation [13.477290490742224]
Super-resolution in medical imaging aims to increase the resolution of images but is conventionally trained on features from low resolution datasets.
Here we propose a semi-supervised multi-task generative adversarial network (Gemini-GAN) that performs joint super-resolution of the images and their labels.
Our proposed approach is extensively evaluated on two transnational multi-ethnic populations of 1,331 and 205 adults, respectively.
arXiv Detail & Related papers (2021-07-16T15:42:39Z)
- Automatic size and pose homogenization with spatial transformer network to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose- and scale-invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method in kidney and renal tumor segmentation on abdominal pediatric CT scans.
arXiv Detail & Related papers (2021-07-06T14:50:03Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.