An Unpaired Cross-modality Segmentation Framework Using Data
Augmentation and Hybrid Convolutional Networks for Segmenting Vestibular
Schwannoma and Cochlea
- URL: http://arxiv.org/abs/2211.14986v1
- Date: Mon, 28 Nov 2022 01:15:33 GMT
- Authors: Yuzhou Zhuang, Hong Liu, Enmin Song, Coskun Cetinkaya, and Chih-Cheng
Hung
- Abstract summary: The crossMoDA challenge aims to automatically segment the vestibular schwannoma (VS) tumor and cochlea regions of unlabeled high-resolution T2 scans.
The 2022 edition extends the segmentation task by including multi-institutional scans.
We propose an unpaired cross-modality segmentation framework using data augmentation and hybrid convolutional networks.
- Score: 7.7150383247700605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The crossMoDA challenge aims to automatically segment the vestibular
schwannoma (VS) tumor and cochlea regions of unlabeled high-resolution T2 scans
by leveraging labeled contrast-enhanced T1 scans. The 2022 edition extends the
segmentation task by including multi-institutional scans. In this work, we
propose an unpaired cross-modality segmentation framework using data
augmentation and hybrid convolutional networks. Considering heterogeneous
distributions and various image sizes across multi-institutional scans, we apply
min-max normalization to scale the intensities of all scans to [-1, 1], and
use voxel-size resampling and center cropping to obtain
fixed-size sub-volumes for training. We adopt two data augmentation methods for
effectively learning the semantic information and generating realistic target
domain scans: generative and online data augmentation. For generative data
augmentation, we use CUT and CycleGAN to generate two groups of realistic T2
volumes with different details and appearances for supervised segmentation
training. For online data augmentation, we design a random tumor signal
reducing method for simulating the heterogeneity of VS tumor signals.
Furthermore, we utilize an advanced hybrid convolutional network with
multi-dimensional convolutions to adaptively learn sparse inter-slice
information and dense intra-slice information for accurate volumetric
segmentation of VS tumor and cochlea regions in anisotropic scans. On the
crossMoDA2022 validation dataset, our method produces promising results and
achieves mean DSC values of 72.47% and 76.48% and ASSD values of 3.42 mm
and 0.53 mm for VS tumor and cochlea regions, respectively.
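The preprocessing described above can be sketched as follows. This is a minimal illustration, assuming volumes are NumPy arrays; the function names and the target sub-volume size are illustrative, not taken from the paper's code.

```python
import numpy as np

def minmax_normalize(volume: np.ndarray) -> np.ndarray:
    """Scale voxel intensities to the range [-1, 1]."""
    v_min, v_max = volume.min(), volume.max()
    return 2.0 * (volume - v_min) / (v_max - v_min) - 1.0

def center_crop(volume: np.ndarray, target_shape: tuple) -> np.ndarray:
    """Crop a fixed-size sub-volume centered in the input volume."""
    starts = [(s - t) // 2 for s, t in zip(volume.shape, target_shape)]
    slices = tuple(slice(st, st + t) for st, t in zip(starts, target_shape))
    return volume[slices]

# Example: an anisotropic scan (few slices, high in-plane resolution).
vol = np.random.rand(40, 256, 256).astype(np.float32) * 1000.0
norm = minmax_normalize(vol)
sub = center_crop(norm, (32, 192, 192))
```

In practice, voxel-size resampling would precede the crop (e.g. with SimpleITK or MONAI) so that all sub-volumes share a common spacing; it is omitted here to keep the sketch dependency-free.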
Related papers
- Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network via the combination of convolution neural network (CNN) and transformer layers.
The experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-08-11T15:46:00Z) - SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion
Classification Using 3D Multi-Phase Imaging [59.78761085714715]
This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework for liver lesion classification.
The proposed framework has been validated through comprehensive experiments on two clinical datasets.
To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public.
arXiv Detail & Related papers (2024-02-27T06:32:56Z) - Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z) - A 3D Multi-Style Cross-Modality Segmentation Framework for Segmenting
Vestibular Schwannoma and Cochlea [2.2209333405427585]
The crossMoDA2023 challenge aims to segment the vestibular schwannoma and cochlea regions of unlabeled hrT2 scans by leveraging labeled ceT1 scans.
We propose a 3D multi-style cross-modality segmentation framework for the challenge, including the multi-style translation and self-training segmentation phases.
Our method produces promising results and achieves mean DSC values of 72.78% and 80.64% on the crossMoDA2023 validation dataset.
arXiv Detail & Related papers (2023-11-20T07:29:33Z) - Tissue Segmentation of Thick-Slice Fetal Brain MR Scans with Guidance
from High-Quality Isotropic Volumes [52.242103848335354]
We propose a novel Cycle-Consistent Domain Adaptation Network (C2DA-Net) to efficiently transfer the knowledge learned from high-quality isotropic volumes for accurate tissue segmentation of thick-slice scans.
Our C2DA-Net can fully utilize a small set of annotated isotropic volumes to guide tissue segmentation on unannotated thick-slice scans.
arXiv Detail & Related papers (2023-08-13T12:51:15Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - Semi-Supervised Hybrid Spine Network for Segmentation of Spine MR Images [14.190504802866288]
We propose a two-stage algorithm, named semi-supervised hybrid spine network (SSHSNet) to achieve simultaneous vertebral bodies (VBs) and intervertebral discs (IVDs) segmentation.
In the first stage, we constructed a 2D semi-supervised DeepLabv3+ by using cross pseudo supervision to obtain intra-slice features and coarse segmentation.
In the second stage, a 3D full-resolution patch-based DeepLabv3+ was built to extract inter-slice information.
Results show that the proposed method has great potential in dealing with the data imbalance problem.
arXiv Detail & Related papers (2022-03-23T02:57:14Z) - Shape-consistent Generative Adversarial Networks for multi-modal Medical
segmentation maps [10.781866671930857]
We present a segmentation network using synthesised cardiac volumes for extremely limited datasets.
Our solution is based on a 3D cross-modality generative adversarial network to share information between modalities.
We show that improved segmentation can be achieved on small datasets when using spatial augmentations.
arXiv Detail & Related papers (2022-01-24T13:57:31Z) - Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.