Self-Training Based Unsupervised Cross-Modality Domain Adaptation for
Vestibular Schwannoma and Cochlea Segmentation
- URL: http://arxiv.org/abs/2109.10674v1
- Date: Wed, 22 Sep 2021 12:04:41 GMT
- Title: Self-Training Based Unsupervised Cross-Modality Domain Adaptation for
Vestibular Schwannoma and Cochlea Segmentation
- Authors: Hyungseob Shin, Hyeongyu Kim, Sewon Kim, Yohan Jun, Taejoon Eo, Dosik
Hwang
- Abstract summary: We propose a self-training based unsupervised-learning framework that performs automatic segmentation of Vestibular Schwannoma (VS) and cochlea on high-resolution T2 scans.
Our method consists of 4 main stages: 1) VS-preserving contrast conversion from contrast-enhanced T1 scans to high-resolution T2 scans, 2) training segmentation on generated T2 scans with annotations from T1 scans, 3) inferring pseudo-labels on non-annotated real T2 scans, and 4) retraining on the combined data.
Our method achieved mean Dice scores and Average Symmetric Surface Distances (ASSD) of 0.8570 (0.0705) and 0.4970 (0.3391) for VS, and 0.8446 (0.0211) and 0.1513 (0.0314) for the cochlea.
- Score: 0.2609784101826761
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the advances of deep learning, many medical image segmentation
studies achieve human-level performance under fully supervised conditions.
However, it is extremely expensive to acquire annotations for every sample in
medical fields, especially for magnetic resonance images (MRI) that comprise
many different contrasts. Unsupervised methods can alleviate this problem;
however, a performance drop compared to fully supervised methods is inevitable.
In this work, we propose a self-training based unsupervised-learning framework
that performs automatic segmentation of Vestibular Schwannoma (VS) and cochlea
on high-resolution T2 scans. Our method consists of 4 main stages: 1)
VS-preserving contrast conversion from contrast-enhanced T1 scans to
high-resolution T2 scans, 2) training segmentation on generated T2 scans with
annotations from T1 scans, 3) inferring pseudo-labels on non-annotated real T2
scans, and 4) boosting the generalizability of VS and cochlea segmentation by
training with the combined data (i.e., real T2 scans with pseudo-labels and
generated T2 scans with true annotations). Our method achieved mean Dice scores
and Average Symmetric Surface Distances (ASSD) of 0.8570 (0.0705) and 0.4970
(0.3391) for VS, and 0.8446 (0.0211) and 0.1513 (0.0314) for the cochlea on the
CrossMoDA 2021 challenge validation-phase leaderboard, outperforming most other
approaches.
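As a rough illustration of the four-stage pipeline described above, the sketch below wires the stages together in PyTorch. Everything here is a hypothetical stand-in (single convolutions in place of the translation and segmentation networks, random tensors in place of MRI data); it shows the self-training flow, not the authors' implementation.

```python
# Schematic of the four-stage self-training pipeline described above.
# All networks and tensors are toy stand-ins, not the authors' code.
import torch
import torch.nn as nn

def translate_t1_to_t2(translator, t1):
    """Stage 1: VS-preserving contrast conversion; `translator` is an
    assumed pre-trained ceT1 -> hrT2 generator."""
    with torch.no_grad():
        return translator(t1)

def train_segmenter(model, images, labels, steps=10, lr=1e-4):
    """Stages 2 and 4: supervised training on (image, label) pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    return model

def infer_pseudo_labels(model, real_t2):
    """Stage 3: pseudo-labels for non-annotated real T2 scans."""
    with torch.no_grad():
        return model(real_t2).argmax(dim=1)

# Toy data: 3 classes (background, VS, cochlea) on 64x64 slices.
translator = nn.Conv2d(1, 1, 3, padding=1)   # stand-in for a trained generator
segmenter = nn.Conv2d(1, 3, 3, padding=1)    # stand-in for a U-Net
t1 = torch.randn(2, 1, 64, 64)
t1_labels = torch.randint(0, 3, (2, 64, 64))
real_t2 = torch.randn(2, 1, 64, 64)

fake_t2 = translate_t1_to_t2(translator, t1)                # stage 1
segmenter = train_segmenter(segmenter, fake_t2, t1_labels)  # stage 2
pseudo = infer_pseudo_labels(segmenter, real_t2)            # stage 3
combined_x = torch.cat([fake_t2, real_t2])                  # stage 4: real +
combined_y = torch.cat([t1_labels, pseudo])                 # generated data
segmenter = train_segmenter(segmenter, combined_x, combined_y)
```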
Related papers
- Improved 3D Whole Heart Geometry from Sparse CMR Slices [3.701571763780745]
Cardiac magnetic resonance (CMR) imaging and computed tomography (CT) are two common non-invasive imaging methods for assessing patients with cardiovascular disease.
CMR typically acquires multiple sparse 2D slices, with unavoidable respiratory motion artefacts between slices, whereas CT acquires isotropic dense data but uses ionising radiation.
We explore the combination of Slice Shifting Algorithm (SSA), Spatial Transformer Network (STN), and Label Transformer Network (LTN) to: 1) correct respiratory motion between segmented slices, and 2) transform sparse segmentation data into dense segmentation.
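For a concrete sense of the spatial-transformer component, the snippet below applies a rigid in-plane shift to one 2D slice with PyTorch's affine_grid/grid_sample, the resampling step an STN-style motion-correction network would drive. The offsets and shapes are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def shift_slice(slice_2d, dx, dy):
    """Rigidly shift a 2D slice in-plane via a spatial transformer
    (affine_grid + grid_sample); dx, dy are normalized offsets in [-1, 1]
    that an STN would normally predict rather than receive as constants."""
    n = slice_2d.size(0)
    theta = torch.tensor([[[1.0, 0.0, dx], [0.0, 1.0, dy]]]).expand(n, -1, -1)
    grid = F.affine_grid(theta, list(slice_2d.shape), align_corners=False)
    return F.grid_sample(slice_2d, grid, align_corners=False)

slice_img = torch.randn(1, 1, 64, 64)                 # one CMR slice
corrected = shift_slice(slice_img, dx=0.1, dy=-0.05)  # ~5% of field of view
```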
arXiv Detail & Related papers (2024-08-14T13:03:48Z)
- Koos Classification of Vestibular Schwannoma via Image Translation-Based Unsupervised Cross-Modality Domain Adaptation [5.81371357700742]
We propose an unsupervised cross-modality domain adaptation method based on image translation.
The proposed method received rank 1 on the Koos classification task of the Cross-Modality Domain Adaptation (crossMoDA 2022) challenge.
arXiv Detail & Related papers (2023-03-14T07:25:38Z)
- Weakly Unsupervised Domain Adaptation for Vestibular Schwannoma Segmentation [0.0]
Vestibular schwannoma (VS) is a non-cancerous tumor located next to the ear that can cause hearing loss.
As hrT2 images are currently scarce, it is difficult to train robust machine learning models to segment VS or other brain structures.
We propose a weakly supervised machine learning approach that learns from only ceT1 scans and adapts to segment two structures from hrT2 scans.
arXiv Detail & Related papers (2023-03-13T13:23:57Z)
- Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and clinically significant task for rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
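As a minimal sketch of volumetric classification, the toy 3D CNN below maps an MR volume to a two-way T2-vs-T3 prediction; the architecture and tensor shapes are placeholder assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

# Toy volumetric classifier: MR volume in, two-way T2-vs-T3 logits out.
model = nn.Sequential(
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
    nn.Flatten(), nn.Linear(16, 2),
)
volume = torch.randn(1, 1, 32, 64, 64)  # (batch, channel, depth, H, W)
logits = model(volume)                  # shape (1, 2): T2 vs. T3 scores
```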
arXiv Detail & Related papers (2022-09-13T07:10:14Z)
- Self-supervised contrastive learning of echocardiogram videos enables label-efficient cardiac disease diagnosis [48.64462717254158]
We developed a self-supervised contrastive learning approach, EchoCLR, tailored to echocardiogram videos.
When fine-tuned on small portions of labeled data, EchoCLR pretraining significantly improved classification performance for left ventricular hypertrophy (LVH) and aortic stenosis (AS).
EchoCLR is unique in its ability to learn representations of medical videos and demonstrates that SSL can enable label-efficient disease classification from small, labeled datasets.
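A generic InfoNCE-style contrastive loss, of the kind SimCLR-family methods use, conveys the core idea behind such pretraining. This sketch is not EchoCLR's exact objective, and the batch and embedding sizes are arbitrary.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """InfoNCE over two batches of paired clip embeddings: matched pairs
    (the diagonal) are positives, all other pairs are negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau          # scaled cosine similarities
    targets = torch.arange(z1.size(0))  # positive index for each row
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
```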
arXiv Detail & Related papers (2022-07-23T19:17:26Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
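The cycle-consistency idea at the heart of such unpaired translation can be sketched as follows; the single-convolution generators stand in for CyTran's convolutional transformers and are illustrative only.

```python
import torch
import torch.nn as nn

G = nn.Conv2d(1, 1, 3, padding=1)   # contrast -> non-contrast (stand-in)
Fb = nn.Conv2d(1, 1, 3, padding=1)  # non-contrast -> contrast (stand-in)
l1 = nn.L1Loss()

contrast_ct = torch.randn(2, 1, 64, 64)
plain_ct = torch.randn(2, 1, 64, 64)
# Translating one way and back should reconstruct the input (L1 penalty).
cycle_loss = (l1(Fb(G(contrast_ct)), contrast_ct)
              + l1(G(Fb(plain_ct)), plain_ct))
```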
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- Cross-Site Severity Assessment of COVID-19 from CT Images via Domain Adaptation [64.59521853145368]
Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) based on computed tomography (CT) images offers great help in estimating intensive care unit events.
To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites.
This task faces several challenges including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and presence of heterogeneous features.
arXiv Detail & Related papers (2021-09-08T07:56:51Z)
- Joint Registration and Segmentation via Multi-Task Learning for Adaptive Radiotherapy of Prostate Cancer [3.0929226049096217]
We formulate registration and segmentation as a joint problem via a Multi-Task Learning setting.
We study this approach in the context of adaptive image-guided radiotherapy for prostate cancer.
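A common way to couple the two tasks is a weighted multi-task objective. The sketch below combines a registration similarity term with a segmentation term under assumed weights; it is a schematic of the general setting, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def multitask_loss(warped, fixed, seg_logits, seg_gt, w_reg=1.0, w_seg=1.0):
    """Weighted sum of a registration similarity term (MSE between the
    warped moving image and the fixed image) and a segmentation term
    (pixel-wise cross-entropy); weights are illustrative assumptions."""
    return (w_reg * F.mse_loss(warped, fixed)
            + w_seg * F.cross_entropy(seg_logits, seg_gt))

loss = multitask_loss(torch.randn(1, 1, 32, 32), torch.randn(1, 1, 32, 32),
                      torch.randn(1, 2, 32, 32),
                      torch.randint(0, 2, (1, 32, 32)))
```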
arXiv Detail & Related papers (2021-05-05T02:45:49Z)
- Dual-Consistency Semi-Supervised Learning with Uncertainty Quantification for COVID-19 Lesion Segmentation from CT Images [49.1861463923357]
We propose an uncertainty-guided dual-consistency learning network (UDC-Net) for semi-supervised COVID-19 lesion segmentation from CT images.
Our proposed UDC-Net improves the fully supervised method by 6.3% in Dice and outperforms other competitive semi-supervised approaches by significant margins.
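The consistency half of such semi-supervised schemes can be illustrated with a simple perturbation-consistency term: predictions on an unlabeled image and a noised copy should agree. This is a generic sketch of the principle, not UDC-Net itself, which adds dual consistency and uncertainty quantification.

```python
import torch
import torch.nn as nn

def consistency_loss(model, unlabeled, noise_std=0.1):
    """Predictions on an unlabeled image and a noise-perturbed copy
    should agree; the clean prediction serves as a fixed target."""
    with torch.no_grad():
        target = torch.softmax(model(unlabeled), dim=1)
    noisy = unlabeled + noise_std * torch.randn_like(unlabeled)
    pred = torch.softmax(model(noisy), dim=1)
    return nn.functional.mse_loss(pred, target)

model = nn.Conv2d(1, 2, 3, padding=1)   # stand-in for a segmentation network
loss = consistency_loss(model, torch.randn(2, 1, 64, 64))
```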
arXiv Detail & Related papers (2021-04-07T16:23:35Z)
- MetricUNet: Synergistic Image- and Voxel-Level Learning for Precise CT Prostate Segmentation via Online Sampling [66.01558025094333]
We propose a two-stage framework, with the first stage to quickly localize the prostate region and the second stage to precisely segment the prostate.
We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network.
Our method can effectively learn more representative voxel-level features compared with the conventional learning methods with cross-entropy or Dice loss.
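The voxel-wise online sampling idea can be sketched as a margin-based loss over randomly sampled voxel features: same-class voxels are pulled together, different-class voxels pushed apart. The formulation below is a generic assumption, not MetricUNet's exact module.

```python
import torch
import torch.nn.functional as F

def voxel_metric_loss(feat, labels, margin=0.5, n=128):
    """Sample n voxel features at random, then penalize same-class voxels
    for being farther apart than different-class voxels (plus a margin)."""
    c = feat.size(1)
    f = feat.permute(0, 2, 3, 1).reshape(-1, c)   # (num_voxels, C)
    y = labels.reshape(-1)
    idx = torch.randperm(f.size(0))[:n]           # online voxel sampling
    f, y = F.normalize(f[idx], dim=1), y[idx]
    d = torch.cdist(f, f)                         # pairwise feature distances
    same = (y[:, None] == y[None, :]).float()
    pos = (d * same).sum() / same.sum().clamp(min=1)
    neg = (d * (1 - same)).sum() / (1 - same).sum().clamp(min=1)
    return F.relu(pos - neg + margin)

loss = voxel_metric_loss(torch.randn(1, 16, 32, 32),
                         torch.randint(0, 2, (1, 32, 32)))
```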
arXiv Detail & Related papers (2020-05-15T10:37:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.