Weakly Unsupervised Domain Adaptation for Vestibular Schwannoma
Segmentation
- URL: http://arxiv.org/abs/2303.07093v1
- Date: Mon, 13 Mar 2023 13:23:57 GMT
- Title: Weakly Unsupervised Domain Adaptation for Vestibular Schwannoma
Segmentation
- Authors: Shahad Hardan and Hussain Alasmawi and Xiangjian Hou and Mohammad
Yaqub
- Abstract summary: Vestibular schwannoma (VS) is a non-cancerous tumor located next to the ear that can cause hearing loss.
As hrT2 images are currently scarce, it is difficult to train robust machine learning models to segment VS or other brain structures.
We propose a weakly supervised machine learning approach that learns from only ceT1 scans and adapts to segment two structures from hrT2 scans.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Vestibular schwannoma (VS) is a non-cancerous tumor located next to the ear
that can cause hearing loss. Most brain MRI images acquired from patients are
contrast-enhanced T1 (ceT1), with a growing interest in high-resolution T2
images (hrT2) to replace ceT1, which requires the use of a contrast agent. As
hrT2 images are currently scarce, it is difficult to train robust machine
learning models to segment VS or other brain structures. In this work, we
propose a weakly supervised machine learning approach that learns from only
ceT1 scans and adapts to segment two structures from hrT2 scans: the VS and the
cochlea from the crossMoDA dataset. Our model 1) generates fake hrT2 scans from
ceT1 images and segmentation masks, 2) is trained using the fake hrT2 scans, 3)
predicts pseudo-labels on the augmented real hrT2 scans, and 4) is retrained using both
the fake and real hrT2 scans. The final model was evaluated on an unseen test
set provided by the 2022 crossMoDA challenge organizers. The
mean Dice score and average symmetric surface distance (ASSD) are 0.78 and
0.46, respectively. The predicted segmentation masks achieved a Dice score of
0.83 and an ASSD of 0.56 on the VS, and a Dice score of 0.74 and an ASSD of
0.35 on the cochleas.
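The four numbered stages above form a translate-then-self-train loop. The following PyTorch sketch illustrates the control flow only; the tiny stand-in networks, the random tensors in place of crossMoDA volumes, and the noise-based augmentation are assumptions for illustration, not the authors' architecture (which would pair an adversarially trained ceT1-to-hrT2 translator with a full 3D segmentation network).

```python
# Minimal, runnable sketch of the four-stage pipeline. All model and data
# choices (tiny nets, random tensors standing in for crossMoDA volumes) are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Stand-in segmentation net (3 classes: background, VS, cochlea)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, n_classes, 1),
        )
    def forward(self, x):
        return self.net(x)

class TinyGenerator(nn.Module):
    """Stand-in for the ceT1 -> fake-hrT2 image translator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

def train_segmenter(model, images, labels, epochs=2):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    return model

# Toy data standing in for annotated ceT1 and unannotated hrT2 volumes.
ce_t1 = torch.randn(4, 1, 16, 32, 32)               # annotated source scans
ce_t1_masks = torch.randint(0, 3, (4, 16, 32, 32))  # source segmentation masks
real_hrt2 = torch.randn(4, 1, 16, 32, 32)           # unannotated target scans

# Stage 1: translate ceT1 into fake hrT2 (the generator would be trained
# adversarially in practice; here it is used untrained for illustration).
generator = TinyGenerator()
with torch.no_grad():
    fake_hrt2 = generator(ce_t1)

# Stage 2: train the segmenter on fake hrT2 using the ceT1 annotations.
segmenter = train_segmenter(TinySegNet(), fake_hrt2, ce_t1_masks)

# Stage 3: predict pseudo-labels on (augmented) real hrT2 scans.
with torch.no_grad():
    augmented = real_hrt2 + 0.05 * torch.randn_like(real_hrt2)  # toy augmentation
    pseudo_labels = segmenter(augmented).argmax(dim=1)

# Stage 4: retrain on fake + real hrT2 with true and pseudo labels combined.
all_images = torch.cat([fake_hrt2, real_hrt2])
all_labels = torch.cat([ce_t1_masks, pseudo_labels])
segmenter = train_segmenter(segmenter, all_images, all_labels)
```

The key design point is stage 4: the pseudo-labels from stage 3 let the segmenter learn from real hrT2 intensity statistics, which the fake scans from stage 1 only approximate.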
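For completeness, here is how the two reported metrics can be computed from their standard definitions. This NumPy/SciPy sketch is only a stand-in for the challenge's official evaluation script, which may differ in details such as surface extraction and voxel-spacing handling.

```python
# Dice and average symmetric surface distance (ASSD) from their standard
# definitions; an illustrative sketch, not the crossMoDA evaluation code.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return float(2.0 * np.logical_and(pred, gt).sum() / denom) if denom else 1.0

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels: the mask minus its erosion."""
    return mask & ~binary_erosion(mask)

def assd(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Average symmetric surface distance between two binary masks (mm)."""
    sp, sg = surface(pred.astype(bool)), surface(gt.astype(bool))
    # Distance from each surface voxel to the nearest surface of the other mask.
    d_to_gt = distance_transform_edt(~sg, sampling=spacing)
    d_to_pred = distance_transform_edt(~sp, sampling=spacing)
    return float(np.concatenate([d_to_gt[sp], d_to_pred[sg]]).mean())

# Toy usage: two slightly offset spheres.
z, y, x = np.ogrid[:32, :32, :32]
gt = (z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2 < 64
pred = (z - 17) ** 2 + (y - 16) ** 2 + (x - 16) ** 2 < 64
print(f"Dice: {dice_score(pred, gt):.3f}, ASSD: {assd(pred, gt):.3f} mm")
```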
Related papers
- Lung-CADex: Fully automatic Zero-Shot Detection and Classification of Lung Nodules in Thoracic CT Images [45.29301790646322]
Computer-aided diagnosis can help with early lung nodule detection and facilitate subsequent nodule characterization.
We propose CADe for segmenting lung nodules in a zero-shot manner using a variant of the Segment Anything Model called MedSAM.
We also propose CADx, a method for characterizing nodules as benign or malignant by building a gallery of radiomic features and aligning image-feature pairs through contrastive learning.
arXiv Detail & Related papers (2024-07-02T19:30:25Z)
- Large-Scale Multi-Center CT and MRI Segmentation of Pancreas with Deep Learning [20.043497517241992]
Automated volumetric segmentation of the pancreas is needed for diagnosis and follow-up of pancreatic diseases.
We developed PanSegNet, combining the strengths of nnUNet and a Transformer network with a new linear attention module enabling volumetric computation.
For segmentation accuracy, we achieved Dice coefficients of 88.3% (std: 7.2%, at case level) with CT, 85.0% (std: 7.9%, at case level) with T1W MRI, and 86.3% (std: 6.4%) with T2W MRI.
arXiv Detail & Related papers (2024-05-20T20:37:27Z)
- CT-GLIP: 3D Grounded Language-Image Pretraining with CT Scans and Radiology Reports for Full-Body Scenarios [53.94122089629544]
We introduce CT-GLIP (Grounded Language-Image Pretraining with CT scans), a novel method that constructs organ-level image-text pairs to enhance multimodal contrastive learning.
Our method, trained on a multimodal CT dataset comprising 44,011 organ-level vision-text pairs from 17,702 patients across 104 organs, demonstrates that it can identify organs and abnormalities in a zero-shot manner using natural language.
arXiv Detail & Related papers (2024-04-23T17:59:01Z)
- Koos Classification of Vestibular Schwannoma via Image Translation-Based Unsupervised Cross-Modality Domain Adaptation [5.81371357700742]
We propose an unsupervised cross-modality domain adaptation method based on image translation.
The proposed method received rank 1 on the Koos classification task of the Cross-Modality Domain Adaptation (crossMoDA 2022) challenge.
arXiv Detail & Related papers (2023-03-14T07:25:38Z)
- Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and the most clinically significant task in rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z)
- A Novel Mask R-CNN Model to Segment Heterogeneous Brain Tumors through Image Subtraction [0.0]
We propose applying image subtraction, a technique performed by radiologists, to machine learning models to achieve better segmentation.
Using Mask R-CNN with a ResNet backbone pre-trained on the RSNA pneumonia detection challenge dataset, we train a model on the BraTS 2020 brain tumor dataset.
We evaluate how well image subtraction works by comparing against models without image subtraction using the Dice coefficient (F1 score), recall, and precision on a held-out test set.
arXiv Detail & Related papers (2022-04-04T01:45:11Z)
- CrossMoDA 2021 challenge: Benchmark of Cross-Modality Domain Adaptation techniques for Vestibular Schwannoma and Cochlea Segmentation [43.372468317829004]
Domain Adaptation (DA) has recently raised strong interest in the medical imaging community.
To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised.
CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality DA.
arXiv Detail & Related papers (2022-01-08T14:00:34Z)
- Self-Training Based Unsupervised Cross-Modality Domain Adaptation for Vestibular Schwannoma and Cochlea Segmentation [0.2609784101826761]
We propose a self-training based unsupervised-learning framework that performs automatic segmentation of Vestibular Schwannoma (VS) and cochlea on high-resolution T2 scans.
Our method consists of four main stages: 1) VS-preserving contrast conversion from contrast-enhanced T1 scans to high-resolution T2 scans, 2) training segmentation on generated T2 scans with annotations from T1 scans, 3) inferring pseudo-labels on non-annotated real T2 scans, and 4) retraining on both generated and real T2 scans using the pseudo-labels.
Our method showed a mean Dice score of 0.8570 (0.0705).
arXiv Detail & Related papers (2021-09-22T12:04:41Z)
- Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classification using what the authors describe as the largest and richest femur fracture dataset to date.
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences arising from its use.