Koos Classification of Vestibular Schwannoma via Image Translation-Based
Unsupervised Cross-Modality Domain Adaptation
- URL: http://arxiv.org/abs/2303.07674v1
- Date: Tue, 14 Mar 2023 07:25:38 GMT
- Title: Koos Classification of Vestibular Schwannoma via Image Translation-Based
Unsupervised Cross-Modality Domain Adaptation
- Authors: Tao Yang and Lisheng Wang
- Abstract summary: We propose an unsupervised cross-modality domain adaptation method based on image translation.
The proposed method received rank 1 on the Koos classification task of the Cross-Modality Domain Adaptation (crossMoDA 2022) challenge.
- Score: 5.81371357700742
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Koos grading scale is a classification system for vestibular schwannoma
(VS) used to characterize the tumor and its effects on adjacent brain
structures. The Koos grade captures many of the characteristics relevant to
treatment decisions and is often used to determine treatment plans. Although
both contrast-enhanced T1 (ceT1) scanning and high-resolution T2 (hrT2)
scanning can be used for Koos classification, hrT2 scanning is gaining interest
because of its higher safety and cost-effectiveness. However, in the absence of
annotations for hrT2 scans, deep learning methods inevitably suffer from
performance degradation due to unsupervised learning. If ceT1 scans and their
annotations can be used for unsupervised learning on hrT2 scans, the
performance of Koos classification using unlabeled hrT2 scans can be greatly
improved. In this regard, we propose an unsupervised cross-modality domain
adaptation method based on image translation: annotated ceT1 scans are
transformed into the hrT2 modality, and their annotations are used to achieve
supervised learning in the hrT2 modality. The VS and 7 adjacent brain
structures related to Koos classification are then segmented in hrT2 scans.
Finally, handcrafted features are extracted from the segmentation results, and
the Koos grade is classified using a random forest classifier. The proposed
method ranked first on the Koos classification task of the Cross-Modality
Domain Adaptation (crossMoDA 2022) challenge, with a Macro-Averaged Mean
Absolute Error (MA-MAE) of 0.2148 on the validation set and 0.26 on the test
set.
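The challenge metric, Macro-Averaged Mean Absolute Error (MA-MAE), computes the MAE separately for each true Koos grade and then averages across grades, so rare grades weigh as much as common ones. A minimal sketch of this computation (the function name and NumPy implementation are illustrative, not taken from the paper's code):

```python
import numpy as np

def macro_averaged_mae(y_true, y_pred, n_classes=4):
    """Macro-Averaged MAE over Koos grades 1..n_classes: the absolute
    error is averaged within each true grade first, then across grades,
    so each grade contributes equally regardless of its prevalence."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    per_class = [
        np.mean(np.abs(y_pred[y_true == c] - c))  # MAE restricted to grade c
        for c in range(1, n_classes + 1)
        if np.any(y_true == c)  # skip grades absent from the evaluation set
    ]
    return float(np.mean(per_class))

# Example: grade 4 is misclassified as 3, one grade-1 case as 2.
print(macro_averaged_mae([1, 1, 2, 3, 4], [1, 2, 2, 3, 3]))  # 0.375
```

Note that a plain (micro) MAE over the same predictions would be 2/5 = 0.4; the macro average differs whenever class sizes are unbalanced, which is why the challenge reports MA-MAE.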
Related papers
- Thyroidiomics: An Automated Pipeline for Segmentation and Classification of Thyroid Pathologies from Scintigraphy Images [0.23960026858846614]
The objective of this study was to develop an automated pipeline that enhances thyroid disease classification using thyroid scintigraphy images.
Anterior thyroid scintigraphy images from 2,643 patients were collected and categorized into diffuse goiter (DG), multinodular goiter (MNG), and thyroiditis (TH).
The pipeline demonstrated comparable performance to physician segmentations on several classification metrics across different classes.
arXiv Detail & Related papers (2024-07-14T21:29:28Z) - Class Activation Map-based Weakly supervised Hemorrhage Segmentation
using Resnet-LSTM in Non-Contrast Computed Tomography images [0.06269281581001895]
Intracranial hemorrhages (ICH) are routinely diagnosed using non-contrast CT (NCCT) for severity assessment.
Deep learning (DL)-based methods have shown great potential, however, training them requires a huge amount of manually annotated lesion-level labels.
We propose a novel weakly supervised DL method for ICH segmentation on NCCT scans, using image-level binary classification labels.
arXiv Detail & Related papers (2023-09-28T17:32:19Z) - Ambiguity-Resistant Semi-Supervised Learning for Dense Object Detection [98.66771688028426]
We propose Ambiguity-Resistant Semi-supervised Learning (ARSL) for one-stage detectors.
Joint-Confidence Estimation (JCE) is proposed to quantify the classification and localization quality of pseudo labels.
ARSL effectively mitigates the ambiguities and achieves state-of-the-art SSOD performance on MS COCO and PASCAL VOC.
arXiv Detail & Related papers (2023-03-27T07:46:58Z) - Weakly Unsupervised Domain Adaptation for Vestibular Schwannoma
Segmentation [0.0]
Vestibular schwannoma (VS) is a non-cancerous tumor located next to the ear that can cause hearing loss.
As hrT2 images are currently scarce, it is difficult to train robust machine learning models to segment VS or other brain structures.
We propose a weakly supervised machine learning approach that learns from only ceT1 scans and adapts to segment two structures from hrT2 scans.
arXiv Detail & Related papers (2023-03-13T13:23:57Z) - Learning to diagnose cirrhosis from radiological and histological labels
with joint self and weakly-supervised pretraining strategies [62.840338941861134]
We propose to leverage transfer learning from large datasets annotated by radiologists, to predict the histological score available on a small annex dataset.
We compare different pretraining methods, namely weakly-supervised and self-supervised ones, to improve the prediction of cirrhosis.
This method outperforms the baseline classification of the METAVIR score, reaching an AUC of 0.84 and a balanced accuracy of 0.75.
arXiv Detail & Related papers (2023-02-16T17:06:23Z) - RoS-KD: A Robust Stochastic Knowledge Distillation Approach for Noisy
Medical Imaging [67.02500668641831]
Deep learning models trained on noisy datasets are sensitive to the noise type and lead to less generalization on unseen samples.
We propose a Robust Knowledge Distillation (RoS-KD) framework which mimics the notion of learning a topic from multiple sources to ensure deterrence in learning noisy information.
RoS-KD learns a smooth, well-informed, and robust student manifold by distilling knowledge from multiple teachers trained on overlapping subsets of training data.
arXiv Detail & Related papers (2022-10-15T22:32:20Z) - Self-Training Based Unsupervised Cross-Modality Domain Adaptation for
Vestibular Schwannoma and Cochlea Segmentation [0.2609784101826761]
We propose a self-training based unsupervised learning framework that performs automatic segmentation of Vestibular Schwannoma (VS) and cochlea on high-resolution T2 scans.
Our method's main stages include: 1) VS-preserving contrast conversion from contrast-enhanced T1 scans to high-resolution T2 scans, 2) training segmentation on generated T2 scans with annotations from T1 scans, and 3) inferring pseudo-labels on non-annotated real T2 scans.
Our method showed mean Dice score and Average Symmetric Surface Distance (ASSD) of 0.8570 (0.0705
arXiv Detail & Related papers (2021-09-22T12:04:41Z) - Cross-Site Severity Assessment of COVID-19 from CT Images via Domain
Adaptation [64.59521853145368]
Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) based on computed tomography (CT) images can greatly aid the prediction of intensive care unit (ICU) events.
To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites.
This task faces several challenges including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and presence of heterogeneous features.
arXiv Detail & Related papers (2021-09-08T07:56:51Z) - Dual-Consistency Semi-Supervised Learning with Uncertainty
Quantification for COVID-19 Lesion Segmentation from CT Images [49.1861463923357]
We propose an uncertainty-guided dual-consistency learning network (UDC-Net) for semi-supervised COVID-19 lesion segmentation from CT images.
Our proposed UDC-Net improves the fully supervised method by 6.3% in Dice and outperforms other competitive semi-supervised approaches by significant margins.
arXiv Detail & Related papers (2021-04-07T16:23:35Z) - Learning Invariant Representations across Domains and Tasks [81.30046935430791]
We propose a novel Task Adaptation Network (TAN) to solve this unsupervised task transfer problem.
In addition to learning transferable features via domain-adversarial training, we propose a novel task semantic adaptor that uses the learning-to-learn strategy to adapt the task semantics.
TAN significantly increases recall and F1 score by 5.0% and 7.8%, respectively, compared to recent strong baselines.
arXiv Detail & Related papers (2021-03-03T11:18:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.