COSMOS: Cross-Modality Unsupervised Domain Adaptation for 3D Medical
Image Segmentation based on Target-aware Domain Translation and Iterative
Self-Training
- URL: http://arxiv.org/abs/2203.16557v2
- Date: Tue, 19 Dec 2023 12:58:02 GMT
- Authors: Hyungseob Shin, Hyeongyu Kim, Sewon Kim, Yohan Jun, Taejoon Eo and
Dosik Hwang
- Abstract summary: We propose a self-training based unsupervised domain adaptation framework for 3D medical image segmentation named COSMOS.
Our target-aware contrast conversion network translates source domain annotated T1 MRI to pseudo T2 MRI to enable segmentation training on target domain.
COSMOS won 1st place in the Cross-Modality Domain Adaptation (crossMoDA) challenge held in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021).
- Score: 6.513315990156929
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent advances in deep learning-based medical image segmentation
achieve nearly human-level performance under fully supervised conditions.
However, acquiring pixel-level expert annotations is extremely expensive and
laborious in medical imaging. Unsupervised domain adaptation can alleviate this
problem: annotated data from one imaging modality can be used to train a
network that successfully performs segmentation on a target imaging modality
with no labels. In this work, we propose a
self-training based unsupervised domain adaptation framework for 3D medical
image segmentation named COSMOS and validate it with automatic segmentation of
Vestibular Schwannoma (VS) and cochlea on high-resolution T2 Magnetic Resonance
Images (MRI). Our target-aware contrast conversion network translates
annotated source-domain T1 MRI to pseudo T2 MRI to enable segmentation
training in the target domain, while preserving the important anatomical
features of interest in the converted images. Iterative self-training then
incorporates unlabeled data into training and incrementally improves the
quality of the pseudo-labels, leading to better segmentation performance. COSMOS
won 1st place in the Cross-Modality Domain Adaptation
(crossMoDA) challenge held in conjunction with the 24th International
Conference on Medical Image Computing and Computer Assisted Intervention
(MICCAI 2021). It achieves a mean Dice score and Average Symmetric Surface
Distance of 0.871 (0.063) and 0.437 (0.270) for VS, and 0.842 (0.020) and
0.152 (0.030) for the cochlea.
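The iterative self-training described in the abstract follows a generic recipe: train on the labeled (translated) data, pseudo-label the unlabeled target data, keep only confident predictions, and retrain. Below is a minimal toy sketch of that recipe, using a nearest-centroid classifier on 2D points in place of the 3D segmentation network; the function name, confidence threshold, and round count are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def iterative_self_training(x_lab, y_lab, x_unlab, rounds=3, conf_thresh=0.8):
    """Generic iterative self-training loop (a sketch of the idea, not the
    authors' implementation). A nearest-centroid classifier stands in for
    the segmentation network; `conf_thresh` gates which pseudo-labels are
    trusted in each round."""
    x_train, y_train = x_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        # "Train": one centroid per class from the current training set.
        classes = np.unique(y_train)
        centroids = np.stack(
            [x_train[y_train == c].mean(axis=0) for c in classes])
        # "Predict" on unlabeled target data with a soft confidence score.
        d = np.linalg.norm(x_unlab[:, None, :] - centroids[None, :, :], axis=2)
        probs = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
        conf = probs.max(axis=1)
        pseudo = classes[probs.argmax(axis=1)]
        # Keep only confident pseudo-labels; fold them back into training.
        keep = conf >= conf_thresh
        x_train = np.concatenate([x_lab, x_unlab[keep]])
        y_train = np.concatenate([y_lab, pseudo[keep]])
    return centroids, classes
```

In the actual framework, the analogue of `conf_thresh` would operate on voxel-level predictions and each round would retrain the full 3D segmentation network on the growing pseudo-labeled set.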
Related papers
- DRL-STNet: Unsupervised Domain Adaptation for Cross-modality Medical Image Segmentation via Disentangled Representation Learning [14.846510957922114]
Unsupervised domain adaptation (UDA) is essential for medical image segmentation, especially in cross-modality data scenarios.
This paper presents DRL-STNet, a novel framework for cross-modality medical image segmentation.
The proposed framework exhibits superior performance in abdominal organ segmentation on the FLARE challenge dataset.
arXiv Detail & Related papers (2024-09-26T23:30:40Z)
- 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms state-of-the-art medical image segmentation models on 3 out of 4 tasks, by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation respectively, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- MS-MT: Multi-Scale Mean Teacher with Contrastive Unpaired Translation for Cross-Modality Vestibular Schwannoma and Cochlea Segmentation [11.100048696665496]
Unsupervised domain adaptation (UDA) methods have achieved promising cross-modality segmentation performance.
We propose a multi-scale self-ensembling based UDA framework for automatic segmentation of two key brain structures.
Our method demonstrates promising segmentation performance, with mean Dice scores of 83.8% and 81.4%.
arXiv Detail & Related papers (2023-03-28T08:55:00Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- C-MADA: Unsupervised Cross-Modality Adversarial Domain Adaptation framework for medical Image Segmentation [0.8680676599607122]
We present an unsupervised Cross-Modality Adversarial Domain Adaptation (C-MADA) framework for medical image segmentation.
C-MADA implements an image- and feature-level adaptation method in a sequential manner.
It is tested on the task of brain MRI segmentation, obtaining competitive results.
arXiv Detail & Related papers (2021-10-29T14:34:33Z)
- Unsupervised Cross-Modality Domain Adaptation for Segmenting Vestibular Schwannoma and Cochlea with Data Augmentation and Model Ensemble [4.942327155020771]
In this paper, we propose an unsupervised learning framework to segment the vestibular schwannoma and the cochlea.
Our framework leverages information from contrast-enhanced T1-weighted (ceT1-w) MRIs and its labels, and produces segmentations for T2-weighted MRIs without any labels in the target domain.
Our method is easy to build and produces promising segmentations, with mean Dice scores of 0.7930 and 0.7432 for VS and cochlea respectively on the validation set.
arXiv Detail & Related papers (2021-09-24T20:10:05Z)
- Automatic size and pose homogenization with spatial transformer network to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose- and scale-invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method on kidney and renal tumor segmentation in abdominal pediatric CT scans.
arXiv Detail & Related papers (2021-07-06T14:50:03Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain Adaptation [9.659642285903418]
We use cross-modality synthesis of medical images to reduce the costly annotation burden on radiologists.
We present a novel approach for image-to-image translation in medical images, capable of supervised or unsupervised (unpaired image data) setups.
arXiv Detail & Related papers (2021-03-05T16:22:31Z)
- Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework named Synergistic Image and Feature Alignment (SIFA).
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)