CrossMoDA 2021 challenge: Benchmark of Cross-Modality Domain Adaptation
techniques for Vestibular Schwannoma and Cochlea Segmentation
- URL: http://arxiv.org/abs/2201.02831v1
- Date: Sat, 8 Jan 2022 14:00:34 GMT
- Title: CrossMoDA 2021 challenge: Benchmark of Cross-Modality Domain Adaptation
techniques for Vestibular Schwannoma and Cochlea Segmentation
- Authors: Reuben Dorent, Aaron Kujawa, Marina Ivory, Spyridon Bakas, Nicola
Rieke, Samuel Joutard, Ben Glocker, Jorge Cardoso, Marc Modat, Kayhan
Batmanghelich, Arseniy Belkov, Maria Baldeon Calisto, Jae Won Choi, Benoit M.
Dawant, Hexin Dong, Sergio Escalera, Yubo Fan, Lasse Hansen, Mattias P.
Heinrich, Smriti Joshi, Victoriya Kashtanova, Hyeon Gyu Kim, Satoshi Kondo,
Christian N. Kruse, Susana K. Lai-Yuen, Hao Li, Han Liu, Buntheng Ly, Ipek
Oguz, Hyungseob Shin, Boris Shirokikh, Zixian Su, Guotai Wang, Jianghao Wu,
Yanwu Xu, Kai Yao, Li Zhang, Sebastien Ourselin, Jonathan Shapey, Tom
Vercauteren
- Abstract summary: Domain Adaptation (DA) has recently raised strong interest in the medical imaging community.
To tackle the limitations of existing DA benchmarks, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised.
CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality DA.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain Adaptation (DA) has recently raised strong interest in the medical
imaging community. While a large variety of DA techniques has been proposed for
image segmentation, most of these techniques have been validated either on
private datasets or on small publicly available datasets. Moreover, these
datasets mostly addressed single-class problems. To tackle these limitations,
the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in
conjunction with the 24th International Conference on Medical Image Computing
and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large
and multi-class benchmark for unsupervised cross-modality DA. The challenge's
goal is to segment two key brain structures involved in the follow-up and
treatment planning of vestibular schwannoma (VS): the VS and the cochleas.
Currently, the diagnosis and surveillance in patients with VS are performed
using contrast-enhanced T1 (ceT1) MRI. However, there is growing interest in
using non-contrast sequences such as high-resolution T2 (hrT2) MRI. Therefore,
we created an unsupervised cross-modality segmentation benchmark. The training
set provides annotated ceT1 (N=105) and unpaired non-annotated hrT2 (N=105).
The aim was to automatically perform unilateral VS and bilateral cochlea
segmentation on hrT2 as provided in the testing set (N=137). A total of 16
teams submitted their algorithm for the evaluation phase. The level of
performance reached by the top-performing teams is strikingly high (best median
Dice - VS:88.4%; Cochleas:85.7%) and close to full supervision (median Dice -
VS:92.5%; Cochleas:87.7%). All top-performing methods made use of an
image-to-image translation approach to transform the source-domain images into
pseudo-target-domain images. A segmentation network was then trained using
these generated images and the manual annotations provided for the source
images.
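The two-stage recipe used by the top teams can be summarised in a short sketch. The following is a minimal illustration, not any team's actual pipeline: the tiny networks, the toy tensors, and the Dice helper are hypothetical stand-ins, and in practice the translator would be an unpaired image-to-image model (e.g. a CycleGAN-style generator) trained separately on the two modalities.

```python
# Minimal sketch of the dominant crossMoDA recipe:
#   stage 1: translate annotated ceT1 volumes into pseudo-hrT2 images,
#   stage 2: train a segmentation network on the translated images using
#            the manual ceT1 annotations.
# All classes and tensors below are hypothetical toy stand-ins.
import torch
import torch.nn as nn

def dice_score(pred, target, eps=1e-6):
    """Dice coefficient, the challenge's evaluation metric: 2|A∩B|/(|A|+|B|)."""
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

class TinyTranslator(nn.Module):
    """Stand-in for a ceT1 -> pseudo-hrT2 generator (e.g. CycleGAN-style)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class TinySegmenter(nn.Module):
    """Stand-in for the downstream network (background / VS / cochlea)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, n_classes, 1))
    def forward(self, x):
        return self.net(x)

translator = TinyTranslator()   # assumed already trained on unpaired ceT1/hrT2
segmenter = TinySegmenter()
opt = torch.optim.Adam(segmenter.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

ceT1 = torch.randn(1, 1, 32, 32, 32)            # toy annotated source volume
label = torch.randint(0, 3, (1, 32, 32, 32))    # toy source label map

with torch.no_grad():
    pseudo_hrT2 = translator(ceT1)   # stage 1: source -> pseudo-target domain
opt.zero_grad()
logits = segmenter(pseudo_hrT2)      # stage 2: supervise with source labels
loss = loss_fn(logits, label)
loss.backward()
opt.step()
```

At test time the segmenter is applied directly to real hrT2 scans; the translator is only needed during training.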
Related papers
- SMILE-UHURA Challenge -- Small Vessel Segmentation at Mesoscopic Scale from Ultra-High Resolution 7T Magnetic Resonance Angiograms (2024-11-14)
The lack of publicly available annotated datasets has impeded the development of robust, machine learning-driven segmentation algorithms.
The SMILE-UHURA challenge addresses the gap in publicly available annotated datasets by providing an annotated dataset of Time-of-Flight angiography acquired with 7T MRI.
Dice scores reached up to 0.838 $\pm$ 0.066 and 0.716 $\pm$ 0.125 on the respective datasets, with an average performance of up to 0.804 $\pm$ 0.15.
arXiv Detail & Related papers (2024-11-14T17:06:00Z) - DRL-STNet: Unsupervised Domain Adaptation for Cross-modality Medical Image Segmentation via Disentangled Representation Learning [14.846510957922114]
Unsupervised domain adaptation (UDA) is essential for medical image segmentation, especially in cross-modality data scenarios.
This paper presents DRL-STNet, a novel framework for cross-modality medical image segmentation.
The proposed framework exhibits superior performance in abdominal organ segmentation on the FLARE challenge dataset.
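As a rough illustration of the disentangled-representation idea the framework builds on, the sketch below splits an image into a modality-invariant content code and a modality-specific style code and segments from the content only. This is a generic sketch with hypothetical layer names, not DRL-STNet itself.

```python
# Generic disentangled-representation sketch: encode an image into a
# modality-invariant "content" code plus a modality-specific "style" code;
# the segmentation head reads only the content. Toy layers, hypothetical.
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.content = nn.Conv2d(1, 8, 3, padding=1)  # shared across modalities
        self.style = nn.Conv2d(1, 8, 3, padding=1)    # one per modality in practice
    def forward(self, x):
        return self.content(x), self.style(x)

enc = DisentangledEncoder()
seg_head = nn.Conv2d(8, 2, 1)              # segments from content features only
content, style = enc(torch.randn(1, 1, 64, 64))
seg_logits = seg_head(content)             # style is ignored by the segmenter
```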
arXiv Detail & Related papers (2024-09-26T23:30:40Z) - A 3D Multi-Style Cross-Modality Segmentation Framework for Segmenting
Vestibular Schwannoma and Cochlea [2.2209333405427585]
The crossMoDA2023 challenge aims to segment the vestibular schwannoma and cochlea regions of unlabeled hrT2 scans by leveraging labeled ceT1 scans.
We propose a 3D multi-style cross-modality segmentation framework for the challenge, including the multi-style translation and self-training segmentation phases.
Our method produces promising results and achieves the mean DSC values of 72.78% and 80.64% on the crossMoDA2023 validation dataset.
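The self-training phase mentioned above typically follows a pseudo-labelling loop: the current model labels the unlabeled hrT2 scans, confident predictions are kept, and the model is retrained on them. A schematic version follows (all names and the 0.8 threshold are hypothetical).

```python
# Schematic self-training round: pseudo-label unlabeled target-domain scans
# with the current model and keep only confident volumes for retraining.
import torch

def self_training_round(segmenter, unlabeled_hrT2_volumes, threshold=0.8):
    pseudo_pairs = []
    segmenter.eval()
    with torch.no_grad():
        for vol in unlabeled_hrT2_volumes:          # each: (1, 1, D, H, W)
            probs = torch.softmax(segmenter(vol), dim=1)
            conf, pseudo_label = probs.max(dim=1)
            if conf.mean() > threshold:             # simple confidence filter
                pseudo_pairs.append((vol, pseudo_label))
    return pseudo_pairs   # appended to the supervised training set
```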
arXiv Detail & Related papers (2023-11-20T07:29:33Z) - MS-MT: Multi-Scale Mean Teacher with Contrastive Unpaired Translation
for Cross-Modality Vestibular Schwannoma and Cochlea Segmentation [11.100048696665496]
Unsupervised domain adaptation (UDA) methods have achieved promising cross-modality segmentation performance.
We propose a multi-scale self-ensembling based UDA framework for automatic segmentation of two key brain structures.
Our method demonstrates promising segmentation performance, with mean Dice scores of 83.8% and 81.4%.
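The self-ensembling in MS-MT follows the mean-teacher pattern, in which the teacher's weights are an exponential moving average (EMA) of the student's and supply consistency targets. Below is a minimal sketch of just the EMA update (the multi-scale and contrastive-translation parts are omitted; names are hypothetical).

```python
# Minimal mean-teacher sketch: the teacher network tracks an exponential
# moving average (EMA) of the student's weights after every optimizer step.
import copy
import torch

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1.0 - alpha)

student = torch.nn.Linear(4, 2)       # stand-in for the segmentation network
teacher = copy.deepcopy(student)
# ... after each student optimizer step:
ema_update(teacher, student)
```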
arXiv Detail & Related papers (2023-03-28T08:55:00Z) - Self-Supervised Correction Learning for Semi-Supervised Biomedical Image
Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
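The dual-task design can be sketched as a shared encoder feeding two independent decoders, one for segmentation and one for inpainting the masked lesion region. The toy layers below are a hypothetical sketch, not the paper's actual architecture.

```python
# Shared-encoder, dual-decoder sketch: one head segments, the other inpaints.
import torch
import torch.nn as nn

class DualTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv2d(1, 16, 3, padding=1)   # shared features
        self.seg_decoder = nn.Conv2d(16, 2, 1)          # segmentation logits
        self.inpaint_decoder = nn.Conv2d(16, 1, 1)      # lesion reconstruction
    def forward(self, x):
        feats = torch.relu(self.encoder(x))
        return self.seg_decoder(feats), self.inpaint_decoder(feats)

net = DualTaskNet()
seg_logits, reconstruction = net(torch.randn(1, 1, 64, 64))
```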
arXiv Detail & Related papers (2023-01-12T08:19:46Z) - FetReg2021: A Challenge on Placental Vessel Segmentation and
Registration in Fetoscopy [52.3219875147181]
Fetoscopic laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS).
The procedure is particularly challenging due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility, and variability in illumination.
Computer-assisted intervention (CAI) can provide surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking.
Seven teams participated in this challenge and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic videos.
- COSMOS: Cross-Modality Unsupervised Domain Adaptation for 3D Medical Image Segmentation based on Target-aware Domain Translation and Iterative Self-Training (2022-03-30)
We propose a self-training based unsupervised domain adaptation framework for 3D medical image segmentation named COSMOS.
Our target-aware contrast conversion network translates source domain annotated T1 MRI to pseudo T2 MRI to enable segmentation training on target domain.
COSMOS won 1st place in the Cross-Modality Domain Adaptation (crossMoDA) challenge held in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021).
arXiv Detail & Related papers (2022-03-30T18:00:07Z) - Unsupervised Cross-Modality Domain Adaptation for Segmenting Vestibular
Schwannoma and Cochlea with Data Augmentation and Model Ensemble [4.942327155020771]
In this paper, we propose an unsupervised learning framework to segment the vestibular schwannoma and the cochlea.
Our framework leverages information from contrast-enhanced T1-weighted (ceT1-w) MRIs and their labels, and produces segmentations for T2-weighted MRIs without any labels in the target domain.
Our method is easy to build and produces promising segmentations, with mean Dice scores of 0.7930 and 0.7432 for the VS and cochlea, respectively, on the validation set.
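The model-ensemble component can be illustrated by averaging the softmax maps of several independently trained networks before taking the argmax. The helper below is a hypothetical sketch; the paper's exact ensembling strategy may differ.

```python
# Prediction ensembling sketch: average per-class probabilities across
# independently trained models, then take the consensus argmax labels.
import torch

def ensemble_predict(models, volume):
    probs = torch.stack([torch.softmax(m(volume), dim=1) for m in models])
    return probs.mean(dim=0).argmax(dim=1)
```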
arXiv Detail & Related papers (2021-09-24T20:10:05Z) - Cross-Site Severity Assessment of COVID-19 from CT Images via Domain
Adaptation [64.59521853145368]
Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) based on computed tomography (CT) images can greatly aid the estimation of intensive care unit events.
To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites.
This task faces several challenges including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and presence of heterogeneous features.
arXiv Detail & Related papers (2021-09-08T07:56:51Z) - Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.