ICMSC: Intra- and Cross-modality Semantic Consistency for Unsupervised
Domain Adaptation on Hip Joint Bone Segmentation
- URL: http://arxiv.org/abs/2012.12570v1
- Date: Wed, 23 Dec 2020 09:58:38 GMT
- Title: ICMSC: Intra- and Cross-modality Semantic Consistency for Unsupervised
Domain Adaptation on Hip Joint Bone Segmentation
- Authors: Guodong Zeng, Till D. Lerch, Florian Schmaranzer, Guoyan Zheng,
Juergen Burger, Kate Gerber, Moritz Tannast, Klaus Siebenrock, Nicolas Gerber
- Abstract summary: We propose intra- and cross-modality semantic consistency (ICMSC) for UDA.
Our proposed method achieves an average DICE of 81.61% on the acetabulum and 88.16% on the proximal femur.
Without UDA, a model trained on CT for hip joint bone segmentation does not transfer to MRI and yields almost zero DICE.
- Score: 1.4148874598036136
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Unsupervised domain adaptation (UDA) for cross-modality medical image
segmentation has shown great progress by domain-invariant feature learning or
image appearance translation. Feature-level adaptation usually cannot capture
domain shift at the pixel level and therefore struggles to achieve good
results in dense semantic segmentation tasks. Image appearance translation,
e.g. CycleGAN, translates images into different styles with convincing
appearance, but despite its popularity, semantic consistency is hard to
maintain, which results in poor cross-modality segmentation. In this paper,
we propose intra- and cross-modality semantic consistency (ICMSC) for UDA;
our key insight is that the segmentation of synthesized images in different
styles should be
consistent. Specifically, our model consists of an image translation module and
a domain-specific segmentation module. The image translation module is a
standard CycleGAN, while the segmentation module contains two domain-specific
segmentation networks. The intra-modality semantic consistency (IMSC) forces
the reconstructed image after a cycle to be segmented in the same way as the
original input image, while the cross-modality semantic consistency (CMSC)
encourages the synthesized images after translation to be segmented exactly the
same as before translation. Comprehensive experimental results on
cross-modality hip joint bone segmentation show the effectiveness of our
proposed method, which achieves an average DICE of 81.61% on the acetabulum and
88.16% on the proximal femur, outperforming other state-of-the-art methods. It
is worth noting that without UDA, a model trained on CT for hip joint bone
segmentation does not transfer to MRI and yields almost zero DICE.
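To make the two consistency terms concrete, here is a minimal sketch of how
IMSC and CMSC could be implemented on top of a CycleGAN-style translator with
two domain-specific segmenters. It illustrates the idea as described in the
abstract, not the authors' released code; the names (G_ct2mr, G_mr2ct, seg_ct,
seg_mr) and the KL-based agreement loss are assumptions.

```python
# Illustrative PyTorch sketch of the ICMSC consistency terms (assumed
# formulation; the paper's exact losses and architectures may differ).
import torch.nn.functional as F

def agreement(logits_a, logits_b):
    """Penalize disagreement between two segmentation predictions.
    Gradients flow into logits_a; logits_b serves as a detached reference."""
    log_p = F.log_softmax(logits_a, dim=1)
    q = F.softmax(logits_b.detach(), dim=1)
    return F.kl_div(log_p, q, reduction="batchmean")

def icmsc_losses(ct, mr, G_ct2mr, G_mr2ct, seg_ct, seg_mr):
    fake_mr = G_ct2mr(ct)      # CT translated to MR appearance
    fake_ct = G_mr2ct(mr)      # MR translated to CT appearance
    rec_ct = G_mr2ct(fake_mr)  # full cycle: CT -> MR -> CT
    rec_mr = G_ct2mr(fake_ct)  # full cycle: MR -> CT -> MR

    # IMSC: the reconstructed image after a full cycle should be segmented
    # the same way as the original input image.
    imsc = (agreement(seg_ct(rec_ct), seg_ct(ct))
            + agreement(seg_mr(rec_mr), seg_mr(mr)))

    # CMSC: the synthesized image after translation should be segmented
    # the same way as before translation, by the target domain's segmenter.
    cmsc = (agreement(seg_mr(fake_mr), seg_ct(ct))
            + agreement(seg_ct(fake_ct), seg_mr(mr)))
    return imsc, cmsc
```

In a full training loop these two terms would be weighted and added to the
usual CycleGAN adversarial and cycle-consistency losses, with the segmenters
additionally supervised by the labeled source (CT) images.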
Related papers
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv: 2023-12-26
- MS-MT: Multi-Scale Mean Teacher with Contrastive Unpaired Translation
for Cross-Modality Vestibular Schwannoma and Cochlea Segmentation [11.100048696665496]
Unsupervised domain adaptation (UDA) methods have achieved promising cross-modality segmentation performance.
We propose a multi-scale self-ensembling based UDA framework for automatic segmentation of two key brain structures.
Our method demonstrates promising segmentation performance, with mean Dice scores of 83.8% and 81.4% on the two structures.
arXiv: 2023-03-28
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image
Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv: 2023-01-12
- Instance Segmentation of Unlabeled Modalities via Cyclic Segmentation
GAN [27.936725483892076]
We propose a novel Cyclic Segmentation Generative Adversarial Network (CySGAN) that conducts image translation and instance segmentation jointly.
We benchmark our approach on the task of 3D neuronal nuclei segmentation with annotated electron microscopy (EM) images and unlabeled expansion microscopy (ExM) data.
arXiv: 2022-04-06
- C-MADA: Unsupervised Cross-Modality Adversarial Domain Adaptation
framework for medical Image Segmentation [0.8680676599607122]
We present an unsupervised Cross-Modality Adversarial Domain Adaptation (C-MADA) framework for medical image segmentation.
C-MADA implements an image- and feature-level adaptation method in a sequential manner.
It is tested on the task of brain MRI segmentation, obtaining competitive results.
arXiv: 2021-10-29
- Unsupervised Cross-Modality Domain Adaptation for Segmenting Vestibular
Schwannoma and Cochlea with Data Augmentation and Model Ensemble [4.942327155020771]
In this paper, we propose an unsupervised learning framework to segment the vestibular schwannoma and the cochlea.
Our framework leverages information from contrast-enhanced T1-weighted (ceT1-w) MRIs and their labels, and produces segmentations for T2-weighted MRIs without any labels in the target domain.
Our method is easy to build and produces promising segmentations, with mean Dice scores of 0.7930 and 0.7432 for VS and cochlea, respectively, on the validation set.
arXiv: 2021-09-24
- Boosting Few-shot Semantic Segmentation with Transformers [81.43459055197435]
We propose a TRansformer-based Few-shot Semantic segmentation method (TRFS).
Our model consists of two modules: a Global Enhancement Module (GEM) and a Local Enhancement Module (LEM).
arXiv: 2021-08-04
- Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv: 2021-05-17
- Semantic Distribution-aware Contrastive Adaptation for Semantic
Segmentation [50.621269117524925]
Domain adaptive semantic segmentation refers to making predictions on a certain target domain with only annotations of a specific source domain.
We present a semantic distribution-aware contrastive adaptation algorithm that enables pixel-wise representation alignment.
We evaluate SDCA on multiple benchmarks, achieving considerable improvements over existing algorithms.
arXiv: 2021-05-11
- Segmentation-Renormalized Deep Feature Modulation for Unpaired Image
Harmonization [0.43012765978447565]
Cycle-consistent Generative Adversarial Networks have been used to harmonize image sets between a source and target domain.
These methods are prone to instability, contrast inversion, intractable manipulation of pathology, and steganographic mappings which limit their reliable adoption in real-world medical imaging.
We propose a segmentation-renormalized image translation framework to reduce inter-scanner heterogeneity while preserving anatomical layout.
arXiv: 2021-02-11
- Unsupervised Bidirectional Cross-Modality Adaptation via Deeply
Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework, named Synergistic Image and Feature Alignment (SIFA).
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv: 2020-02-06