Unified cross-modality feature disentangler for unsupervised
multi-domain MRI abdomen organs segmentation
- URL: http://arxiv.org/abs/2007.09669v1
- Date: Sun, 19 Jul 2020 13:33:41 GMT
- Title: Unified cross-modality feature disentangler for unsupervised
multi-domain MRI abdomen organs segmentation
- Authors: Jue Jiang and Harini Veeraraghavan
- Abstract summary: Our contribution is a unified cross-modality feature disentangling approach for multi-domain image translation and multiple organ segmentation.
Using CT as the labeled source domain, our approach learns to segment multi-modal (T1-weighted and T2-weighted) MRI for which no labeled data are available.
Our approach produced an average Dice similarity coefficient (DSC) of 0.85 for T1w and 0.90 for T2w MRI for multi-organ segmentation.
- Score: 3.3504365823045044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Our contribution is a unified cross-modality feature disentangling
approach for multi-domain image translation and multiple organ segmentation.
Using CT as the labeled source domain, our approach learns to segment
multi-modal (T1-weighted and T2-weighted) MRI for which no labeled data are
available. Our approach uses a
variational auto-encoder (VAE) to disentangle the image content from style. The
VAE constrains the style feature encoding to match a universal prior (Gaussian)
that is assumed to span the styles of all the source and target modalities. The
extracted image style is converted into a latent style-scaling code, which
modulates the generator to produce multi-modality images from the image
content features according to the target-domain code. Finally, we introduce a
joint distribution matching discriminator that combines the translated images
with task-relevant segmentation probability maps to further constrain and
regularize image-to-image (I2I) translations. We performed extensive
comparisons to multiple state-of-the-art I2I translation and segmentation
methods. Our approach resulted in the lowest average multi-domain image
reconstruction error of 1.34$\pm$0.04. Our approach produced an average Dice
similarity coefficient (DSC) of 0.85 for T1w and 0.90 for T2w MRI for
multi-organ segmentation, which was highly comparable to a fully supervised MRI
multi-organ segmentation network (DSC of 0.86 for T1w and 0.90 for T2w MRI).
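The abstract describes three mechanisms: a VAE that separates image content from style while constraining every domain's style code to a single Gaussian prior, a generator modulated by a latent style-scaling code computed from the style and target-domain codes, and a discriminator that scores translated images jointly with their segmentation probability maps. As a rough illustration only, here is a minimal PyTorch sketch of those ideas; the module sizes, the channel-scaling modulation, and all names are assumptions, not the authors' implementation.

```python
# Hypothetical sketch, not the authors' code: module sizes, names, and the
# channel-scaling modulation are all assumptions made for illustration.
import torch
import torch.nn as nn

class DisentanglingEncoder(nn.Module):
    """Splits an image into a spatial content map and a VAE-style style code."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.content = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        self.style = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.mu = nn.Linear(64, style_dim)      # posterior mean
        self.logvar = nn.Linear(64, style_dim)  # posterior log-variance

    def forward(self, x):
        h = self.style(x)
        mu, logvar = self.mu(h), self.logvar(h)
        style = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.content(x), style, mu, logvar

def kl_to_standard_normal(mu, logvar):
    # KL(q(style|x) || N(0, I)): one universal Gaussian prior assumed to
    # span the styles of all source and target modalities, per the abstract.
    return 0.5 * torch.mean(mu.pow(2) + logvar.exp() - 1.0 - logvar)

class ModulatedGenerator(nn.Module):
    """Decodes content features whose channels are scaled by a latent code
    derived from the style vector and a one-hot target-domain code."""
    def __init__(self, style_dim=8, n_domains=3):
        super().__init__()
        self.to_scale = nn.Linear(style_dim + n_domains, 64)  # style-scaling code
        self.decode = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1), nn.Tanh())

    def forward(self, content, style, domain_code):
        gamma = self.to_scale(torch.cat([style, domain_code], dim=1))
        return self.decode(content * gamma[:, :, None, None])

class JointDiscriminator(nn.Module):
    """Scores the joint distribution of an image and its segmentation
    probability map by discriminating their channel-wise concatenation."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + n_classes, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1))

    def forward(self, image, prob_map):
        return self.net(torch.cat([image, prob_map], dim=1))
```

Under these assumptions, a CT-to-MRI translation step would be `gen(content, style, mri_code)`, with `kl_to_standard_normal` and the joint discriminator's adversarial loss added to the cycle and segmentation losses described in the paper.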
Related papers
- I2I-Galip: Unsupervised Medical Image Translation Using Generative Adversarial CLIP [30.506544165999564]
Unpaired image-to-image translation is a challenging task due to the absence of paired examples.
We propose a new image-to-image translation framework named Image-to-Image-Generative-Adversarial-CLIP (I2I-Galip).
arXiv Detail & Related papers (2024-09-19T01:44:50Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- MS-MT: Multi-Scale Mean Teacher with Contrastive Unpaired Translation for Cross-Modality Vestibular Schwannoma and Cochlea Segmentation [11.100048696665496]
Unsupervised domain adaptation (UDA) methods have achieved promising cross-modality segmentation performance.
We propose a multi-scale self-ensembling based UDA framework for automatic segmentation of two key brain structures.
Our method demonstrates promising segmentation performance, with mean Dice scores of 83.8% and 81.4% for the two structures.
arXiv Detail & Related papers (2023-03-28T08:55:00Z)
- M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$^{2}$SNet) to perform diverse segmentation tasks on medical images.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets covering four different medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Multi-domain Unsupervised Image-to-Image Translation with Appearance Adaptive Convolution [62.4972011636884]
We propose a novel multi-domain unsupervised image-to-image translation (MDUIT) framework.
We exploit the decomposed content feature and appearance adaptive convolution to translate an image into a target appearance.
We show that the proposed method produces visually diverse and plausible results in multiple domains compared to the state-of-the-art methods.
arXiv Detail & Related papers (2022-02-06T14:12:34Z)
- Unsupervised Cross-Modality Domain Adaptation for Segmenting Vestibular Schwannoma and Cochlea with Data Augmentation and Model Ensemble [4.942327155020771]
In this paper, we propose an unsupervised learning framework to segment the vestibular schwannoma and the cochlea.
Our framework leverages information from contrast-enhanced T1-weighted (ceT1-w) MRIs and their labels, and produces segmentations for T2-weighted MRIs without any labels in the target domain.
Our method is easy to build and produces promising segmentations, with mean Dice scores of 0.7930 and 0.7432 for the VS and cochlea, respectively, on the validation set.
arXiv Detail & Related papers (2021-09-24T20:10:05Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- ICMSC: Intra- and Cross-modality Semantic Consistency for Unsupervised Domain Adaptation on Hip Joint Bone Segmentation [1.4148874598036136]
We propose intra- and cross-modality semantic consistency (ICMSC) for UDA.
Our proposed method achieves an average Dice score of 81.61% on the acetabulum and 88.16% on the proximal femur.
Without UDA, a model trained on CT for hip joint bone segmentation does not transfer to MRI, yielding a Dice score of almost zero.
arXiv Detail & Related papers (2020-12-23T09:58:38Z)
- PSIGAN: Joint probabilistic segmentation and image distribution matching for unpaired cross-modality adaptation based MRI segmentation [4.573421102994323]
We develop a new joint probabilistic segmentation and image distribution matching generative adversarial network (PSIGAN).
Our UDA approach models the co-dependency between images and their segmentation as a joint probability distribution.
Our method achieved an overall average DSC of 0.87 on T1w and 0.90 on T2w for the abdominal organs.
arXiv Detail & Related papers (2020-07-18T16:23:02Z)
- GMM-UNIT: Unsupervised Multi-Domain and Multi-Modal Image-to-Image Translation via Attribute Gaussian Mixture Modeling [66.50914391679375]
Unsupervised image-to-image translation (UNIT) aims at learning a mapping between several visual domains by using unpaired training images.
Recent studies have shown remarkable success for multiple domains, but they suffer from two main limitations.
We propose a method named GMM-UNIT, which is based on a content-attribute disentangled representation where the space is fitted with a GMM.
arXiv Detail & Related papers (2020-03-15T10:18:56Z)