Unsupervised domain adaptation for cross-modality liver segmentation via
joint adversarial learning and self-learning
- URL: http://arxiv.org/abs/2109.05664v1
- Date: Mon, 13 Sep 2021 01:46:28 GMT
- Title: Unsupervised domain adaptation for cross-modality liver segmentation via
joint adversarial learning and self-learning
- Authors: Jin Hong, Simon Chun Ho Yu, Weitian Chen
- Abstract summary: Liver segmentation on images acquired using computed tomography (CT) and magnetic resonance imaging (MRI) plays an important role in clinical management of liver diseases.
In this work, we report a novel unsupervised domain adaptation framework for cross-modality liver segmentation via joint adversarial learning and self-learning.
- Score: 2.309675169959214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Liver segmentation on images acquired using computed tomography (CT) and
magnetic resonance imaging (MRI) plays an important role in clinical management
of liver diseases. Compared to MRI, CT images of liver are more abundant and
readily available. However, MRI can provide richer quantitative information of
the liver compared to CT. Thus, it is desirable to achieve unsupervised domain
adaptation for transferring the learned knowledge from the source domain
containing labeled CT images to the target domain containing unlabeled MR
images. In this work, we report a novel unsupervised domain adaptation
framework for cross-modality liver segmentation via joint adversarial learning
and self-learning. We propose joint semantic-aware and shape-entropy-aware
adversarial learning with a post-situ identification scheme to implicitly align
the distributions of task-related features extracted from the target domain
with those from the source domain. In the proposed framework, a network is
first trained with these two adversarial losses in an unsupervised manner; a
mean completer then generates pseudo-labels used to train the next network (the
desired model). Additionally, semantic-aware
adversarial learning and two self-learning methods, including pixel-adaptive
mask refinement and student-to-partner learning, are proposed to train the
desired model. To improve the robustness of the desired model, a low-signal
augmentation function is proposed to transform MR images into inputs of the
desired model to handle hard samples. Using public data sets, our experiments
demonstrate that the proposed unsupervised domain adaptation framework
outperforms four supervised learning methods, achieving a Dice score of
0.912 ± 0.037 (mean ± standard deviation).
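The self-training stage described above can be sketched in a few lines. This is a minimal, hedged illustration only: the abstract does not specify the mean completer's update rule, the pseudo-label confidence criterion, or the exact low-signal transform, so the EMA averaging, the confidence threshold, and the intensity-scaling function below are all assumptions introduced for illustration.

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Mean-teacher-style weight averaging. The paper's 'mean completer'
    is assumed here to follow this EMA scheme (an assumption)."""
    return {k: alpha * teacher_w[k] + (1 - alpha) * student_w[k]
            for k in teacher_w}

def low_signal_augment(img, gain=0.5):
    """Hypothetical low-signal augmentation: scale intensities down to
    mimic low-signal MR inputs (the paper's exact transform is not given)."""
    return np.clip(img * gain, 0.0, 1.0)

def pseudo_labels(teacher_logits, threshold=0.9):
    """Keep only confident teacher predictions as pseudo-labels for the
    binary liver mask; uncertain pixels are marked -1 and would be
    ignored in the student's supervised loss."""
    probs = 1.0 / (1.0 + np.exp(-teacher_logits))        # sigmoid
    labels = (probs >= 0.5).astype(np.int64)             # hard labels
    confident = np.maximum(probs, 1.0 - probs) >= threshold
    labels[~confident] = -1                              # ignore index
    return labels
```

In a full pipeline, the teacher weights would be the EMA of successive student checkpoints, and the student (the desired model) would be trained on `low_signal_augment`-ed MR images against the teacher's confident pseudo-labels.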
Related papers
- Cross-model Mutual Learning for Exemplar-based Medical Image Segmentation [25.874281336821685]
We introduce CMEMS, a novel Cross-model Mutual learning framework for Exemplar-based Medical image Segmentation.
arXiv Detail & Related papers (2024-04-18T00:18:07Z) - LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical
Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z) - MSCDA: Multi-level Semantic-guided Contrast Improves Unsupervised Domain
Adaptation for Breast MRI Segmentation in Small Datasets [5.272836235045653]
We propose a novel Multi-level Semantic-guided Contrastive Domain Adaptation framework.
Our approach incorporates self-training with contrastive learning to align feature representations between domains.
In particular, we extend the contrastive loss by incorporating pixel-to-pixel, pixel-to-centroid, and centroid-to-centroid contrasts.
arXiv Detail & Related papers (2023-01-04T19:16:55Z) - Mixed-UNet: Refined Class Activation Mapping for Weakly-Supervised
Semantic Segmentation with Multi-scale Inference [28.409679398886304]
We develop a novel model named Mixed-UNet, which has two parallel branches in the decoding phase.
We evaluate the designed Mixed-UNet against several prevalent deep learning-based segmentation approaches on our dataset collected from the local hospital and public datasets.
arXiv Detail & Related papers (2022-05-06T08:37:02Z) - Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z) - Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain
Adaptation [9.659642285903418]
Cross-modality synthesis of medical images can reduce the costly annotation burden on radiologists.
We present a novel approach for image-to-image translation in medical images, capable of supervised or unsupervised (unpaired image data) setups.
arXiv Detail & Related papers (2021-03-05T16:22:31Z) - A Multi-Stage Attentive Transfer Learning Framework for Improving
COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - Improved Slice-wise Tumour Detection in Brain MRIs by Computing
Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z) - Unsupervised Bidirectional Cross-Modality Adaptation via Deeply
Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework, named as Synergistic Image and Feature Alignment (SIFA)
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.