Mind The Gap: Alleviating Local Imbalance for Unsupervised
Cross-Modality Medical Image Segmentation
- URL: http://arxiv.org/abs/2205.11888v1
- Date: Tue, 24 May 2022 08:16:58 GMT
- Title: Mind The Gap: Alleviating Local Imbalance for Unsupervised
Cross-Modality Medical Image Segmentation
- Authors: Zixian Su, Kai Yao, Xi Yang, Qiufeng Wang, Yuyao Yan and Kaizhu Huang
- Abstract summary: Cross-modality medical image adaptation aims to alleviate the severe domain gap between different imaging modalities.
One common attempt is to enforce the global alignment between two domains.
We propose a novel strategy to alleviate the domain gap imbalance considering the characteristics of medical images.
- Score: 18.75307816987653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised cross-modality medical image adaptation aims to alleviate the
severe domain gap between different imaging modalities without using the target
domain label. A key in this campaign relies upon aligning the distributions of
source and target domains. One common approach is to enforce global alignment
between the two domains; however, this ignores the local-imbalance domain gap
problem, i.e., some local features with a larger domain gap are harder to
transfer. Recently, some methods have focused alignment on local regions to
improve the efficiency of model learning, but this operation can sacrifice
critical contextual information. To tackle this limitation, we propose a novel
strategy, Global-Local Union Alignment, which alleviates the domain gap
imbalance by considering the characteristics of medical images.
Specifically, a feature-disentanglement style-transfer module first synthesizes
the target-like source-content images to reduce the global domain gap. Then, a
local feature mask is integrated to reduce the 'inter-gap' for local features
by prioritizing those discriminative features with larger domain gap. This
combination of global and local alignment can precisely localize the crucial
regions in segmentation target while preserving the overall semantic
consistency. We conduct a series of experiments on two cross-modality
adaptation tasks, i.e., cardiac substructure and abdominal multi-organ
segmentation. Experimental results indicate that our method exceeds the SOTA
methods by 3.92% Dice score in MRI-to-CT cardiac segmentation and by 3.33% in
the reverse direction.
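The local feature mask described above can be illustrated with a toy sketch. This is not the paper's actual module; it only demonstrates, under simple assumptions, the idea of prioritizing local regions whose source/target features differ most, so that harder-to-transfer regions receive larger alignment weight.

```python
import numpy as np

def gap_priority_mask(src_feats, tgt_feats, tau=1.0):
    """Toy gap-aware weighting (illustrative, not the paper's module).

    src_feats, tgt_feats: arrays of shape (num_regions, feat_dim),
    one feature vector per local region. Returns one weight per
    region; regions with a larger domain gap get larger weights,
    and the weights sum to 1.
    """
    # Per-region domain gap, measured here as a simple L2 distance.
    gaps = np.linalg.norm(src_feats - tgt_feats, axis=1)
    # Softmax over regions: larger gap -> larger alignment weight.
    scaled = gaps / tau
    w = np.exp(scaled - scaled.max())
    return w / w.sum()

def weighted_alignment_loss(src_feats, tgt_feats):
    """Local alignment loss in which larger-gap regions dominate."""
    w = gap_priority_mask(src_feats, tgt_feats)
    per_region = np.sum((src_feats - tgt_feats) ** 2, axis=1)
    return float(np.sum(w * per_region))

# Synthetic example: 8 regions, one with a much larger domain gap.
rng = np.random.default_rng(0)
src = rng.normal(size=(8, 16))
tgt = src + rng.normal(scale=0.1, size=(8, 16))
tgt[3] += 2.0  # region 3 has a much larger gap
w = gap_priority_mask(src, tgt)
print(w.argmax())  # region 3 receives the highest alignment weight
```

In the actual method, such weights would modulate a feature-level adversarial or distance loss inside the segmentation network, after the style-transfer module has reduced the global gap.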
Related papers
- BTMuda: A Bi-level Multi-source unsupervised domain adaptation framework for breast cancer diagnosis [16.016147407064654]
We construct a Bi-level Multi-source unsupervised domain adaptation method called BTMuda for breast cancer diagnosis.
Our method addresses domain shift by dividing it into two levels: intra-domain and inter-domain.
Our method outperforms state-of-the-art methods in experiments on three public mammographic datasets.
arXiv Detail & Related papers (2024-08-30T07:25:53Z)
- Unsupervised Domain Adaptation with Variational Approximation for Cardiac Segmentation [15.2292571922932]
Unsupervised domain adaptation is useful in medical image segmentation.
We propose a new framework, where the latent features of both domains are driven towards a common and parameterized variational form.
This is achieved by two networks based on variational auto-encoders (VAEs) and a regularization for this variational approximation.
arXiv Detail & Related papers (2021-06-16T13:00:39Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- MLAN: Multi-Level Adversarial Network for Domain Adaptive Semantic Segmentation [32.77436219094282]
This paper presents a novel multi-level adversarial network (MLAN) that aims to optimally address inter-domain inconsistency at both the global image level and the local region level.
MLAN has two novel designs, namely region-level adversarial learning (RL-AL) and co-regularized adversarial learning (CR-AL).
Extensive experiments show that MLAN outperforms the state-of-the-art with a large margin consistently across multiple datasets.
arXiv Detail & Related papers (2021-03-24T05:13:23Z)
- Margin Preserving Self-paced Contrastive Learning Towards Domain Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin preserving self-paced contrastive Learning model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z)
- Adapt Everywhere: Unsupervised Adaptation of Point-Clouds and Entropy Minimisation for Multi-modal Cardiac Image Segmentation [10.417009344120917]
We present a novel UDA method for multi-modal cardiac image segmentation.
The proposed method is based on adversarial learning and adapts network features between source and target domain in different spaces.
We validated our method on two cardiac datasets by adapting from the annotated source domain to the unannotated target domain.
arXiv Detail & Related papers (2021-03-15T08:59:44Z)
- Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation [74.3349233035632]
Existing techniques for adapting semantic segmentation networks across source and target domains within deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or within each estimated category.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts the adaptation performance in semantic segmentation, outperforming the state-of-the-arts on various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating the pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
- Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation Method for Semantic Segmentation [97.8552697905657]
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
arXiv Detail & Related papers (2020-04-02T03:25:05Z)
- Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework, named Synergistic Image and Feature Alignment (SIFA).
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.