Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2105.07715v1
- Date: Mon, 17 May 2021 10:11:45 GMT
- Authors: Kelei He, Wen Ji, Tao Zhou, Zhuoyuan Li, Jing Huo, Xin Zhang, Yang
Gao, Dinggang Shen, Bing Zhang, and Junfeng Zhang
- Abstract summary: In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
- Score: 61.01704175938995
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Accurate segmentation of brain tumors from multi-modal Magnetic Resonance
(MR) images is essential in brain tumor diagnosis and treatment. However, due
to the existence of domain shifts among different modalities, the performance
of a network trained on one modality degrades dramatically when it is applied
to another, e.g., trained on T1 images but tested on T2 images, a transfer that
is often required in clinical applications. This also prevents a network from
being trained on labeled data and then transferred to unlabeled data from a
different domain. To overcome this, unsupervised domain adaptation (UDA)
methods provide effective solutions to alleviate the domain shift between
labeled source data and unlabeled target data. In this paper, we propose a
novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA
scheme. Specifically, a bidirectional image synthesis and segmentation module
is proposed to segment the brain tumor using the intermediate data
distributions generated for the two domains, which includes an image-to-image
translator and a shared-weight segmentation network. Further, a
global-to-local consistency learning module is proposed to build robust
representation alignments in an integrated way. Extensive experiments on a
multi-modal brain MR benchmark dataset demonstrate that the proposed method
outperforms several state-of-the-art unsupervised domain adaptation methods by
a large margin, while a comprehensive ablation study validates the
effectiveness of each key component. The implementation code of our method will
be released at https://github.com/KeleiHe/BiGL.
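The bidirectional synthesis-and-segmentation idea in the abstract can be sketched in code. This is a toy illustration only, not the authors' released implementation: the translator and segmenter below are stand-ins (a fixed intensity shift and a threshold), and every name here is hypothetical. The point it shows is the data flow — translate an image across domains, segment both the original and the synthesized version with a shared-weight model, and penalize disagreement between the two predicted masks.

```python
def translate(image, direction):
    """Stand-in for an image-to-image translator between MR modalities.
    Here: a toy intensity shift so the example runs end to end."""
    shift = 0.5 if direction == "source_to_target" else -0.5
    return [[v + shift for v in row] for row in image]

def segment(image, threshold=0.6):
    """Stand-in for the shared-weight segmentation network:
    a simple intensity threshold producing a binary tumor mask."""
    return [[1 if v > threshold else 0 for v in row] for row in image]

def consistency_loss(mask_a, mask_b):
    """Consistency between two predicted masks, reduced here to the
    fraction of disagreeing pixels."""
    flat_a = [v for row in mask_a for v in row]
    flat_b = [v for row in mask_b for v in row]
    return sum(a != b for a, b in zip(flat_a, flat_b)) / len(flat_a)

def training_step(source_image):
    """One adaptation step on a source image: segment the original and
    its target-domain translation, then penalize disagreement."""
    synthesized = translate(source_image, "source_to_target")
    mask_src = segment(source_image)
    mask_syn = segment(synthesized)
    return consistency_loss(mask_src, mask_syn)

source = [[0.2, 0.7], [0.9, 0.1]]
loss = training_step(source)
```

In the actual framework this loss would be combined with the supervised segmentation loss on labeled source data and applied in both translation directions; the sketch keeps only the consistency term to make the structure visible.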
Related papers
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Target and Task specific Source-Free Domain Adaptive Image Segmentation [73.78898054277538]
We propose a two-stage approach for source-free domain adaptive image segmentation.
In the first stage, we focus on generating target-specific pseudo labels while suppressing high-entropy regions.
In the second stage, we focus on adapting the network for task-specific representation.
arXiv Detail & Related papers (2022-03-29T17:50:22Z)
- Unsupervised Domain Adaptation for Cross-Modality Retinal Vessel Segmentation via Disentangling Representation Style Transfer and Collaborative Consistency Learning [3.9562534927482704]
We propose DCDA, a novel cross-modality unsupervised domain adaptation framework for tasks with large domain shifts.
Our framework achieves Dice scores close to target-trained oracle both from OCTA to OCT and from OCT to OCTA, significantly outperforming other state-of-the-art methods.
arXiv Detail & Related papers (2022-01-13T07:03:16Z)
- Unsupervised Domain Adaptation with Semantic Consistency across Heterogeneous Modalities for MRI Prostate Lesion Segmentation [19.126306953075275]
We introduce two new loss functions that promote semantic consistency.
In particular, we address the challenge of enhancing performance on VERDICT-MRI, an advanced diffusion-weighted imaging technique.
arXiv Detail & Related papers (2021-09-19T17:33:26Z)
- Unsupervised Domain Adaptation with Variational Approximation for Cardiac Segmentation [15.2292571922932]
Unsupervised domain adaptation is useful in medical image segmentation.
We propose a new framework, where the latent features of both domains are driven towards a common and parameterized variational form.
This is achieved by two networks based on variational auto-encoders (VAEs) and a regularization for this variational approximation.
arXiv Detail & Related papers (2021-06-16T13:00:39Z)
- Adapt Everywhere: Unsupervised Adaptation of Point-Clouds and Entropy Minimisation for Multi-modal Cardiac Image Segmentation [10.417009344120917]
We present a novel UDA method for multi-modal cardiac image segmentation.
The proposed method is based on adversarial learning and adapts network features between source and target domain in different spaces.
We validated our method on two cardiac datasets by adapting from the annotated source domain to the unannotated target domain.
arXiv Detail & Related papers (2021-03-15T08:59:44Z)
- Shape-aware Meta-learning for Generalizing Prostate MRI Segmentation to Unseen Domains [68.73614619875814]
We present a novel shape-aware meta-learning scheme to improve the model generalization in prostate MRI segmentation.
Experimental results show that our approach outperforms many state-of-the-art generalization methods consistently across all six settings of unseen domains.
arXiv Detail & Related papers (2020-07-04T07:56:02Z)
- Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework, named Synergistic Image and Feature Alignment (SIFA).
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.