Generalizable Cross-modality Medical Image Segmentation via Style Augmentation and Dual Normalization
- URL: http://arxiv.org/abs/2112.11177v1
- Date: Tue, 21 Dec 2021 13:18:46 GMT
- Title: Generalizable Cross-modality Medical Image Segmentation via Style Augmentation and Dual Normalization
- Authors: Ziqi Zhou, Lei Qi, Xin Yang, Dong Ni, Yinghuan Shi
- Abstract summary: We propose a novel dual-normalization module by leveraging the augmented source-similar and source-dissimilar images.
Our method outperforms other state-of-the-art domain generalization methods.
- Score: 29.470385509955687
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For medical image segmentation, imagine a model trained only on MR
images from a source domain: how well can it directly segment CT images in a
target domain? This setting, namely generalizable cross-modality segmentation,
has clear clinical potential but is much more challenging than related settings
such as domain adaptation. To achieve this goal, we propose a novel
dual-normalization module that leverages augmented source-similar and
source-dissimilar images for generalizable segmentation. Specifically, given a
single source domain, we first apply a nonlinear transformation to generate
source-similar and source-dissimilar augmented images, simulating the
appearance changes that may occur in unseen target domains. Then, to fully
exploit these two types of augmentation, our dual-normalization model employs a
shared backbone with independent batch normalization layers, so each
augmentation type is normalized separately. At test time, a style-based
selection scheme automatically chooses the appropriate normalization path.
Extensive experiments on three publicly available datasets, i.e., BraTS,
Cross-Modality Cardiac, and Abdominal Multi-Organ, demonstrate that our method
outperforms other state-of-the-art domain generalization methods.
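The dual-normalization idea in the abstract is easy to prototype. Below is a minimal, hedged PyTorch sketch written from the abstract alone, not the authors' released code: a convolutional block shares its weights across the two augmentation types but keeps a separate batch-normalization branch for each, a toy gamma-style intensity mapping stands in for the nonlinear augmentation, and a simple statistics-matching heuristic stands in for the style-based test-time path selection. All names (DualBNConv, select_path, nonlinear_augment) and the selection rule are illustrative assumptions.

```python
# Minimal sketch of dual normalization: shared conv weights, per-style BN.
import torch
import torch.nn as nn


def nonlinear_augment(x: torch.Tensor, dissimilar: bool = False) -> torch.Tensor:
    """Toy stand-in for the paper's nonlinear intensity transformation.

    Source-similar: a random monotone (gamma-like) intensity mapping.
    Source-dissimilar: the same mapping on inverted intensities, mimicking a
    large appearance shift. Assumes x is normalized to [0, 1].
    """
    if dissimilar:
        x = 1.0 - x
    gamma = torch.empty(1).uniform_(0.5, 2.0).item()
    return x.clamp(0.0, 1.0) ** gamma


class DualBNConv(nn.Module):
    """Conv block with shared weights but a separate BN layer per style."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # bns[0]: source-similar path, bns[1]: source-dissimilar path.
        self.bns = nn.ModuleList([nn.BatchNorm2d(out_ch) for _ in range(2)])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor, path: int) -> torch.Tensor:
        return self.act(self.bns[path](self.conv(x)))


def select_path(x: torch.Tensor, block: DualBNConv) -> int:
    """Pick the BN branch whose running statistics best match a test image.

    This is only an assumed instantiation of the "style-based selection
    scheme": compare the instance mean/std of the features against each
    branch's running mean/std and take the closer branch.
    """
    with torch.no_grad():
        feat = block.conv(x)
        mu = feat.mean(dim=(0, 2, 3))
        sigma = feat.std(dim=(0, 2, 3))
        dists = [
            (mu - bn.running_mean).abs().mean()
            + (sigma - bn.running_var.sqrt()).abs().mean()
            for bn in block.bns
        ]
        return int(torch.argmin(torch.stack(dists)))


# Toy usage: route each augmented view through its own BN branch in training,
# then let the heuristic choose a branch for an unseen test image.
block = DualBNConv(1, 16)
source = torch.rand(4, 1, 64, 64)  # fake single-source MR slices in [0, 1]
_ = block(nonlinear_augment(source, dissimilar=False), path=0)
_ = block(nonlinear_augment(source, dissimilar=True), path=1)

block.eval()
test_img = torch.rand(1, 1, 64, 64)
print("selected path:", select_path(test_img, block))
```

In this reading, each BN branch accumulates running statistics for one appearance distribution during training, which is what allows a test image with an unfamiliar style to be routed to the closer branch.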
Related papers
- Generalizable Single-Source Cross-modality Medical Image Segmentation via Invariant Causal Mechanisms [16.699205051836657]
Single-source domain generalization aims to learn a model from a single source domain that can generalize well on unseen target domains.
This is an important task in computer vision, particularly relevant to medical imaging where domain shifts are common.
We combine causality-inspired theoretical insights on learning domain-invariant representations with recent advancements in diffusion-based augmentation to improve generalization across diverse imaging modalities.
arXiv Detail & Related papers (2024-11-07T22:35:17Z)
- DG-TTA: Out-of-domain medical image segmentation through Domain Generalization and Test-Time Adaptation [43.842694540544194]
We propose to combine domain generalization and test-time adaptation to create a highly effective approach for reusing pre-trained models in unseen target domains.
We demonstrate that our method, combined with pre-trained whole-body CT models, can effectively segment MR images with high accuracy.
arXiv Detail & Related papers (2023-12-11T10:26:21Z)
- A Simple and Robust Framework for Cross-Modality Medical Image Segmentation applied to Vision Transformers [0.0]
We propose a simple framework to achieve fair image segmentation of multiple modalities using a single conditional model.
We show that our framework outperforms other cross-modality segmentation methods on the Multi-Modality Whole Heart Conditional Challenge.
arXiv Detail & Related papers (2023-10-09T09:51:44Z)
- Federated Domain Generalization for Image Recognition via Cross-Client Style Transfer [60.70102634957392]
Domain generalization (DG) has been a hot topic in image recognition, with the goal of training a general model that performs well on unseen domains.
In this paper, we propose a novel domain generalization method for image recognition through cross-client style transfer (CCST) without exchanging data samples.
Our method outperforms recent SOTA DG methods on two DG benchmarks (PACS, OfficeHome) and a large-scale medical image dataset (Camelyon17) in the FL setting.
arXiv Detail & Related papers (2022-10-03T13:15:55Z)
- Generalizable Medical Image Segmentation via Random Amplitude Mixup and Domain-Specific Image Restoration [17.507951655445652]
We present a novel generalizable medical image segmentation method.
To be specific, we design our approach as a multi-task paradigm that combines the segmentation model with a self-supervised, domain-specific image restoration module (a generic sketch of the amplitude-mixup augmentation appears after this list).
We demonstrate the performance of our method on two public generalizable segmentation benchmarks in medical images.
arXiv Detail & Related papers (2022-08-08T03:56:20Z)
- Single-domain Generalization in Medical Image Segmentation via Test-time Adaptation from Shape Dictionary [64.5632303184502]
Domain generalization typically requires data from multiple source domains for model learning.
This paper studies the important yet challenging single domain generalization problem, in which a model is learned under the worst-case scenario with only one source domain to directly generalize to different unseen target domains.
We present a novel approach to this problem in medical image segmentation, which extracts and integrates semantic shape prior information that is invariant across domains.
arXiv Detail & Related papers (2022-06-29T08:46:27Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- DoFE: Domain-oriented Feature Embedding for Generalizable Fundus Image Segmentation on Unseen Datasets [96.92018649136217]
We present a novel Domain-oriented Feature Embedding (DoFE) framework to improve the generalization ability of CNNs on unseen target domains.
Our DoFE framework dynamically enriches the image features with additional domain prior knowledge learned from multi-source domains.
Our framework generates satisfying segmentation results on unseen datasets and surpasses other domain generalization and network regularization methods.
arXiv Detail & Related papers (2020-10-13T07:28:39Z)
- Realistic Image Normalization for Multi-Domain Segmentation [7.856339385917824]
This paper revisits the conventional image normalization approach by instead learning a common normalizing function across multiple datasets.
Jointly normalizing multiple datasets is shown to yield consistent normalized images as well as improved image segmentation.
Our method can also enhance data availability by increasing the number of samples available when learning from multiple imaging domains.
arXiv Detail & Related papers (2020-09-29T13:57:04Z)
- TriGAN: Image-to-Image Translation for Multi-Source Domain Adaptation [82.52514546441247]
We propose the first approach for Multi-Source Domain Adaptation (MSDA) based on Generative Adversarial Networks.
Our method is inspired by the observation that the appearance of a given image depends on three factors: the domain, the style and the content.
We test our approach using common MSDA benchmarks, showing that it outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-04-19T05:07:22Z)
- Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework named Synergistic Image and Feature Alignment (SIFA).
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)
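As promised in the Random Amplitude Mixup entry above, here is a hedged sketch of the general amplitude-mixup recipe: blend the Fourier amplitude spectra of two images while keeping the first image's phase, so that low-level appearance (contrast and texture statistics) changes while anatomical structure is preserved. This illustrates only the generic technique, with an assumed uniform mixing coefficient; it is not that paper's implementation.

```python
# Generic Fourier amplitude mixup: swap "style" (amplitude) while keeping
# "structure" (phase) of the first image.
import numpy as np


def random_amplitude_mixup(img_a, img_b, rng=None):
    """Return img_a restyled with a random blend of img_b's amplitude spectrum."""
    rng = rng or np.random.default_rng()
    fft_a = np.fft.fft2(img_a)
    fft_b = np.fft.fft2(img_b)
    amp_a, phase_a = np.abs(fft_a), np.angle(fft_a)
    amp_b = np.abs(fft_b)
    lam = rng.uniform(0.0, 1.0)                  # random mixing coefficient
    amp_mix = (1.0 - lam) * amp_a + lam * amp_b  # blended amplitude spectrum
    return np.real(np.fft.ifft2(amp_mix * np.exp(1j * phase_a)))


# Toy usage on two random "slices".
a = np.random.rand(128, 128).astype(np.float32)
b = np.random.rand(128, 128).astype(np.float32)
augmented = random_amplitude_mixup(a, b)
```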
This list is automatically generated from the titles and abstracts of the papers on this site.