Generalizable Medical Image Segmentation via Random Amplitude Mixup and
Domain-Specific Image Restoration
- URL: http://arxiv.org/abs/2208.03901v1
- Date: Mon, 8 Aug 2022 03:56:20 GMT
- Title: Generalizable Medical Image Segmentation via Random Amplitude Mixup and
Domain-Specific Image Restoration
- Authors: Ziqi Zhou, Lei Qi, Yinghuan Shi
- Abstract summary: We present a novel generalizable medical image segmentation method.
To be specific, we design our approach as a multi-task paradigm by combining the segmentation model with a self-supervision domain-specific image restoration module.
We demonstrate the performance of our method on two public generalizable segmentation benchmarks in medical images.
- Score: 17.507951655445652
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For medical image analysis, segmentation models trained on one or several
domains lack generalization ability to unseen domains due to discrepancies
between different data acquisition policies. We argue that the degeneration in
segmentation performance is mainly attributed to overfitting to source domains
and domain shift. To this end, we present a novel generalizable medical image
segmentation method. To be specific, we design our approach as a multi-task
paradigm by combining the segmentation model with a self-supervision
domain-specific image restoration (DSIR) module for model regularization. We
also design a random amplitude mixup (RAM) module, which incorporates low-level
frequency information of different domain images to synthesize new images. To
guide our model to be resistant to domain shift, we introduce a semantic
consistency loss. We demonstrate the performance of our method on two public
generalizable medical image segmentation benchmarks, which validates that our
method achieves state-of-the-art performance.
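The RAM module described in the abstract follows the familiar Fourier-domain recipe of mixing the low-frequency amplitude spectra of images from different source domains while keeping the phase of the content image. Below is a minimal NumPy sketch of that idea; the window fraction `beta`, the uniform sampling of the mixing ratio, and the function name are illustrative assumptions rather than the authors' released implementation.

```python
import numpy as np

def random_amplitude_mixup(img_a, img_b, beta=0.1, lam=None):
    """Hedged sketch of a RAM-style augmentation: mix the low-frequency
    amplitude spectra of two images (e.g. from different source domains)
    while keeping the phase of img_a, so the synthesized image keeps the
    anatomy of img_a but borrows appearance statistics from img_b.

    img_a, img_b : 2-D numpy arrays of the same shape (H, W).
    beta         : fraction of the spectrum (around the center) to mix;
                   the exact window size is an assumption, not the paper's value.
    lam          : mixing ratio; drawn uniformly at random if None.
    """
    if lam is None:
        lam = np.random.uniform(0.0, 1.0)

    fft_a = np.fft.fftshift(np.fft.fft2(img_a))
    fft_b = np.fft.fftshift(np.fft.fft2(img_b))

    amp_a, pha_a = np.abs(fft_a), np.angle(fft_a)
    amp_b = np.abs(fft_b)

    # Low-frequency square window centered on the (shifted) spectrum.
    h, w = img_a.shape
    b_h, b_w = int(h * beta) // 2, int(w * beta) // 2
    c_h, c_w = h // 2, w // 2
    sl_h = slice(c_h - b_h, c_h + b_h + 1)
    sl_w = slice(c_w - b_w, c_w + b_w + 1)

    # Convex combination of the low-frequency amplitudes.
    amp_mix = amp_a.copy()
    amp_mix[sl_h, sl_w] = lam * amp_a[sl_h, sl_w] + (1.0 - lam) * amp_b[sl_h, sl_w]

    # Recombine the mixed amplitude with the original phase and invert.
    fft_mix = np.fft.ifftshift(amp_mix * np.exp(1j * pha_a))
    return np.real(np.fft.ifft2(fft_mix))
```

In the paper's multi-task setup, such a synthesized image is fed to the segmentation network, while the DSIR module is trained to restore the original source image from it as a self-supervised regularizer; the semantic consistency loss (presumably enforced between predictions on the original and RAM-augmented views) then discourages the segmenter from latching onto domain-specific appearance.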
Related papers
- Prompting Segment Anything Model with Domain-Adaptive Prototype for Generalizable Medical Image Segmentation [49.5901368256326]
We propose a novel Domain-Adaptive Prompt framework for fine-tuning the Segment Anything Model (termed as DAPSAM) in segmenting medical images.
Our DAPSAM achieves state-of-the-art performance on two medical image segmentation tasks with different modalities.
arXiv Detail & Related papers (2024-09-19T07:28:33Z)
- Grounding Stylistic Domain Generalization with Quantitative Domain Shift Measures and Synthetic Scene Images [63.58800688320182]
Domain Generalization is a challenging task in machine learning.
Current methodology lacks a quantitative understanding of stylistic domain shifts.
We introduce a new DG paradigm to address these risks.
arXiv Detail & Related papers (2024-05-24T22:13:31Z)
- CDDSA: Contrastive Domain Disentanglement and Style Augmentation for Generalizable Medical Image Segmentation [38.44458104455557]
We propose an efficient Contrastive Domain Disentanglement and Style Augmentation (CDDSA) framework for generalizable medical image segmentation.
First, a disentangle network is proposed to decompose an image into a domain-invariant anatomical representation and a domain-specific style code.
Second, to achieve better disentanglement, a contrastive loss is proposed to encourage the style codes from the same domain and different domains to be compact and divergent.
arXiv Detail & Related papers (2022-11-22T08:25:35Z)
- Single-domain Generalization in Medical Image Segmentation via Test-time Adaptation from Shape Dictionary [64.5632303184502]
Domain generalization typically requires data from multiple source domains for model learning.
This paper studies the important yet challenging single domain generalization problem, in which a model is learned under the worst-case scenario with only one source domain to directly generalize to different unseen target domains.
We present a novel approach to address this problem in medical image segmentation, which extracts and integrates semantic shape prior information of segmentation that is invariant across domains.
arXiv Detail & Related papers (2022-06-29T08:46:27Z)
- Contrastive Domain Disentanglement for Generalizable Medical Image Segmentation [12.863227646939563]
We propose Contrastive Disentangle Domain (CDD) network for generalizable medical image segmentation.
We first introduce a disentangle network to decompose medical images into an anatomical representation factor and a modality representation factor.
We then propose a domain augmentation strategy that can randomly generate new domains for model generalization training.
arXiv Detail & Related papers (2022-05-13T10:32:41Z)
- Generalizable Cross-modality Medical Image Segmentation via Style Augmentation and Dual Normalization [29.470385509955687]
We propose a novel dual-normalization module by leveraging the augmented source-similar and source-dissimilar images (a minimal sketch of this idea appears after this list).
Our method outperforms other state-of-the-art domain generalization methods.
arXiv Detail & Related papers (2021-12-21T13:18:46Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Domain Generalizer: A Few-shot Meta Learning Framework for Domain Generalization in Medical Imaging [23.414905586808874]
We adapt a domain generalization method based on a model-agnostic meta-learning framework to biomedical imaging.
The method learns a domain-agnostic feature representation to improve generalization of models to the unseen test distribution.
Our results suggest that the method could help generalize models across different medical centers, image acquisition protocols, anatomies, regions within a given scan, healthy and diseased populations, and varied imaging modalities.
arXiv Detail & Related papers (2020-08-18T03:35:56Z)
- Shape-aware Meta-learning for Generalizing Prostate MRI Segmentation to Unseen Domains [68.73614619875814]
We present a novel shape-aware meta-learning scheme to improve the model generalization in prostate MRI segmentation.
Experimental results show that our approach outperforms many state-of-the-art generalization methods consistently across all six settings of unseen domains.
arXiv Detail & Related papers (2020-07-04T07:56:02Z)
- Generalizable Model-agnostic Semantic Segmentation via Target-specific Normalization [24.14272032117714]
We propose a novel domain generalization framework for the generalizable semantic segmentation task.
We exploit model-agnostic learning to simulate the domain shift problem.
Considering the data-distribution discrepancy between seen source and unseen target domains, we develop the target-specific normalization scheme.
arXiv Detail & Related papers (2020-03-27T09:25:19Z)
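As a side note on the Style Augmentation and Dual Normalization entry referenced earlier in this list, the dual-normalization idea amounts to keeping separate normalization statistics for source-similar and source-dissimilar (style-augmented) views. The following PyTorch sketch illustrates one plausible reading of that design; the test-time routing rule and the class name are assumptions made for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class DualNorm2d(nn.Module):
    """Hedged sketch of a dual-normalization block: two BatchNorm branches,
    one intended for source-similar inputs and one for source-dissimilar
    (style-augmented) inputs. The routing rule below (pick the branch whose
    running mean is closer to the batch statistics at test time) is an
    assumption, not necessarily the paper's selection criterion."""

    def __init__(self, num_features):
        super().__init__()
        self.bn_similar = nn.BatchNorm2d(num_features)
        self.bn_dissimilar = nn.BatchNorm2d(num_features)

    def forward(self, x, branch=None):
        if self.training:
            # During training the caller states which augmentation produced x.
            assert branch in ("similar", "dissimilar")
            return self.bn_similar(x) if branch == "similar" else self.bn_dissimilar(x)

        # Test time: route by the distance between the batch mean and each
        # branch's running mean (a simple stand-in for the selection strategy).
        batch_mean = x.mean(dim=(0, 2, 3))
        d_sim = torch.norm(batch_mean - self.bn_similar.running_mean)
        d_dis = torch.norm(batch_mean - self.bn_dissimilar.running_mean)
        return self.bn_similar(x) if d_sim <= d_dis else self.bn_dissimilar(x)
```

At training time the caller indicates which augmentation produced the batch; at test time the block falls back to whichever branch's running statistics look closer to the incoming data.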
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.