Contrastive Domain Disentanglement for Generalizable Medical Image
Segmentation
- URL: http://arxiv.org/abs/2205.06551v1
- Date: Fri, 13 May 2022 10:32:41 GMT
- Title: Contrastive Domain Disentanglement for Generalizable Medical Image
Segmentation
- Authors: Ran Gu, Jiangshan Lu, Jingyang Zhang, Wenhui Lei, Xiaofan Zhang,
Guotai Wang, Shaoting Zhang
- Abstract summary: We propose the Contrastive Domain Disentangle (CDD) network for generalizable medical image segmentation.
We first introduce a disentangle network to decompose medical images into an anatomical representation factor and a modality representation factor.
We then propose a domain augmentation strategy that can randomly generate new domains for model generalization training.
- Score: 12.863227646939563
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Efficiently utilizing discriminative features is crucial for convolutional
neural networks to achieve remarkable performance in medical image segmentation, and
it is also important for model generalization across multiple domains, where letting
the model recognize domain-specific and domain-invariant information among multi-site
datasets is a reasonable strategy for domain generalization. Unfortunately, most
recent disentangle networks are not directly adaptable to unseen-domain datasets
because the data distributions available for training are limited. To tackle this
deficiency, we propose the Contrastive Domain Disentangle (CDD) network for
generalizable medical image segmentation. We first introduce a disentangle network to
decompose medical images into an anatomical representation factor and a modality
representation factor. Then, a style contrastive loss is proposed to encourage
modality representations from the same domain to stay close to each other while
representations from different domains are pushed apart. Finally, we propose a domain
augmentation strategy that can randomly generate new domains for model generalization
training. Experimental results on multi-site fundus image datasets for optic cup and
disc segmentation show that CDD achieves good model generalization. Our proposed CDD
outperforms several state-of-the-art methods in domain generalizable segmentation.
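To make the style contrastive loss concrete, the rough sketch below pulls modality (style) codes from the same domain together and pushes codes from different domains apart. The supervised InfoNCE-style formulation, the temperature, and the batch layout are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def style_contrastive_loss(style_codes, domain_labels, temperature=0.1):
    """Pull style codes from the same domain together and push codes from
    different domains apart (supervised InfoNCE-style sketch).

    style_codes:   (N, D) modality/style representations from the encoder
    domain_labels: (N,)   integer domain id for each sample
    """
    z = F.normalize(style_codes, dim=1)            # unit-length codes
    sim = z @ z.t() / temperature                   # (N, N) cosine similarities
    n = z.size(0)

    # Positive pairs: same domain, excluding self-pairs.
    same_domain = domain_labels.unsqueeze(0) == domain_labels.unsqueeze(1)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = same_domain & ~eye

    # Log-softmax over all other samples, then average over the positives.
    sim = sim.masked_fill(eye, float('-inf'))       # never contrast with self
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_count
    return loss.mean()

# Toy usage: 8 style codes of dimension 16 drawn from 3 domains.
codes = torch.randn(8, 16)
domains = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
print(style_contrastive_loss(codes, domains).item())
```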
Related papers
- CDDSA: Contrastive Domain Disentanglement and Style Augmentation for
Generalizable Medical Image Segmentation [38.44458104455557]
We propose an efficient Contrastive Domain Disentanglement and Style Augmentation (CDDSA) framework for generalizable medical image segmentation.
First, a disentangle network is proposed to decompose an image into a domain-invariant anatomical representation and a domain-specific style code.
Second, to achieve better disentanglement, a contrastive loss is proposed to encourage style codes from the same domain to be compact and those from different domains to be divergent.
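The style augmentation side of CDDSA (and the domain augmentation in CDD above) relies on recombining a domain-invariant anatomical representation with style codes drawn from other domains to synthesize new-looking training images. The toy encoder/decoder below only illustrates that recombination idea; the architecture, style dimensionality, and the random interpolation step are placeholders rather than the published implementation.

```python
import torch
import torch.nn as nn

class TinyDisentangler(nn.Module):
    """Toy encoder/decoder pair: an image is split into an anatomical feature
    map and a global style code, and the decoder recombines them."""
    def __init__(self, channels=1, style_dim=8):
        super().__init__()
        self.anatomy_enc = nn.Conv2d(channels, 16, 3, padding=1)
        self.style_enc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, style_dim))
        self.decoder = nn.Conv2d(16 + style_dim, channels, 3, padding=1)

    def style_of(self, x):
        return self.style_enc(x)

    def forward(self, x, style):
        a = self.anatomy_enc(x)
        s = style[:, :, None, None].expand(-1, -1, a.size(2), a.size(3))
        return self.decoder(torch.cat([a, s], dim=1))

# Style augmentation sketch: keep one image's anatomy, but decode it with a
# style code interpolated toward an image from a different domain.
net = TinyDisentangler()
img_a, img_b = torch.randn(2, 1, 64, 64).chunk(2)   # pretend: two domains
alpha = torch.rand(1)                                # random mixing weight
mixed_style = alpha * net.style_of(img_a) + (1 - alpha) * net.style_of(img_b)
augmented = net(img_a, mixed_style)                  # anatomy of A, blended style
print(augmented.shape)
```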
arXiv Detail & Related papers (2022-11-22T08:25:35Z)
- Multi-Scale Multi-Target Domain Adaptation for Angle Closure
Classification [50.658613573816254]
We propose a novel Multi-scale Multi-target Domain Adversarial Network (M2DAN) for angle closure classification.
Based on these domain-invariant features at different scales, the deep model trained on the source domain is able to classify angle closure on multiple target domains.
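Domain-adversarial networks of this kind are commonly built around a gradient reversal layer, so the shared feature extractor is pushed toward domain-invariant features while a domain classifier tries to separate the domains. The sketch below shows only that generic mechanism; M2DAN's multi-scale branches and multi-target handling are not reproduced.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated (scaled) gradient on the backward
    pass, so the feature extractor is trained to fool the domain classifier."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

features = nn.Sequential(nn.Linear(32, 64), nn.ReLU())   # shared feature extractor
task_head = nn.Linear(64, 2)                              # e.g. angle-closure classifier
domain_head = nn.Linear(64, 3)                            # predicts the source domain

x = torch.randn(8, 32)
y_task = torch.randint(0, 2, (8,))
y_domain = torch.randint(0, 3, (8,))

f = features(x)
task_loss = nn.functional.cross_entropy(task_head(f), y_task)
domain_loss = nn.functional.cross_entropy(domain_head(GradReverse.apply(f, 1.0)), y_domain)
(task_loss + domain_loss).backward()   # reversed gradients flow into `features`
```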
arXiv Detail & Related papers (2022-08-25T15:27:55Z)
- Generalizable Medical Image Segmentation via Random Amplitude Mixup and
Domain-Specific Image Restoration [17.507951655445652]
We present a novel generalizable medical image segmentation method.
To be specific, we design our approach as a multi-task paradigm by combining the segmentation model with a self-supervised domain-specific image restoration module.
We demonstrate the performance of our method on two public generalizable segmentation benchmarks in medical images.
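Random amplitude mixup follows the Fourier-based augmentation recipe: blend the low-frequency amplitude spectra of two images from different domains while keeping the source image's phase, so anatomy is preserved but appearance shifts. The NumPy sketch below illustrates that general recipe; the mixing window and weights used in the paper are assumptions.

```python
import numpy as np

def amplitude_mixup(src, ref, alpha=0.5, band=0.1):
    """Blend the low-frequency amplitude spectrum of `src` with that of `ref`
    while keeping the phase of `src` (Fourier-style appearance transfer).

    src, ref: 2-D grayscale images of the same shape
    alpha:    mixing weight for the reference amplitude
    band:     fraction of the spectrum (around the centre) that is mixed
    """
    fft_src = np.fft.fftshift(np.fft.fft2(src))
    fft_ref = np.fft.fftshift(np.fft.fft2(ref))
    amp_src, phase_src = np.abs(fft_src), np.angle(fft_src)
    amp_ref = np.abs(fft_ref)

    h, w = src.shape
    bh, bw = int(h * band), int(w * band)
    cy, cx = h // 2, w // 2
    mixed = amp_src.copy()
    # Mix only the low-frequency block around the spectrum centre.
    mixed[cy - bh:cy + bh, cx - bw:cx + bw] = (
        (1 - alpha) * amp_src[cy - bh:cy + bh, cx - bw:cx + bw]
        + alpha * amp_ref[cy - bh:cy + bh, cx - bw:cx + bw])

    out = np.fft.ifft2(np.fft.ifftshift(mixed * np.exp(1j * phase_src)))
    return np.real(out)

# Toy usage with random "fundus" crops standing in for two sites.
a, b = np.random.rand(128, 128), np.random.rand(128, 128)
print(amplitude_mixup(a, b, alpha=np.random.rand()).shape)
```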
arXiv Detail & Related papers (2022-08-08T03:56:20Z) - AADG: Automatic Augmentation for Domain Generalization on Retinal Image
Segmentation [1.0452185327816181]
We propose a data manipulation-based domain generalization method, called Automated Augmentation for Domain Generalization (AADG).
Our AADG framework can effectively sample data augmentation policies that generate novel domains.
Our proposed AADG exhibits state-of-the-art generalization performance and outperforms existing approaches.
arXiv Detail & Related papers (2022-07-27T02:26:01Z) - Single-domain Generalization in Medical Image Segmentation via Test-time
Adaptation from Shape Dictionary [64.5632303184502]
Domain generalization typically requires data from multiple source domains for model learning.
This paper studies the important yet challenging single domain generalization problem, in which a model is learned under the worst-case scenario with only one source domain to directly generalize to different unseen target domains.
We present a novel approach to address this problem in medical image segmentation, which extracts and integrates semantic shape prior information of segmentation that is invariant across domains.
arXiv Detail & Related papers (2022-06-29T08:46:27Z) - Causality-inspired Single-source Domain Generalization for Medical Image
Segmentation [12.697945585457441]
We propose a simple data augmentation approach to expose a segmentation model to synthesized domain-shifted training examples.
Specifically, to make the deep model robust to discrepancies in image intensities and textures, we employ a family of randomly-weighted shallow networks.
We also remove spurious correlations among objects in an image that the network might otherwise take as domain-specific clues for its predictions, since such correlations may break down on unseen domains.
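The randomly-weighted shallow networks act as an appearance augmentation: an input image is passed through a small convolutional network whose weights are re-randomized on every call, perturbing intensities and textures while the segmentation labels stay unchanged. The tiny architecture below is an arbitrary illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn

def random_appearance_shift(image):
    """Perturb image intensity/texture with a freshly re-randomized shallow
    conv net, leaving the segmentation target untouched (illustrative only)."""
    c = image.size(1)
    net = nn.Sequential(                      # new random weights on every call
        nn.Conv2d(c, 8, kernel_size=1), nn.LeakyReLU(0.2),
        nn.Conv2d(8, 8, kernel_size=3, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(8, c, kernel_size=1))
    with torch.no_grad():
        out = net(image)
        # Rescale to the original intensity range so the image stays plausible.
        out = (out - out.min()) / (out.max() - out.min() + 1e-8)
        lo, hi = image.min(), image.max()
        return out * (hi - lo) + lo

img = torch.rand(1, 1, 96, 96)       # toy single-channel scan
aug = random_appearance_shift(img)   # same anatomy, new appearance statistics
print(aug.shape, float(aug.min()), float(aug.max()))
```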
arXiv Detail & Related papers (2021-11-24T14:45:17Z) - Domain Composition and Attention for Unseen-Domain Generalizable Medical
Image Segmentation [12.412110592754729]
We propose a Domain Composition and Attention-based network (DCA-Net) to improve the ability of domain representation and generalization.
First, we present a domain composition method that represents one certain domain by a linear combination of a set of basis representations.
Second, a novel plug-and-play parallel domain preceptor is proposed to learn these basis representations.
Third, a domain attention module is proposed to learn the linear combination coefficients of the basis representations.
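Domain composition can be pictured as keeping a small learnable bank of basis representations and expressing each sample's domain information as their attention-weighted sum. The sketch below captures that idea only; the bank size, feature shapes, and how the composed representation feeds the segmentation network are assumptions.

```python
import torch
import torch.nn as nn

class DomainComposition(nn.Module):
    """Keep a small bank of learnable basis representations and express the
    domain information of each sample as their attention-weighted sum."""
    def __init__(self, feat_dim=64, num_basis=4):
        super().__init__()
        self.basis = nn.Parameter(torch.randn(num_basis, feat_dim))  # representation bank
        self.attn = nn.Linear(feat_dim, num_basis)                    # predicts combination coefficients

    def forward(self, feats):                        # feats: (N, feat_dim) pooled image features
        coeffs = torch.softmax(self.attn(feats), dim=1)   # (N, num_basis), sums to 1
        domain_repr = coeffs @ self.basis                 # (N, feat_dim) composed representation
        return domain_repr, coeffs

dc = DomainComposition()
feats = torch.randn(5, 64)
repr_, coeffs = dc(feats)
print(repr_.shape, coeffs.shape)   # torch.Size([5, 64]) torch.Size([5, 4])
```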
arXiv Detail & Related papers (2021-09-18T06:42:47Z) - AFAN: Augmented Feature Alignment Network for Cross-Domain Object
Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z) - Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z) - DoFE: Domain-oriented Feature Embedding for Generalizable Fundus Image
Segmentation on Unseen Datasets [96.92018649136217]
We present a novel Domain-oriented Feature Embedding (DoFE) framework to improve the generalization ability of CNNs on unseen target domains.
Our DoFE framework dynamically enriches the image features with additional domain prior knowledge learned from multi-source domains.
Our framework generates satisfying segmentation results on unseen datasets and surpasses other domain generalization and network regularization methods.
arXiv Detail & Related papers (2020-10-13T07:28:39Z) - Shape-aware Meta-learning for Generalizing Prostate MRI Segmentation to
Unseen Domains [68.73614619875814]
We present a novel shape-aware meta-learning scheme to improve the model generalization in prostate MRI segmentation.
Experimental results show that our approach outperforms many state-of-the-art generalization methods consistently across all six settings of unseen domains.
arXiv Detail & Related papers (2020-07-04T07:56:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.