Domain Composition and Attention for Unseen-Domain Generalizable Medical
Image Segmentation
- URL: http://arxiv.org/abs/2109.08852v1
- Date: Sat, 18 Sep 2021 06:42:47 GMT
- Title: Domain Composition and Attention for Unseen-Domain Generalizable Medical
Image Segmentation
- Authors: Ran Gu, Jingyang Zhang, Rui Huang, Wenhui Lei, Guotai Wang, Shaoting
Zhang
- Abstract summary: We propose a Domain Composition and Attention-based network (DCA-Net) to improve the ability of domain representation and generalization.
First, we present a domain composition method that represents one certain domain by a linear combination of a set of basis representations.
Second, a novel plug-and-play parallel domain preceptor is proposed to learn these basis representations.
Third, a domain attention module is proposed to learn the linear combination coefficients of the basis representations.
- Score: 12.412110592754729
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalizable models are attracting increasing attention in medical
image analysis since data is commonly acquired from different institutes with
various imaging protocols and scanners. To tackle this challenging domain
generalization problem, we propose a Domain Composition and Attention-based
network (DCA-Net) to improve the ability of domain representation and
generalization. First, we present a domain composition method that represents
one certain domain by a linear combination of a set of basis representations
(i.e., a representation bank). Second, a novel plug-and-play parallel domain
preceptor is proposed to learn these basis representations and we introduce a
divergence constraint function to encourage the basis representations to be as
divergent as possible. Then, a domain attention module is proposed to learn the
linear combination coefficients of the basis representations. The result of
linear combination is used to calibrate the feature maps of an input image,
which enables the model to generalize to different and even unseen domains. We
validate our method on a public prostate MRI dataset acquired from six different
institutions with apparent domain shift. Experimental results show that our
proposed model can generalize well on different and even unseen domains and it
outperforms state-of-the-art methods on the multi-domain prostate segmentation
task.
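The composition-and-attention mechanism described above can be illustrated with a minimal NumPy sketch. This is an assumed, simplified reading of the abstract, not the paper's implementation: the class and function names are hypothetical, the preceptors are reduced to a bank of per-channel calibration vectors, and the exact form of the divergence constraint is not given in the abstract, so a pairwise cosine-similarity penalty is used here as a stand-in.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

class DomainCompositionCalibration:
    """Hypothetical sketch of the domain composition idea: a bank of K basis
    representations (one per preceptor) and an attention head that predicts
    the linear combination coefficients used to calibrate a feature map."""

    def __init__(self, num_basis, channels, seed=0):
        rng = np.random.default_rng(seed)
        # representation bank: K basis calibration vectors
        self.bank = rng.standard_normal((num_basis, channels))
        # attention weights mapping pooled features -> K coefficients
        self.attn = rng.standard_normal((channels, num_basis))

    def __call__(self, feat):
        # feat: (C, H, W) feature map of one input image
        pooled = feat.mean(axis=(1, 2))         # global average pooling -> (C,)
        coeff = softmax(pooled @ self.attn)     # combination coefficients -> (K,)
        composed = coeff @ self.bank            # composed representation -> (C,)
        # sigmoid gate calibrates the feature map channel-wise
        gate = 1.0 / (1.0 + np.exp(-composed))
        return feat * gate[:, None, None], coeff

def divergence_loss(bank):
    """Assumed divergence constraint: penalize mean absolute pairwise cosine
    similarity so the basis representations stay as divergent as possible."""
    normed = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    sim = normed @ normed.T
    k = len(bank)
    return (np.abs(sim).sum() - k) / (k * (k - 1))  # off-diagonal mean, in [0, 1]
```

At test time, the attention coefficients are recomputed per input, so an image from an unseen domain is calibrated by whatever mixture of the learned bases best matches it.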
Related papers
- MI-SegNet: Mutual Information-Based US Segmentation for Unseen Domain
Generalization [36.71630929695019]
Generalization capabilities of learning-based medical image segmentation across domains are currently limited by the performance degradation caused by domain shift.
We propose MI-SegNet, a novel mutual information (MI) based framework to explicitly disentangle the anatomical and domain feature representations.
We validate the generalizability of the proposed domain-independent segmentation approach on several datasets with varying parameters and machines.
arXiv Detail & Related papers (2023-03-22T15:30:44Z)
- CDDSA: Contrastive Domain Disentanglement and Style Augmentation for
Generalizable Medical Image Segmentation [38.44458104455557]
We propose an efficient Contrastive Domain Disentanglement and Style Augmentation (CDDSA) framework for generalizable medical image segmentation.
First, a disentanglement network is proposed to decompose an image into a domain-invariant anatomical representation and a domain-specific style code.
Second, to achieve better disentanglement, a contrastive loss is proposed to encourage the style codes from the same domain and different domains to be compact and divergent.
arXiv Detail & Related papers (2022-11-22T08:25:35Z)
- Generalizable Medical Image Segmentation via Random Amplitude Mixup and
Domain-Specific Image Restoration [17.507951655445652]
We present a novel generalizable medical image segmentation method.
To be specific, we design our approach as a multi-task paradigm by combining the segmentation model with a self-supervised domain-specific image restoration module.
We demonstrate the performance of our method on two public generalizable segmentation benchmarks in medical images.
arXiv Detail & Related papers (2022-08-08T03:56:20Z)
- Single-domain Generalization in Medical Image Segmentation via Test-time
Adaptation from Shape Dictionary [64.5632303184502]
Domain generalization typically requires data from multiple source domains for model learning.
This paper studies the important yet challenging single domain generalization problem, in which a model is learned under the worst-case scenario with only one source domain to directly generalize to different unseen target domains.
We present a novel approach to address this problem in medical image segmentation, which extracts and integrates semantic shape prior information of segmentation that is invariant across domains.
arXiv Detail & Related papers (2022-06-29T08:46:27Z)
- Contrastive Domain Disentanglement for Generalizable Medical Image
Segmentation [12.863227646939563]
We propose Contrastive Disentangle Domain (CDD) network for generalizable medical image segmentation.
We first introduce a disentanglement network to decompose medical images into an anatomical representation factor and a modality representation factor.
We then propose a domain augmentation strategy that can randomly generate new domains for model generalization training.
arXiv Detail & Related papers (2022-05-13T10:32:41Z)
- Unsupervised Domain Generalization by Learning a Bridge Across Domains [78.855606355957]
The Unsupervised Domain Generalization (UDG) setup has no training supervision in either the source or target domains.
Our approach is based on self-supervised learning of a Bridge Across Domains (BrAD) - an auxiliary bridge domain accompanied by a set of semantics preserving visual (image-to-image) mappings to BrAD from each of the training domains.
We show how using an edge-regularized BrAD our approach achieves significant gains across multiple benchmarks and a range of tasks, including UDG, Few-shot UDA, and unsupervised generalization across multi-domain datasets.
arXiv Detail & Related papers (2021-12-04T10:25:45Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object
Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Batch Normalization Embeddings for Deep Domain Generalization [50.51405390150066]
Domain generalization aims at training machine learning models to perform robustly across different and unseen domains.
We show a significant increase in classification accuracy over current state-of-the-art techniques on popular domain generalization benchmarks.
arXiv Detail & Related papers (2020-11-25T12:02:57Z)
- DoFE: Domain-oriented Feature Embedding for Generalizable Fundus Image
Segmentation on Unseen Datasets [96.92018649136217]
We present a novel Domain-oriented Feature Embedding (DoFE) framework to improve the generalization ability of CNNs on unseen target domains.
Our DoFE framework dynamically enriches the image features with additional domain prior knowledge learned from multi-source domains.
Our framework generates satisfying segmentation results on unseen datasets and surpasses other domain generalization and network regularization methods.
arXiv Detail & Related papers (2020-10-13T07:28:39Z)
- Shape-aware Meta-learning for Generalizing Prostate MRI Segmentation to
Unseen Domains [68.73614619875814]
We present a novel shape-aware meta-learning scheme to improve the model generalization in prostate MRI segmentation.
Experimental results show that our approach outperforms many state-of-the-art generalization methods consistently across all six settings of unseen domains.
arXiv Detail & Related papers (2020-07-04T07:56:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.