MI-SegNet: Mutual Information-Based US Segmentation for Unseen Domain
Generalization
- URL: http://arxiv.org/abs/2303.12649v3
- Date: Tue, 6 Feb 2024 16:55:14 GMT
- Title: MI-SegNet: Mutual Information-Based US Segmentation for Unseen Domain
Generalization
- Authors: Yuan Bi, Zhongliang Jiang, Ricarda Clarenbach, Reza Ghotbi, Angelos
Karlas, Nassir Navab
- Abstract summary: Generalization capabilities of learning-based medical image segmentation across domains are currently limited by the performance degradation caused by the domain shift.
We propose MI-SegNet, a novel mutual information (MI) based framework to explicitly disentangle the anatomical and domain feature representations.
We validate the generalizability of the proposed domain-independent segmentation approach on several datasets with varying parameters and machines.
- Score: 36.71630929695019
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generalization capabilities of learning-based medical image segmentation
across domains are currently limited by the performance degradation caused by
the domain shift, particularly for ultrasound (US) imaging. The quality of US
images heavily relies on carefully tuned acoustic parameters, which vary across
sonographers, machines, and settings. To improve the generalizability on US
images across domains, we propose MI-SegNet, a novel mutual information (MI)
based framework to explicitly disentangle the anatomical and domain feature
representations; therefore, robust domain-independent segmentation can be
expected. Two encoders are employed to extract the relevant features for the
disentanglement. The segmentation uses only the anatomical feature map for its
prediction. To force the encoders to learn meaningful feature representations, a
cross-reconstruction method is used during training. Transformations specific to
either domain or anatomy are applied to guide the encoders in their respective
feature extraction tasks. Additionally, any MI shared by the two feature maps is
penalized to further promote separate feature spaces. We validate the
generalizability of the proposed domain-independent
segmentation approach on several datasets with varying parameters and machines.
Furthermore, we demonstrate the effectiveness of the proposed MI-SegNet serving
as a pre-trained model by comparing it with state-of-the-art networks.
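Below is a minimal PyTorch sketch of the training recipe the abstract outlines: two encoders, a segmentation head that sees only the anatomical feature map, a cross-reconstruction decoder, and a MINE-style penalty on the mutual information between the two feature maps. All module names, layer sizes, and the MI estimator are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (PyTorch) of the disentanglement recipe described in the abstract.
# Architectures, sizes, and the MINE-style critic are illustrative assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvEncoder(nn.Module):
    """Small convolutional encoder producing a spatial feature map."""
    def __init__(self, out_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_channels, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class MICritic(nn.Module):
    """Statistics network T(a, d) for a MINE-style mutual-information estimate."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 64, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, a, d):
        return self.net(torch.cat([a, d], dim=1))

def mi_estimate(critic, anat, dom):
    """Donsker-Varadhan lower bound on MI between the two feature maps."""
    joint = critic(anat, dom).mean()
    # Shuffling the domain features across the batch samples the product of marginals.
    marginal = torch.exp(critic(anat, dom[torch.randperm(dom.size(0))])).mean().log()
    return joint - marginal

# One simplified training step. In the paper's setup the two views would come from
# anatomy-specific (spatial) and domain-specific (appearance) transformations of the
# same image; random tensors stand in for them here.
anat_enc, dom_enc, critic = ConvEncoder(), ConvEncoder(), MICritic()
seg_head = nn.Conv2d(64, 2, 1)             # 2-class segmentation logits
decoder = nn.Conv2d(128, 1, 1)             # stand-in for a real reconstruction decoder

x_spatial = torch.randn(4, 1, 128, 128)    # anatomy-transformed view
x_appear = torch.randn(4, 1, 128, 128)     # appearance/domain-transformed view
mask = torch.randint(0, 2, (4, 128, 128))  # ground-truth segmentation

anat, dom = anat_enc(x_spatial), dom_enc(x_appear)

# Segmentation is predicted from the anatomical features only.
seg_loss = F.cross_entropy(F.interpolate(seg_head(anat), size=(128, 128)), mask)

# Cross-reconstruction: anatomy code of one view plus domain code of the other.
recon = F.interpolate(decoder(torch.cat([anat, dom], dim=1)), size=(128, 128))
recon_loss = F.mse_loss(recon, x_appear)

# The MI estimate is minimized w.r.t. the encoders to keep the feature spaces apart
# (the critic itself would be trained adversarially with a separate optimizer).
loss = seg_loss + recon_loss + mi_estimate(critic, anat, dom)
loss.backward()
```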
Related papers
- Language Guided Domain Generalized Medical Image Segmentation [68.93124785575739]
Single source domain generalization holds promise for more reliable and consistent image segmentation across real-world clinical settings.
We propose an approach that explicitly leverages textual information by incorporating a contrastive learning mechanism guided by the text encoder features.
Our approach achieves favorable performance compared with existing methods in the literature.
arXiv Detail & Related papers (2024-04-01T17:48:15Z)
- Unsupervised Federated Domain Adaptation for Segmentation of MRI Images [20.206972068340843]
We develop a method for unsupervised federated domain adaptation using multiple annotated source domains.
Our approach enables the transfer of knowledge from several annotated source domains to adapt a model for effective use in an unannotated target domain.
arXiv Detail & Related papers (2024-01-02T00:31:41Z)
- Multi-Scale Multi-Target Domain Adaptation for Angle Closure Classification [50.658613573816254]
We propose a novel Multi-scale Multi-target Domain Adversarial Network (M2DAN) for angle closure classification.
Based on these domain-invariant features at different scales, the deep model trained on the source domain is able to classify angle closure on multiple target domains.
arXiv Detail & Related papers (2022-08-25T15:27:55Z)
- Generalizable Medical Image Segmentation via Random Amplitude Mixup and Domain-Specific Image Restoration [17.507951655445652]
We present a novel generalizable medical image segmentation method.
Specifically, we design our approach as a multi-task paradigm by combining the segmentation model with a self-supervised, domain-specific image restoration module.
We demonstrate the performance of our method on two public generalizable segmentation benchmarks in medical images.
arXiv Detail & Related papers (2022-08-08T03:56:20Z)
- Contrastive Domain Disentanglement for Generalizable Medical Image Segmentation [12.863227646939563]
We propose a Contrastive Disentangle Domain (CDD) network for generalizable medical image segmentation.
We first introduce a disentanglement network to decompose medical images into an anatomical representation factor and a modality representation factor.
We then propose a domain augmentation strategy that can randomly generate new domains for model generalization training.
arXiv Detail & Related papers (2022-05-13T10:32:41Z)
- Multi-Task, Multi-Domain Deep Segmentation with Shared Representations and Contrastive Regularization for Sparse Pediatric Datasets [0.5249805590164902]
We propose to train a segmentation model on multiple datasets, arising from different parts of the anatomy, in a multi-task and multi-domain learning framework.
The proposed segmentation network comprises shared convolutional filters and domain-specific batch normalization parameters that compute the respective dataset statistics (a minimal sketch of this idea follows the list below).
We evaluate our contributions on two pediatric imaging datasets of the ankle and shoulder joints for bone segmentation.
arXiv Detail & Related papers (2021-05-21T12:26:05Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- DoFE: Domain-oriented Feature Embedding for Generalizable Fundus Image Segmentation on Unseen Datasets [96.92018649136217]
We present a novel Domain-oriented Feature Embedding (DoFE) framework to improve the generalization ability of CNNs on unseen target domains.
Our DoFE framework dynamically enriches the image features with additional domain prior knowledge learned from multi-source domains.
Our framework generates satisfying segmentation results on unseen datasets and surpasses other domain generalization and network regularization methods.
arXiv Detail & Related papers (2020-10-13T07:28:39Z)
- Shape-aware Meta-learning for Generalizing Prostate MRI Segmentation to Unseen Domains [68.73614619875814]
We present a novel shape-aware meta-learning scheme to improve the model generalization in prostate MRI segmentation.
Experimental results show that our approach outperforms many state-of-the-art generalization methods consistently across all six settings of unseen domains.
arXiv Detail & Related papers (2020-07-04T07:56:02Z)
- Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework named Synergistic Image and Feature Alignment (SIFA).
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)
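As referenced in the multi-task, multi-domain entry above, here is a minimal PyTorch sketch of convolutional filters shared across datasets combined with domain-specific batch normalization. The class names, layer sizes, and the two-domain example (ankle vs. shoulder) are assumptions for illustration, not that paper's code.

```python
# Illustrative sketch: convolutional filters shared across datasets, with one set of
# batch-normalization statistics/affine parameters per domain. Names and sizes are
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class DomainSpecificBatchNorm2d(nn.Module):
    """One BatchNorm2d per domain; the surrounding convolutions stay shared."""
    def __init__(self, num_features, num_domains):
        super().__init__()
        self.bns = nn.ModuleList(nn.BatchNorm2d(num_features) for _ in range(num_domains))

    def forward(self, x, domain_idx):
        return self.bns[domain_idx](x)

class SharedConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, num_domains):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)   # shared filters
        self.bn = DomainSpecificBatchNorm2d(out_ch, num_domains)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, domain_idx):
        return self.act(self.bn(self.conv(x), domain_idx))

# Usage: each batch is routed through the normalization of its own dataset,
# e.g. domain 0 = ankle scans, domain 1 = shoulder scans.
block = SharedConvBlock(1, 16, num_domains=2)
y_ankle = block(torch.randn(2, 1, 64, 64), domain_idx=0)
y_shoulder = block(torch.randn(2, 1, 64, 64), domain_idx=1)
```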