Shape-aware Meta-learning for Generalizing Prostate MRI Segmentation to
Unseen Domains
- URL: http://arxiv.org/abs/2007.02035v1
- Date: Sat, 4 Jul 2020 07:56:02 GMT
- Title: Shape-aware Meta-learning for Generalizing Prostate MRI Segmentation to
Unseen Domains
- Authors: Quande Liu, Qi Dou, Pheng-Ann Heng
- Abstract summary: We present a novel shape-aware meta-learning scheme to improve the model generalization in prostate MRI segmentation.
Experimental results show that our approach outperforms many state-of-the-art generalization methods consistently across all six settings of unseen domains.
- Score: 68.73614619875814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model generalization capacity under domain shift (e.g., various imaging
protocols and scanners) is crucial for deep learning methods in real-world
clinical deployment. This paper tackles the challenging problem of domain
generalization, i.e., learning a model from multi-domain source data such that
it can directly generalize to an unseen target domain. We present a novel
shape-aware meta-learning scheme to improve the model generalization in
prostate MRI segmentation. Our learning scheme is rooted in gradient-based
meta-learning, explicitly simulating domain shift with virtual meta-train
and meta-test sets during training. Importantly, considering the deficiencies
encountered when applying a segmentation model to unseen domains (i.e.,
incomplete shape and ambiguous boundary of the prediction masks), we further
introduce two complementary loss objectives to enhance the meta-optimization,
by particularly encouraging the shape compactness and shape smoothness of the
segmentations under simulated domain shift. We evaluate our method on prostate
MRI data from six different institutions with distribution shifts acquired from
public datasets. Experimental results show that our approach outperforms many
state-of-the-art generalization methods consistently across all six settings of
unseen domains.
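As a rough illustration of how shape compactness and smoothness could be penalized on a soft prediction mask, the PyTorch sketch below uses the classical isoperimetric ratio P^2 / (4*pi*A) for compactness and a simple total-variation surrogate for smoothness; function names and loss weights are hypothetical, and the paper's actual objectives (especially the smoothness term, which is applied under the simulated domain shift) may be formulated differently.
```python
import math
import torch

def soft_area_perimeter(prob):
    """prob: (B, 1, H, W) soft foreground probabilities in [0, 1]."""
    area = prob.sum(dim=(1, 2, 3))
    # finite-difference gradients of the mask approximate boundary length
    dy = (prob[:, :, 1:, :] - prob[:, :, :-1, :]).abs()
    dx = (prob[:, :, :, 1:] - prob[:, :, :, :-1]).abs()
    perimeter = dy.sum(dim=(1, 2, 3)) + dx.sum(dim=(1, 2, 3))
    return area, perimeter

def compactness_loss(prob, eps=1e-6):
    """Isoperimetric ratio P^2 / (4*pi*A): low for compact, hole-free masks."""
    area, perimeter = soft_area_perimeter(prob)
    return (perimeter ** 2 / (4.0 * math.pi * area + eps)).mean()

def smoothness_loss(prob):
    """Total-variation surrogate that discourages jagged boundaries."""
    dy = (prob[:, :, 1:, :] - prob[:, :, :-1, :]).abs()
    dx = (prob[:, :, :, 1:] - prob[:, :, :, :-1]).abs()
    return dy.mean() + dx.mean()

# Hypothetical meta-test objective inside one meta-learning episode:
# meta_test_loss = dice_loss(prob, mask) \
#     + 0.001 * compactness_loss(prob) + 0.01 * smoothness_loss(prob)
```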
Related papers
- Gradient-Map-Guided Adaptive Domain Generalization for Cross Modality
MRI Segmentation [14.209197648189203]
Cross-modal MRI segmentation is of great value for computer-aided medical diagnosis, enabling flexible data acquisition and model generalization.
Most existing methods have difficulty in handling local variations in domain shift and typically require a significant amount of data for training.
We propose a novel adaptive domain generalization framework, which integrates a learning-free cross-domain representation based on image gradient maps.
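To make the idea of a learning-free gradient-map representation concrete, here is a minimal sketch (not the paper's pipeline) that turns an MRI slice into a normalized Sobel gradient-magnitude map, which is far less sensitive to cross-scanner intensity differences than raw intensities:
```python
import numpy as np
from scipy import ndimage

def gradient_map(slice_2d: np.ndarray) -> np.ndarray:
    """Learning-free representation: normalized Sobel gradient magnitude."""
    img = slice_2d.astype(np.float32)
    gx = ndimage.sobel(img, axis=0)
    gy = ndimage.sobel(img, axis=1)
    mag = np.hypot(gx, gy)
    # normalize to [0, 1] so maps from different scanners are comparable
    return (mag - mag.min()) / (mag.max() - mag.min() + 1e-8)
```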
arXiv Detail & Related papers (2023-11-16T10:07:27Z)
- Domain Generalization with Adversarial Intensity Attack for Medical
Image Segmentation [27.49427483473792]
In real-world scenarios, it is common for models to encounter data from new and different domains to which they were not exposed during training.
Domain generalization (DG) is a promising direction, as it enables models to handle data from previously unseen domains.
We introduce a novel DG method called Adversarial Intensity Attack (AdverIN), which leverages adversarial training to generate training data with an infinite number of styles.
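One way such an adversarial intensity perturbation could look in practice is sketched below, assuming a per-image affine intensity transform whose parameters are ascended for a few gradient steps to maximize the segmentation loss; this is a hedged illustration, not the paper's AdverIN definition.
```python
import torch

def adversarial_intensity(images, masks, model, seg_loss, steps=3, lr=0.1):
    """images: (B, 1, H, W). Learns per-image intensity scale/shift that
    maximize the segmentation loss, producing hard 'new-style' samples."""
    scale = torch.ones(images.size(0), 1, 1, 1,
                       device=images.device, requires_grad=True)
    shift = torch.zeros_like(scale, requires_grad=True)
    for _ in range(steps):
        adv = images * scale + shift
        loss = seg_loss(model(adv), masks)
        g_scale, g_shift = torch.autograd.grad(loss, (scale, shift))
        with torch.no_grad():           # ascend: make the loss larger
            scale += lr * g_scale.sign()
            shift += lr * g_shift.sign()
    return (images * scale + shift).detach()
```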
arXiv Detail & Related papers (2023-04-05T19:40:51Z)
- Single-domain Generalization in Medical Image Segmentation via Test-time
Adaptation from Shape Dictionary [64.5632303184502]
Domain generalization typically requires data from multiple source domains for model learning.
This paper studies the important yet challenging single domain generalization problem, in which a model is learned under the worst-case scenario with only one source domain to directly generalize to different unseen target domains.
We present a novel approach to address this problem in medical image segmentation, which extracts and integrates semantic shape prior information of the segmentation targets that is invariant across domains.
arXiv Detail & Related papers (2022-06-29T08:46:27Z)
- Contrastive Domain Disentanglement for Generalizable Medical Image
Segmentation [12.863227646939563]
We propose the Contrastive Disentangle Domain (CDD) network for generalizable medical image segmentation.
We first introduce a disentangle network to decompose medical images into an anatomical representation factor and a modality representation factor.
We then propose a domain augmentation strategy that can randomly generate new domains for model generalization training.
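A toy sketch of the anatomy/modality factorization and the recombination-based domain augmentation described above (module names and architecture are hypothetical and far smaller than anything used in the paper):
```python
import torch
import torch.nn as nn

class Disentangler(nn.Module):
    """Toy factorization into anatomy and modality codes (illustrative only)."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc_anatomy = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                                         nn.Conv2d(ch, ch, 3, padding=1))
        self.enc_modality = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                                          nn.AdaptiveAvgPool2d(1))   # global style code
        self.dec = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, x):
        return self.enc_anatomy(x), self.enc_modality(x)

    def recombine(self, anatomy, modality):
        # broadcast the global modality code over the anatomy map
        return self.dec(anatomy * (1.0 + modality))

# Domain augmentation: mix anatomy of x with modality codes of a shuffled batch
# a, m = model(x); x_new = model.recombine(a, m[torch.randperm(len(x))])
```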
arXiv Detail & Related papers (2022-05-13T10:32:41Z)
- Semi-supervised Meta-learning with Disentanglement for
Domain-generalised Medical Image Segmentation [15.351113774542839]
Generalising models to new data from new centres (termed here domains) remains a challenge.
We propose a novel semi-supervised meta-learning framework with disentanglement.
We show that the proposed method is robust on different segmentation tasks and achieves state-of-the-art generalisation performance on two public benchmarks.
arXiv Detail & Related papers (2021-06-24T19:50:07Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Cluster, Split, Fuse, and Update: Meta-Learning for Open Compound Domain
Adaptive Semantic Segmentation [102.42638795864178]
We propose a principled meta-learning based approach to OCDA for semantic segmentation.
We cluster the target domain into multiple sub-target domains by image styles, extracted in an unsupervised manner.
A meta-learner is thereafter deployed to learn to fuse sub-target domain-specific predictions, conditioned upon the style code.
We learn to update the model online via the model-agnostic meta-learning (MAML) algorithm to further improve generalization.
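A hedged sketch of the style-based sub-domain discovery step mentioned above (not the paper's exact procedure; the MAML-style online update is omitted): channel-wise feature statistics serve as a style code and unlabeled target images are grouped by k-means.
```python
import torch
from sklearn.cluster import KMeans

def style_codes(features):
    """features: (N, C, H, W) activations from a shallow encoder layer.
    Channel-wise mean and std are a common proxy for image 'style'."""
    mu = features.mean(dim=(2, 3))
    sigma = features.std(dim=(2, 3))
    return torch.cat([mu, sigma], dim=1)          # (N, 2C)

def cluster_by_style(features, k=3):
    codes = style_codes(features).detach().cpu().numpy()
    # each label indexes a discovered sub-target domain
    return KMeans(n_clusters=k, n_init=10).fit_predict(codes)
```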
arXiv Detail & Related papers (2020-12-15T13:21:54Z)
- Learning to Generalize Unseen Domains via Memory-based Multi-Source
Meta-Learning for Person Re-Identification [59.326456778057384]
We propose the Memory-based Multi-Source Meta-Learning framework to train a generalizable model for unseen domains.
We also present a meta batch normalization layer (MetaBN) to diversify meta-test features.
Experiments demonstrate that our M$^3$L can effectively enhance the generalization ability of the model for unseen domains.
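The idea of diversifying meta-test features with domain-level batch-norm statistics can be approximated as below, where meta-test activations are re-normalized with a random mixture of the current batch statistics and stored meta-train domain statistics; the MetaBN layer in the paper may be defined differently.
```python
import torch

def mix_bn_stats(x, domain_mean, domain_var, eps=1e-5):
    """x: (B, C, H, W) meta-test features; domain_mean/var: (C,) statistics
    saved from a meta-train domain. Mixing injects meta-train 'style'."""
    mean = x.mean(dim=(0, 2, 3))
    var = x.var(dim=(0, 2, 3), unbiased=False)
    lam = torch.rand(1, device=x.device)              # random mixture weight
    m = lam * mean + (1 - lam) * domain_mean
    v = lam * var + (1 - lam) * domain_var
    return (x - m.view(1, -1, 1, 1)) / torch.sqrt(v.view(1, -1, 1, 1) + eps)
```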
arXiv Detail & Related papers (2020-12-01T11:38:16Z)
- Domain Generalizer: A Few-shot Meta Learning Framework for Domain
Generalization in Medical Imaging [23.414905586808874]
We adapt a domain generalization method based on a model-agnostic meta-learning framework to biomedical imaging.
The method learns a domain-agnostic feature representation to improve generalization of models to the unseen test distribution.
Our results suggest that the method could help generalize models across different medical centers, image acquisition protocols, anatomies, different regions in a given scan, and healthy and diseased populations, across varied imaging modalities.
arXiv Detail & Related papers (2020-08-18T03:35:56Z)
- Learning to Learn with Variational Information Bottleneck for Domain
Generalization [128.90691697063616]
Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift.
We introduce a probabilistic meta-learning model for domain generalization, in which parameters shared across domains are modeled as distributions.
To deal with domain shift, we learn domain-invariant representations via the proposed meta variational information bottleneck principle, which we call MetaVIB.
arXiv Detail & Related papers (2020-07-15T12:05:52Z)
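For reference, the (non-meta) variational information bottleneck objective that MetaVIB builds on can be written as a task loss plus a KL term to a standard Gaussian prior; the meta-learning machinery and the exact MetaVIB formulation are omitted in this sketch.
```python
import torch
import torch.nn.functional as F

def sample_z(mu, logvar):
    """Reparameterization trick: z = mu + eps * std, eps ~ N(0, I)."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

def vib_loss(mu, logvar, logits, targets, beta=1e-3):
    """mu, logvar: (B, D) parameters of q(z|x); logits: predictions from a
    decoder applied to a sampled z."""
    # closed-form KL(q(z|x) || N(0, I))
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=1).mean()
    task = F.cross_entropy(logits, targets)
    return task + beta * kl
```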
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.