MAP: Domain Generalization via Meta-Learning on Anatomy-Consistent
Pseudo-Modalities
- URL: http://arxiv.org/abs/2309.01286v1
- Date: Sun, 3 Sep 2023 22:56:22 GMT
- Title: MAP: Domain Generalization via Meta-Learning on Anatomy-Consistent
Pseudo-Modalities
- Authors: Dewei Hu, Hao Li, Han Liu, Xing Yao, Jiacheng Wang and Ipek Oguz
- Abstract summary: We propose Meta learning on Anatomy-consistent Pseudo-modalities (MAP)
MAP improves model generalizability by learning structural features.
We evaluate our model on seven public datasets of various retinal imaging modalities.
- Score: 12.194439938007672
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep models suffer from limited generalization capability to unseen domains,
which has severely hindered their clinical applicability. Specifically for the
retinal vessel segmentation task, although the model is supposed to learn the
anatomy of the target, it can be distracted by confounding factors like
intensity and contrast. We propose Meta learning on Anatomy-consistent
Pseudo-modalities (MAP), a method that improves model generalizability by
learning structural features. We first leverage a feature extraction network to
generate three distinct pseudo-modalities that share the vessel structure of
the original image. Next, we use the episodic learning paradigm by selecting
one of the pseudo-modalities as the meta-train dataset, and perform
meta-testing on a continuous augmented image space generated through Dirichlet
mixup of the remaining pseudo-modalities. Further, we introduce two loss
functions that facilitate the model's focus on shape information by clustering
the latent vectors obtained from images featuring identical vasculature. We
evaluate our model on seven public datasets of various retinal imaging
modalities and we conclude that MAP has substantially better generalizability.
Our code is publicly available at https://github.com/DeweiHu/MAP.
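To make the training recipe concrete, below is a minimal PyTorch sketch of one episode as described in the abstract: one pseudo-modality serves as the meta-train image, a Dirichlet mixup of the remaining pseudo-modalities serves as the meta-test image, and a simple latent-consistency term stands in for the clustering losses. All names (episode, ToySegmenter, the 0.1 weight) are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

    # Hedged sketch of the episodic scheme; not the authors' implementation.
    import torch
    import torch.nn.functional as F

    def dirichlet_mixup(images, alpha=1.0):
        """Mix anatomy-consistent pseudo-modalities with Dirichlet weights."""
        k = len(images)
        w = torch.distributions.Dirichlet(torch.full((k,), alpha)).sample()
        return sum(wi * img for wi, img in zip(w, images))

    def episode(segmenter, pseudo_modalities, mask, optimizer):
        """One meta-train / meta-test episode on a single training sample."""
        # 1) Randomly pick one pseudo-modality as the meta-train image.
        idx = torch.randint(len(pseudo_modalities), (1,)).item()
        meta_train = pseudo_modalities[idx]
        rest = [m for i, m in enumerate(pseudo_modalities) if i != idx]

        # 2) Meta-test image: a Dirichlet mixup of the remaining pseudo-modalities,
        #    i.e. a sample from a continuous augmented image space.
        meta_test = dirichlet_mixup(rest)

        # 3) Both views share the same vasculature, so besides the segmentation
        #    loss we also pull their latent representations together (a stand-in
        #    for the two clustering losses mentioned in the abstract).
        logits_tr, z_tr = segmenter(meta_train)
        logits_te, z_te = segmenter(meta_test)
        seg_loss = F.binary_cross_entropy_with_logits(logits_tr, mask) \
                 + F.binary_cross_entropy_with_logits(logits_te, mask)
        consistency = F.mse_loss(z_tr, z_te)

        loss = seg_loss + 0.1 * consistency   # 0.1 is an arbitrary example weight
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    if __name__ == "__main__":
        class ToySegmenter(torch.nn.Module):
            def __init__(self):
                super().__init__()
                self.enc = torch.nn.Conv2d(1, 8, 3, padding=1)
                self.dec = torch.nn.Conv2d(8, 1, 3, padding=1)
            def forward(self, x):
                z = torch.relu(self.enc(x))
                return self.dec(z), z

        net = ToySegmenter()
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        mods = [torch.rand(1, 1, 64, 64) for _ in range(3)]   # 3 pseudo-modalities
        mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
        print(episode(net, mods, mask, opt))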
Related papers
- HATs: Hierarchical Adaptive Taxonomy Segmentation for Panoramic Pathology Image Analysis [19.04633470168871]
Panoramic image segmentation in computational pathology presents a remarkable challenge due to the morphologically complex and variably scaled anatomy.
In this paper, we propose a novel Hierarchical Adaptive Taxonomy (HATs) method, which is designed to thoroughly segment panoramic views of kidney structures by leveraging detailed anatomical insights.
Our approach entails (1) the innovative HATs technique which translates spatial relationships among 15 distinct object classes into a versatile "plug-and-play" loss function that spans across regions, functional units, and cells, (2) the incorporation of anatomical hierarchies and scale considerations into a unified simple matrix representation for all panoramic entities, and (3) the ...
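As a hedged illustration of how a class taxonomy can be folded into a single "plug-and-play" loss, the sketch below aggregates fine-grained class probabilities to coarser levels (e.g. cells to functional units to regions) with 0/1 mapping matrices and applies cross-entropy at every level. The class counts and mappings are invented for the example; the authors' formulation may differ.

    # Illustrative hierarchical taxonomy loss; not the authors' code.
    import torch
    import torch.nn.functional as F

    def hierarchical_loss(logits, target, mappings):
        """logits: (B, C_fine, H, W); target: (B, H, W) fine labels;
        mappings: list of (C_coarse, C_fine) 0/1 aggregation matrices."""
        probs = logits.softmax(dim=1)
        loss = F.nll_loss(torch.log(probs + 1e-8), target)           # finest level
        for m in mappings:
            coarse_probs = torch.einsum("kc,bchw->bkhw", m, probs)   # sum child classes
            coarse_target = m.argmax(dim=0)[target]                  # relabel each pixel
            loss = loss + F.nll_loss(torch.log(coarse_probs + 1e-8), coarse_target)
        return loss

    # Example: 6 made-up fine classes grouped into 2 coarse "regions".
    m = torch.tensor([[1., 1., 1., 0., 0., 0.],
                      [0., 0., 0., 1., 1., 1.]])
    logits = torch.randn(2, 6, 32, 32)
    target = torch.randint(0, 6, (2, 32, 32))
    print(hierarchical_loss(logits, target, [m]))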
arXiv Detail & Related papers (2024-06-30T05:35:26Z)
- Overcoming Dimensional Collapse in Self-supervised Contrastive Learning for Medical Image Segmentation [2.6764957223405657]
We investigate the application of contrastive learning to the domain of medical image analysis.
Our findings reveal that MoCo v2, a state-of-the-art contrastive learning method, encounters dimensional collapse when applied to medical images.
To address this, we propose two key contributions: local feature learning and feature decorrelation.
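One common way to implement feature decorrelation (assumed here for illustration; the paper's exact loss may differ) is to penalize the off-diagonal entries of the correlation matrix of the batch embeddings, so that the embedding dimensions do not collapse onto a low-dimensional subspace.

    # Hedged sketch of a decorrelation penalty on batch embeddings.
    import torch

    def decorrelation_loss(features, eps=1e-5):
        """features: (N, D) batch of embeddings."""
        z = (features - features.mean(0)) / (features.std(0) + eps)  # standardize dims
        corr = (z.T @ z) / features.shape[0]                         # (D, D) correlation
        off_diag = corr - torch.diag(torch.diagonal(corr))           # zero the diagonal
        return (off_diag ** 2).sum() / features.shape[1]

    print(decorrelation_loss(torch.randn(128, 64)))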
arXiv Detail & Related papers (2024-02-22T15:02:13Z)
- VesselMorph: Domain-Generalized Retinal Vessel Segmentation via Shape-Aware Representation [12.194439938007672]
Domain shift is an inherent property of medical images and has become a major obstacle for large-scale deployment of learning-based algorithms.
We propose a method named VesselMorph which generalizes the 2D retinal vessel segmentation task by synthesizing a shape-aware representation.
VesselMorph achieves superior generalization performance compared with competing methods in different domain shift scenarios.
arXiv Detail & Related papers (2023-07-01T06:02:22Z)
- Learning with Explicit Shape Priors for Medical Image Segmentation [17.110893665132423]
We propose a novel shape prior module (SPM) to promote the segmentation performance of UNet-based models.
Explicit shape priors consist of global and local shape priors.
Our proposed model achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-03-31T11:12:35Z)
- Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel matters equally to model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z)
- Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
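The dictionary idea can be sketched as follows (an assumed simplification, not the authors' ACDNet): a shared set of base kernels encodes the learned prior, and a small predictor produces per-image mixing weights that adapt the effective convolution kernel to each input.

    # Hedged sketch of an adaptive convolutional dictionary layer.
    import torch
    import torch.nn.functional as F

    class AdaptiveDictConv(torch.nn.Module):
        def __init__(self, num_atoms=8, kernel_size=3):
            super().__init__()
            # Dictionary of base kernels shared across all images (the prior).
            self.atoms = torch.nn.Parameter(
                torch.randn(num_atoms, 1, 1, kernel_size, kernel_size) * 0.1)
            # Tiny predictor: global statistics of the input -> per-image weights.
            self.weight_net = torch.nn.Sequential(
                torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
                torch.nn.Linear(1, num_atoms), torch.nn.Softmax(dim=1))

        def forward(self, x):                      # x: (B, 1, H, W)
            w = self.weight_net(x)                 # (B, num_atoms)
            outs = []
            for i in range(x.shape[0]):
                # Per-image kernel: weighted sum of the dictionary atoms.
                kernel = (w[i].view(-1, 1, 1, 1, 1) * self.atoms).sum(0)
                outs.append(F.conv2d(x[i:i+1], kernel, padding=1))
            return torch.cat(outs)

    print(AdaptiveDictConv()(torch.randn(2, 1, 64, 64)).shape)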
arXiv Detail & Related papers (2022-05-16T06:49:36Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
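A rough sketch of one way to encourage such clustering (an assumption for illustration, not necessarily the paper's exact objective): pull pixel embeddings toward their class prototype and push prototypes of different classes at least a margin apart.

    # Hedged sketch of a prototype-based discriminative embedding loss.
    import torch
    import torch.nn.functional as F

    def discriminative_embedding_loss(features, labels, margin=1.0):
        """features: (N, D) pixel embeddings; labels: (N,) class ids."""
        classes = labels.unique()
        protos = torch.stack([features[labels == c].mean(0) for c in classes])
        # Pull term: distance of each feature to its own class prototype.
        pull = torch.stack([((features[labels == c] - p) ** 2).sum(1).mean()
                            for c, p in zip(classes, protos)]).mean()
        # Push term: prototypes of different classes stay at least `margin` apart.
        dists = torch.cdist(protos, protos)
        push = F.relu(margin - dists)[~torch.eye(len(classes), dtype=torch.bool)].mean()
        return pull + push

    feats = torch.randn(100, 16)
    labs = torch.randint(0, 3, (100,))
    print(discriminative_embedding_loss(feats, labs))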
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
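The core exchange can be sketched as follows, assuming for simplicity that the two modality-specific networks see the same (registered) input so their predictions can be compared pixel-wise; the temperature, weighting, and names are illustrative rather than the paper's settings.

    # Hedged sketch of mutual (online) knowledge distillation between two segmenters.
    import torch
    import torch.nn.functional as F

    def mutual_kd_step(logits_ct, logits_mr, target_ct, target_mr, T=2.0, lam=0.5):
        """Each network learns from its own labels and from the other's soft output."""
        ce = F.cross_entropy(logits_ct, target_ct) + F.cross_entropy(logits_mr, target_mr)
        # Soften predictions with temperature T and exchange them as extra supervision.
        kd_ct = F.kl_div(F.log_softmax(logits_ct / T, dim=1),
                         F.softmax(logits_mr.detach() / T, dim=1),
                         reduction="batchmean") * T * T
        kd_mr = F.kl_div(F.log_softmax(logits_mr / T, dim=1),
                         F.softmax(logits_ct.detach() / T, dim=1),
                         reduction="batchmean") * T * T
        return ce + lam * (kd_ct + kd_mr)

    logits_ct = torch.randn(2, 4, 32, 32, requires_grad=True)
    logits_mr = torch.randn(2, 4, 32, 32, requires_grad=True)
    tgt = torch.randint(0, 4, (2, 32, 32))
    print(mutual_kd_step(logits_ct, logits_mr, tgt, tgt))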
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
- Shape-aware Meta-learning for Generalizing Prostate MRI Segmentation to Unseen Domains [68.73614619875814]
We present a novel shape-aware meta-learning scheme to improve the model generalization in prostate MRI segmentation.
Experimental results show that our approach outperforms many state-of-the-art generalization methods consistently across all six settings of unseen domains.
arXiv Detail & Related papers (2020-07-04T07:56:02Z)
- Predicting Scores of Medical Imaging Segmentation Methods with Meta-Learning [0.30458514384586394]
We investigate meta-learning for segmentation across ten datasets of different organs and modalities.
We use support vector regression and deep neural networks to learn the relationship between the meta-features and prior model performance.
These results demonstrate the potential of meta-learning in medical imaging.
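The recipe reduces to ordinary regression on dataset-level descriptors; the sketch below uses synthetic meta-features and scores purely for illustration.

    # Hedged sketch: regress prior segmentation performance on dataset meta-features.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    # Meta-features per dataset, e.g. [num images, mean intensity, object-size fraction]
    X = rng.random((10, 3))
    y = 0.6 + 0.3 * X[:, 2] + 0.05 * rng.standard_normal(10)   # synthetic Dice scores

    model = SVR(kernel="rbf", C=1.0).fit(X, y)
    new_dataset = rng.random((1, 3))
    print("predicted Dice:", model.predict(new_dataset))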
arXiv Detail & Related papers (2020-05-08T07:47:52Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
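A hedged sketch of the parameter-sharing idea: every convolutional kernel is shared between CT and MRI, while lightweight modality-specific layers (here, separate batch-norm statistics, an assumed simplification rather than the paper's full architecture) absorb the appearance gap.

    # Hedged sketch of a segmenter with shared conv kernels and per-modality norms.
    import torch

    class SharedKernelSegmenter(torch.nn.Module):
        def __init__(self, num_classes=4):
            super().__init__()
            self.conv1 = torch.nn.Conv2d(1, 16, 3, padding=1)   # shared across modalities
            self.conv2 = torch.nn.Conv2d(16, num_classes, 3, padding=1)
            # One normalization layer per modality.
            self.bn = torch.nn.ModuleDict({"ct": torch.nn.BatchNorm2d(16),
                                           "mr": torch.nn.BatchNorm2d(16)})

        def forward(self, x, modality):
            h = torch.relu(self.bn[modality](self.conv1(x)))
            return self.conv2(h)

    net = SharedKernelSegmenter()
    print(net(torch.randn(2, 1, 64, 64), "ct").shape,
          net(torch.randn(2, 1, 64, 64), "mr").shape)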
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.