Enhancing Modality-Agnostic Representations via Meta-Learning for Brain
Tumor Segmentation
- URL: http://arxiv.org/abs/2302.04308v2
- Date: Tue, 22 Aug 2023 05:23:21 GMT
- Title: Enhancing Modality-Agnostic Representations via Meta-Learning for Brain
Tumor Segmentation
- Authors: Aishik Konwer, Xiaoling Hu, Joseph Bae, Xuan Xu, Chao Chen, Prateek
Prasanna
- Abstract summary: We propose a novel approach to learn enhanced modality-agnostic representations by employing a meta-learning strategy in training.
Our framework significantly outperforms state-of-the-art brain tumor segmentation techniques in missing modality scenarios.
- Score: 16.747365311040863
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In medical vision, different imaging modalities provide complementary
information. However, in practice, not all modalities may be available during
inference or even training. Previous approaches, e.g., knowledge distillation
or image synthesis, often assume the availability of full modalities for all
patients during training; this is unrealistic and impractical due to the
variability in data collection across sites. We propose a novel approach to
learn enhanced modality-agnostic representations by employing a meta-learning
strategy in training, even when only limited full modality samples are
available. Meta-learning enhances partial modality representations to full
modality representations by meta-training on partial modality data and
meta-testing on limited full modality samples. Additionally, we co-supervise
this feature enrichment by introducing an auxiliary adversarial learning
branch. More specifically, a missing modality detector is used as a
discriminator to mimic the full modality setting. Our segmentation framework
significantly outperforms state-of-the-art brain tumor segmentation techniques
in missing modality scenarios.
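The abstract's meta-learning recipe (meta-train on abundant partial-modality data, meta-test on a few full-modality samples so that partial representations are pulled toward full ones) can be illustrated with a toy sketch. This is a minimal first-order (Reptile-style) approximation in NumPy under assumed sizes and a frozen "teacher" encoder standing in for the full-modality representation; the paper's actual architecture, losses, and adversarial branch are not reproduced here, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (all sizes illustrative): 4 MRI modalities, 8 features each.
# A frozen "teacher" produces full-modality representations; the student
# encoder W must reproduce them from partial-modality inputs.
N_MOD, F, D = 4, 8, 16
W_teacher = rng.normal(scale=0.3, size=(N_MOD * F, D))  # frozen full-modality encoder
W = np.zeros((N_MOD * F, D))                            # student encoder

def drop_modalities(x, n_keep):
    """Zero the feature blocks of randomly dropped modalities."""
    keep = rng.choice(N_MOD, size=n_keep, replace=False)
    mask = np.zeros(N_MOD)
    mask[keep] = 1.0
    return x * np.repeat(mask, F)

def loss_and_grad(W, x_partial, target):
    """MSE between the student representation of a partial input and the
    teacher's full-modality representation, plus its gradient in W."""
    err = x_partial @ W - target
    return (err ** 2).mean(), 2.0 * x_partial.T @ err / err.size

def batch(n, n_keep):
    x_full = rng.normal(size=(n, N_MOD * F))
    return drop_modalities(x_full, n_keep), x_full @ W_teacher

def meta_step(W, inner_lr=0.5, meta_lr=0.5, inner_steps=3):
    """One Reptile-style meta-update: adapt on partial-modality batches
    (meta-train), refine on a small full-modality batch (meta-test), then
    move the slow weights toward the adapted ones."""
    W_task = W.copy()
    for _ in range(inner_steps):                         # meta-train: abundant partial data
        _, g = loss_and_grad(W_task, *batch(32, n_keep=2))
        W_task -= inner_lr * g
    _, g = loss_and_grad(W_task, *batch(8, n_keep=4))    # meta-test: few full-modality samples
    W_task -= inner_lr * g
    return W + meta_lr * (W_task - W)

eval_batch = batch(64, n_keep=2)
loss_before, _ = loss_and_grad(W, *eval_batch)
for _ in range(40):
    W = meta_step(W)
loss_after, _ = loss_and_grad(W, *eval_batch)
print(f"partial->full representation gap: {loss_before:.3f} -> {loss_after:.3f}")
```

The gap between partial- and full-modality representations shrinks over meta-updates, which is the enrichment the abstract describes; the paper additionally co-supervises this with a missing-modality-detector discriminator, omitted here for brevity.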
Related papers
- Dealing with All-stage Missing Modality: Towards A Universal Model with Robust Reconstruction and Personalization [14.606035444283984]
Current approaches focus on developing models that handle modality-incomplete inputs during inference.
We propose a robust universal model with modality reconstruction and model personalization.
Our method has been extensively validated on two brain tumor segmentation benchmarks.
arXiv Detail & Related papers (2024-06-04T06:07:24Z)
- Continual Self-supervised Learning: Towards Universal Multi-modal Medical Data Representation Learning [36.33882718631217]
Self-supervised learning is an efficient pre-training method for medical image analysis.
We propose MedCoSS, a continuous self-supervised learning approach for multi-modal medical data.
We conduct continuous self-supervised pre-training on a large-scale multi-modal unlabeled dataset.
arXiv Detail & Related papers (2023-11-29T12:47:42Z)
- DIGEST: Deeply supervIsed knowledGE tranSfer neTwork learning for brain tumor segmentation with incomplete multi-modal MRI scans [16.93394669748461]
Brain tumor segmentation based on multi-modal magnetic resonance imaging (MRI) plays a pivotal role in assisting brain cancer diagnosis, treatment, and postoperative evaluations.
Despite the inspiring performance of existing automatic segmentation methods, complete multi-modal MRI data are often unavailable in real-world clinical applications.
We propose a Deeply supervIsed knowledGE tranSfer neTwork (DIGEST), which achieves accurate brain tumor segmentation under different modality-missing scenarios.
arXiv Detail & Related papers (2022-11-15T09:01:14Z)
- Exploiting modality-invariant feature for robust multimodal emotion recognition with missing modalities [76.08541852988536]
We propose to use invariant features for a missing modality imagination network (IF-MMIN).
We show that the proposed model outperforms all baselines and consistently improves overall emotion recognition performance under uncertain missing-modality conditions.
arXiv Detail & Related papers (2022-10-27T12:16:25Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- ACN: Adversarial Co-training Network for Brain Tumor Segmentation with Missing Modalities [26.394130795896704]
We propose a novel Adversarial Co-training Network (ACN) to solve this issue.
ACN enables a coupled learning process for both full modality and missing modality to supplement each other's domain.
Our proposed method significantly outperforms all state-of-the-art methods under any missing situation.
arXiv Detail & Related papers (2021-06-28T11:53:11Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
- Shape-aware Meta-learning for Generalizing Prostate MRI Segmentation to Unseen Domains [68.73614619875814]
We present a novel shape-aware meta-learning scheme to improve the model generalization in prostate MRI segmentation.
Experimental results show that our approach outperforms many state-of-the-art generalization methods consistently across all six settings of unseen domains.
arXiv Detail & Related papers (2020-07-04T07:56:02Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into the modality-specific appearance code.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.