Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement
and Gated Fusion
- URL: http://arxiv.org/abs/2002.09708v1
- Date: Sat, 22 Feb 2020 14:32:04 GMT
- Title: Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement
and Gated Fusion
- Authors: Cheng Chen, Qi Dou, Yueming Jin, Hao Chen, Jing Qin, Pheng-Ann Heng
- Abstract summary: We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into a modality-specific appearance code and a modality-invariant content code.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
- Score: 71.87627318863612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate medical image segmentation commonly requires effective learning of
the complementary information from multimodal data. However, in clinical
practice, we often encounter the problem of missing imaging modalities. We
tackle this challenge and propose a novel multimodal segmentation framework
which is robust to the absence of imaging modalities. Our network uses feature
disentanglement to decompose the input modalities into the modality-specific
appearance code, which is unique to each modality, and the
modality-invariant content code, which absorbs multimodal information for the
segmentation task. With enhanced modality-invariance, the disentangled content
code from each modality is fused into a shared representation which gains
robustness to missing data. The fusion is achieved via a learning-based
strategy to gate the contribution of different modalities at different
locations. We validate our method on the important yet challenging multimodal
brain tumor segmentation task with the BRATS challenge dataset. While
performing competitively with state-of-the-art approaches when all modalities
are available, our method achieves outstanding robustness under various
missing-modality situations, significantly exceeding the state-of-the-art
method by over 16% on average in Dice score for whole tumor segmentation.
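As a rough illustration of the gated fusion described above, the sketch below weights each modality's content code with a learned, per-location gate and normalises the gates over whichever modalities are actually present, so missing inputs simply drop out of the fused representation. It is a minimal sketch only: the module name GatedContentFusion, the 3D convolutional gating heads, and the softmax normalisation are illustrative assumptions, not the architecture published in the paper.

```python
import torch
import torch.nn as nn


class GatedContentFusion(nn.Module):
    """Sketch of learned, per-location gated fusion of per-modality content codes.

    Each available modality contributes a content feature map; a small conv head
    predicts a gate for that modality at every voxel, and the gates are
    softmax-normalised over the modalities that are actually present, so the
    fused representation degrades gracefully when inputs are missing.
    (Illustrative only; layer sizes and the gating head are assumptions.)
    """

    def __init__(self, n_modalities: int, channels: int):
        super().__init__()
        # One lightweight gating head per modality (hypothetical design choice).
        self.gate_heads = nn.ModuleList(
            [nn.Conv3d(channels, 1, kernel_size=3, padding=1)
             for _ in range(n_modalities)]
        )

    def forward(self, contents, present):
        # contents: list of (B, C, D, H, W) content codes, one per modality
        # present:  (B, M) binary mask marking which modalities are available
        logits = torch.stack(
            [head(c) for head, c in zip(self.gate_heads, contents)], dim=1
        )  # (B, M, 1, D, H, W)
        mask = present.view(present.size(0), -1, 1, 1, 1, 1)
        # Missing modalities get -inf logits so the softmax ignores them.
        logits = logits.masked_fill(mask == 0, float("-inf"))
        gates = torch.softmax(logits, dim=1)        # per-location modality weights
        stacked = torch.stack(contents, dim=1)      # (B, M, C, D, H, W)
        return (gates * stacked).sum(dim=1)         # (B, C, D, H, W) fused code


# Toy usage: four MRI sequences with the third one missing for this sample.
fusion = GatedContentFusion(n_modalities=4, channels=8)
contents = [torch.randn(1, 8, 16, 16, 16) for _ in range(4)]
present = torch.tensor([[1.0, 1.0, 0.0, 1.0]])
fused = fusion(contents, present)  # shape (1, 8, 16, 16, 16)
```

Because absent modalities are masked out before the softmax, the fused code is always a convex combination of the available content codes, which is one simple way to realise the robustness to missing data claimed in the abstract.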
Related papers
- Robust Semi-supervised Multimodal Medical Image Segmentation via Cross Modality Collaboration [21.97457095780378]
We propose a novel semi-supervised multimodal segmentation framework that is robust to scarce labeled data and misaligned modalities.
Our framework employs a novel cross modality collaboration strategy to distill modality-independent knowledge, which is inherently associated with each modality.
It also integrates contrastive consistent learning to regulate anatomical structures, facilitating anatomical-wise prediction alignment on unlabeled data.
arXiv Detail & Related papers (2024-08-14T07:34:12Z) - Modality-Aware and Shift Mixer for Multi-modal Brain Tumor Segmentation [12.094890186803958]
We present a novel Modality-Aware and Shift Mixer that integrates intra-modality and inter-modality dependencies of multi-modal images for effective and robust brain tumor segmentation.
Specifically, we introduce a Modality-Aware module, informed by neuroimaging studies, to model specific modality-pair relationships at low levels, and a Modality-Shift module with specific mosaic patterns to explore the complex relationships across modalities at high levels via self-attention.
arXiv Detail & Related papers (2024-03-04T14:21:51Z) - Cross-modality Guidance-aided Multi-modal Learning with Dual Attention
for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are common in both children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning approach with dual attention to address the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z) - Modality-Agnostic Learning for Medical Image Segmentation Using
Multi-modality Self-distillation [1.815047691981538]
We propose a novel framework, Modality-Agnostic learning through Multi-modality Self-distillation (MAG-MS).
MAG-MS distills knowledge from the fusion of multiple modalities and applies it to enhance representation learning for individual modalities; a rough sketch of this distillation idea appears after this list.
Our experiments on benchmark datasets demonstrate the high efficiency of MAG-MS and its superior segmentation performance.
arXiv Detail & Related papers (2023-06-06T14:48:50Z) - Multi-task Paired Masking with Alignment Modeling for Medical
Vision-Language Pre-training [55.56609500764344]
We propose a unified framework based on Multi-task Paired Masking with Alignment (MPMA) to integrate the cross-modal alignment task into the joint image-text reconstruction framework.
We also introduce a Memory-Augmented Cross-Modal Fusion (MA-CMF) module to fully integrate visual information to assist report reconstruction.
arXiv Detail & Related papers (2023-05-13T13:53:48Z) - Unified Multi-Modal Image Synthesis for Missing Modality Imputation [23.681228202899984]
We propose a novel unified multi-modal image synthesis method for missing modality imputation.
The proposed method is effective in handling various synthesis tasks and shows superior performance compared to previous methods.
arXiv Detail & Related papers (2023-04-11T16:59:15Z) - Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to compensate for the limited data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z) - Towards Cross-modality Medical Image Segmentation with Online Mutual
Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z) - Hi-Net: Hybrid-fusion Network for Multi-modal MR Image Synthesis [143.55901940771568]
We propose a novel Hybrid-fusion Network (Hi-Net) for multi-modal MR image synthesis.
In our Hi-Net, a modality-specific network is utilized to learn representations for each individual modality.
A multi-modal synthesis network is designed to densely combine the latent representation with hierarchical features from each modality.
arXiv Detail & Related papers (2020-02-11T08:26:42Z)