Modality-Aware and Shift Mixer for Multi-modal Brain Tumor Segmentation
- URL: http://arxiv.org/abs/2403.02074v1
- Date: Mon, 4 Mar 2024 14:21:51 GMT
- Title: Modality-Aware and Shift Mixer for Multi-modal Brain Tumor Segmentation
- Authors: Zhongzhen Huang, Linda Wei, Shaoting Zhang, Xiaofan Zhang
- Abstract summary: We present a novel Modality-Aware and Shift Mixer that integrates intra-modality and inter-modality dependencies of multi-modal images for effective and robust brain tumor segmentation.
Specifically, we introduce a Modality-Aware module, motivated by neuroimaging studies, for modeling specific modality-pair relationships at low levels, and a Modality-Shift module with specific mosaic patterns that explores the complex relationships across modalities at high levels via self-attention.
- Score: 12.094890186803958
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Combining images from multi-modalities is beneficial to explore various
information in computer vision, especially in the medical domain. As an
essential part of clinical diagnosis, multi-modal brain tumor segmentation aims
to delineate the malignant entity involving multiple modalities. Although
existing methods have shown remarkable performance on the task, the information
exchange for cross-scale and high-level representation fusion across space and
modality is limited in these methods. In this paper, we present a novel
Modality-Aware and Shift Mixer that integrates intra-modality and
inter-modality dependencies of multi-modal images for effective and robust
brain tumor segmentation. Specifically, we introduce a Modality-Aware module,
motivated by neuroimaging studies, for modeling specific modality-pair
relationships at low levels, and a Modality-Shift module with specific mosaic
patterns that explores the complex relationships across modalities at high
levels via self-attention. Experimentally, we outperform previous
state-of-the-art approaches on the public Brain Tumor Segmentation (BraTS
2021) dataset. Further qualitative experiments demonstrate the efficacy
and robustness of MASM.
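The abstract's Modality-Shift idea can be illustrated with a minimal sketch: features from each modality are interleaved in a mosaic pattern (here, a modality-specific spatial roll) and self-attention is then applied across the modality axis at each location. All function names, the (dy, dx) shift offsets, and the absence of learned query/key/value projections are simplifying assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def modality_shift_tokens(feats, shifts):
    """Interleave modality features in a mosaic pattern by rolling each
    modality's spatial grid with a modality-specific offset.

    feats: array of shape (M, H, W, C) -- one feature map per modality.
    shifts: list of (dy, dx) offsets, one per modality (hypothetical pattern).
    """
    shifted = [np.roll(f, s, axis=(0, 1)) for f, s in zip(feats, shifts)]
    return np.stack(shifted)  # (M, H, W, C)

def cross_modality_attention(tokens):
    """Plain softmax self-attention over the modality axis at every spatial
    location. Queries/keys/values are the tokens themselves in this sketch.
    """
    M, H, W, C = tokens.shape
    t = tokens.transpose(1, 2, 0, 3).reshape(H * W, M, C)   # (HW, M, C)
    scores = t @ t.transpose(0, 2, 1) / np.sqrt(C)          # (HW, M, M)
    scores -= scores.max(axis=-1, keepdims=True)            # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)                # rows sum to 1
    out = attn @ t                                          # (HW, M, C)
    return out.reshape(H, W, M, C).transpose(2, 0, 1, 3)    # (M, H, W, C)

# Toy example: 4 MRI modalities (e.g. T1, T1ce, T2, FLAIR), 8x8 grid, 16 channels.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 8, 16))
mosaic = modality_shift_tokens(feats, shifts=[(0, 0), (0, 1), (1, 0), (1, 1)])
fused = cross_modality_attention(mosaic)
print(fused.shape)  # (4, 8, 8, 16)
```

The roll offsets stand in for the paper's mosaic patterns: after shifting, the tokens attended to at one spatial position come from neighboring positions in other modalities, so the attention mixes information across both space and modality.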
Related papers
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention
for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are common in both children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- Joint Self-Supervised and Supervised Contrastive Learning for Multimodal
MRI Data: Towards Predicting Abnormal Neurodevelopment [5.771221868064265]
We present a novel joint self-supervised and supervised contrastive learning method to learn the robust latent feature representation from multimodal MRI data.
Our method has the capability to facilitate computer-aided diagnosis within clinical practice, harnessing the power of multimodal data.
arXiv Detail & Related papers (2023-12-22T21:05:51Z)
- Modality-Agnostic Learning for Medical Image Segmentation Using
Multi-modality Self-distillation [1.815047691981538]
We propose a novel framework, Modality-Agnostic learning through Multi-modality Self-distillation (MAG-MS).
MAG-MS distills knowledge from the fusion of multiple modalities and applies it to enhance representation learning for individual modalities.
Our experiments on benchmark datasets demonstrate the high efficiency of MAG-MS and its superior segmentation performance.
arXiv Detail & Related papers (2023-06-06T14:48:50Z)
- Exploiting Partial Common Information Microstructure for Multi-Modal
Brain Tumor Segmentation [11.583406152227637]
Learning with multiple modalities is crucial for automated brain tumor segmentation from magnetic resonance imaging data.
Existing approaches are oblivious to partial common information shared by subsets of the modalities.
In this paper, we show that identifying such partial common information can significantly boost the discriminative power of image segmentation models.
arXiv Detail & Related papers (2023-02-06T01:28:52Z)
- Multimodal foundation models are better simulators of the human brain [65.10501322822881]
We present a newly-designed multimodal foundation model pre-trained on 15 million image-text pairs.
We find that both visual and lingual encoders trained multimodally are more brain-like compared with unimodal ones.
arXiv Detail & Related papers (2022-08-17T12:36:26Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Towards Cross-modality Medical Image Segmentation with Online Mutual
Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement
and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into modality-specific appearance codes.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
- Hi-Net: Hybrid-fusion Network for Multi-modal MR Image Synthesis [143.55901940771568]
We propose a novel Hybrid-fusion Network (Hi-Net) for multi-modal MR image synthesis.
In our Hi-Net, a modality-specific network is utilized to learn representations for each individual modality.
A multi-modal synthesis network is designed to densely combine the latent representation with hierarchical features from each modality.
arXiv Detail & Related papers (2020-02-11T08:26:42Z)
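Several entries above, notably the Mutual Knowledge Distillation paper and MAG-MS, rely on soft-label distillation between modality-specific networks. A minimal sketch of the standard temperature-scaled distillation loss follows; the class count, the CT/MR variable names, and the bidirectional pairing are illustrative assumptions, not a reproduction of either paper's method.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened class distributions,
    averaged over pixels and scaled by T^2 (standard soft-label distillation)."""
    p = softmax(teacher_logits / T)
    q = softmax(student_logits / T)
    kl = (p * (np.log(p + 1e-8) - np.log(q + 1e-8))).sum(axis=-1)
    return float(kl.mean() * T * T)

# Mutual distillation: each modality network is simultaneously teacher and
# student, so the loss is computed in both directions.
rng = np.random.default_rng(0)
ct_logits = rng.standard_normal((64, 5))  # 64 pixels, 5 classes (hypothetical)
mr_logits = rng.standard_normal((64, 5))
loss_ct = distillation_loss(ct_logits, mr_logits)  # MR network teaches CT
loss_mr = distillation_loss(mr_logits, ct_logits)  # CT network teaches MR
```

In the mutual setting, both directional losses are added to each network's supervised segmentation loss, so the modality-shared knowledge flows both ways during joint training.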
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.