Exploiting Partial Common Information Microstructure for Multi-Modal
Brain Tumor Segmentation
- URL: http://arxiv.org/abs/2302.02521v2
- Date: Fri, 14 Jul 2023 23:49:10 GMT
- Title: Exploiting Partial Common Information Microstructure for Multi-Modal
Brain Tumor Segmentation
- Authors: Yongsheng Mei, Guru Venkataramani, and Tian Lan
- Abstract summary: Learning with multiple modalities is crucial for automated brain tumor segmentation from magnetic resonance imaging data.
Existing approaches are oblivious to partial common information shared by subsets of the modalities.
In this paper, we show that identifying such partial common information can significantly boost the discriminative power of image segmentation models.
- Score: 11.583406152227637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning with multiple modalities is crucial for automated brain tumor
segmentation from magnetic resonance imaging data. Explicitly optimizing the
common information shared among all modalities (e.g., by maximizing the total
correlation) has been shown to achieve better feature representations and thus
enhance the segmentation performance. However, existing approaches are
oblivious to partial common information shared by subsets of the modalities. In
this paper, we show that identifying such partial common information can
significantly boost the discriminative power of image segmentation models. In
particular, we introduce a novel concept of partial common information mask
(PCI-mask) to provide a fine-grained characterization of what partial common
information is shared by which subsets of the modalities. By solving a masked
correlation maximization and simultaneously learning an optimal PCI-mask, we
identify the latent microstructure of partial common information and leverage
it in a self-attention module to selectively weight different feature
representations in multi-modal data. We implement our proposed framework on the
standard U-Net. Our experimental results on the Multi-modal Brain Tumor
Segmentation Challenge (BraTS) datasets outperform those of state-of-the-art
segmentation baselines, with validation Dice similarity coefficients of 0.920,
0.897, and 0.837 for the whole tumor, tumor core, and enhancing tumor,
respectively, on BraTS-2020.
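To make the core idea concrete, below is a minimal PyTorch sketch, under my own assumptions, of a learnable PCI-mask driving a masked correlation objective: a soft mask over (modality, channel) pairs marks which features are treated as shared by which modality subsets, and a masked correlation term rewards cross-modality agreement only on the selected channels. The names (PCIMask, masked_correlation) are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of a PCI-mask and masked correlation objective.
# Assumes M pooled modality features of shape (B, M, C); NOT the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PCIMask(nn.Module):
    """Learnable soft mask over (modality, channel) pairs.

    An entry near 1 marks a channel as carrying information shared by that
    modality; jointly, a column describes which modality subset shares it.
    """
    def __init__(self, num_modalities: int, channels: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_modalities, channels))

    def forward(self) -> torch.Tensor:
        return torch.sigmoid(self.logits)  # (M, C), values in (0, 1)

def masked_correlation(feats: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Masked surrogate for total correlation across modalities.

    feats: (B, M, C) modality features; mask: (M, C) soft PCI-mask.
    Agreement is measured only on masked-in channels, so partial common
    information among subsets of the modalities can be rewarded.
    """
    z = F.normalize(feats * mask.unsqueeze(0), dim=-1)   # apply mask, unit-normalize
    sim = torch.einsum('bmc,bnc->bmn', z, z)             # pairwise cosine similarities
    m = z.size(1)
    off_diag = sim.sum(dim=(1, 2)) - sim.diagonal(dim1=1, dim2=2).sum(-1)
    return (off_diag / (m * (m - 1))).mean()             # mean cross-modality agreement

# Training sketch: maximize the masked correlation alongside segmentation,
# e.g. loss = dice_loss - lam * masked_correlation(feats, pci_mask()).
```

In the paper's framework the learned mask additionally weights a self-attention module over the multi-modal features; in this sketch it would simply be multiplied into the attention scores. The reported Dice similarity coefficient is the standard overlap measure 2|X∩Y| / (|X| + |Y|) between predicted and ground-truth tumor regions.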
Related papers
- Modality-Aware and Shift Mixer for Multi-modal Brain Tumor Segmentation [12.094890186803958]
We present a novel Modality-Aware and Shift Mixer that integrates intra-modality and inter-modality dependencies of multi-modal images for effective and robust brain tumor segmentation.
Specifically, a Modality-Aware module, informed by neuroimaging studies, models specific modality-pair relationships at low levels, and a Modality-Shift module with specific mosaic patterns explores complex cross-modality relationships at high levels via self-attention.
arXiv Detail & Related papers (2024-03-04T14:21:51Z)
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are especially common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for the optimal image masking ratio and masking strategy (see the masking sketch after this list).
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
arXiv Detail & Related papers (2023-10-06T10:40:46Z)
- mmFormer: Multimodal Medical Transformer for Incomplete Multimodal Learning of Brain Tumor Segmentation [38.22852533584288]
We propose a novel Medical Transformer (mmFormer) for incomplete multimodal learning with three main components.
The proposed mmFormer outperforms the state-of-the-art methods for incomplete multimodal brain tumor segmentation on almost all subsets of incomplete modalities.
arXiv Detail & Related papers (2022-06-06T08:41:56Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Latent Correlation Representation Learning for Brain Tumor Segmentation with Missing MRI Modalities [2.867517731896504]
Accurately segmenting brain tumors from MR images is key to clinical diagnosis and treatment planning.
In clinical practice, however, some imaging modalities are often missing.
We present a novel brain tumor segmentation algorithm with missing modalities.
arXiv Detail & Related papers (2021-04-13T14:21:09Z)
- Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge (see the distillation sketch after this list).
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
- Soft Tissue Sarcoma Co-Segmentation in Combined MRI and PET/CT Data [2.2515303891664358]
Tumor segmentation in multimodal medical images has seen a growing trend towards deep learning based methods.
We propose a simultaneous co-segmentation method, which enables multimodal feature learning through modality-specific encoder and decoder branches.
We demonstrate the effectiveness of our approach on public soft tissue sarcoma data, which comprises MRI (T1 and T2 sequence) and PET/CT scans.
arXiv Detail & Related papers (2020-08-28T09:15:42Z)
- Brain tumor segmentation with missing modalities via latent multi-source correlation representation [6.060020806741279]
A novel correlation representation block is proposed to discover the latent multi-source correlation.
Thanks to the obtained correlation representation, the segmentation becomes more robust in the case of missing modalities.
We evaluate our model on the BraTS 2018 dataset; it outperforms the current state-of-the-art method and produces robust results when one or more modalities are missing.
arXiv Detail & Related papers (2020-03-19T15:47:36Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework that is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into modality-specific appearance codes, which are then combined by gated fusion (see the gating sketch after this list).
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
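Sketch for the decision-based MIM entry above: random patch masking with a tunable mask ratio, which is exactly the quantity an RL policy in that line of work would search over. This is a generic formulation under my own assumptions, not that paper's code.

```python
# Hypothetical sketch of random patch masking for masked image modeling (MIM).
import torch

def random_patch_mask(images: torch.Tensor, patch: int, mask_ratio: float):
    """Zero out a random subset of non-overlapping patches.

    images: (B, C, H, W) with H and W divisible by `patch`.
    Returns the masked images and the boolean per-patch mask (B, N).
    """
    b, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    n = gh * gw
    num_masked = int(n * mask_ratio)
    noise = torch.rand(b, n, device=images.device)   # one score per patch
    rank = noise.argsort(dim=1).argsort(dim=1)       # rank of each patch's score
    masked = rank < num_masked                       # mask the lowest-ranked patches
    grid = masked.view(b, 1, gh, gw).float()
    pixel = grid.repeat_interleave(patch, 2).repeat_interleave(patch, 3)
    return images * (1.0 - pixel), masked

# A reconstruction loss on the masked patches trains the encoder; an RL
# agent could then adjust `mask_ratio` (and the strategy) during training.
```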
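Sketch for the online mutual knowledge distillation entry: the symmetric loss that pushes each modality branch toward the other's softened predictions. Again a generic formulation, not the paper's exact scheme.

```python
# Hypothetical sketch of a mutual (bidirectional) distillation loss.
import torch
import torch.nn.functional as F

def mutual_kd_loss(logits_a: torch.Tensor, logits_b: torch.Tensor,
                   temperature: float = 2.0) -> torch.Tensor:
    """Symmetric KL divergence between two branches' softened predictions.

    logits_*: (B, K) or (B, K, H, W) class logits from each modality branch.
    Targets are detached, so each branch learns from the other's output
    without backpropagating through it.
    """
    t = temperature
    log_pa = F.log_softmax(logits_a / t, dim=1)
    log_pb = F.log_softmax(logits_b / t, dim=1)
    kl_ab = F.kl_div(log_pa, log_pb.exp().detach(), reduction='batchmean')
    kl_ba = F.kl_div(log_pb, log_pa.exp().detach(), reduction='batchmean')
    return (t * t) * (kl_ab + kl_ba)  # t^2 keeps gradient scale comparable

# Each branch's total loss: supervised segmentation loss + lam * mutual_kd_loss.
```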
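Sketch for the feature disentanglement and gated fusion entry: a gating layer that predicts per-modality, per-channel gates and sums the gated features, so a missing modality can be handled by driving its gates toward zero. The layer names are illustrative only.

```python
# Hypothetical sketch of gated fusion over modality-specific feature maps.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse M modality feature maps with learned per-modality channel gates."""
    def __init__(self, num_modalities: int, channels: int):
        super().__init__()
        self.m, self.c = num_modalities, channels
        self.gate = nn.Sequential(  # 1x1 conv predicts a gate per channel
            nn.Conv2d(num_modalities * channels, num_modalities * channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, feats: list) -> torch.Tensor:
        # feats: list of M tensors, each (B, C, H, W).
        x = torch.cat(feats, dim=1)                 # (B, M*C, H, W)
        g = self.gate(x)                            # gates in (0, 1)
        b, _, h, w = x.shape
        gated = (x * g).view(b, self.m, self.c, h, w)
        return gated.sum(dim=1)                     # fused (B, C, H, W)
```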