Prototype-Driven and Multi-Expert Integrated Multi-Modal MR Brain Tumor
Image Segmentation
- URL: http://arxiv.org/abs/2307.12180v1
- Date: Sat, 22 Jul 2023 22:41:11 GMT
- Title: Prototype-Driven and Multi-Expert Integrated Multi-Modal MR Brain Tumor
Image Segmentation
- Authors: Yafei Zhang, Zhiyuan Li, Huafeng Li, Dapeng Tao
- Abstract summary: The impact of information aliasing caused by the mutual inclusion of tumor sub-regions is often ignored.
Existing methods usually make no tailored effort to highlight the features of individual tumor sub-regions.
We propose a multi-modal MR brain tumor segmentation method with tumor prototype-driven and multi-expert integration.
- Score: 23.127213250062425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For multi-modal magnetic resonance (MR) brain tumor image segmentation,
current methods usually directly extract the discriminative features from input
images for tumor sub-region category determination and localization. However,
the impact of information aliasing caused by the mutual inclusion of tumor
sub-regions is often ignored. Moreover, existing methods usually make no
tailored effort to highlight the features of individual tumor sub-regions. To this
end, a multi-modal MR brain tumor segmentation method with tumor
prototype-driven and multi-expert integration is proposed. It can highlight
the features of each tumor sub-region under the guidance of tumor prototypes.
Specifically, to obtain the prototypes with complete information, we propose a
mutual transmission mechanism to transfer different modal features to each
other to address the issues raised by insufficient information on single-modal
features. Furthermore, we devise a prototype-driven feature representation and
fusion method with the learned prototypes, which implants the prototypes into
tumor features and generates corresponding activation maps. With the activation
maps, the sub-region features consistent with the prototype category can be
highlighted. A key information enhancement and fusion strategy with
multi-expert integration is designed to further improve the segmentation
performance. The strategy can integrate the features from different layers of
the extra feature extraction network and the features highlighted by the
prototypes. Experimental results on three competition brain tumor segmentation
datasets prove the superiority of the proposed method.
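The abstract's central idea is that a learned prototype vector per tumor sub-region can be "implanted" into the shared feature map to produce an activation map that highlights the matching sub-region. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the function name, the use of cosine similarity, and the sigmoid squashing are all assumptions made for the example.

```python
import numpy as np

def prototype_activation(features, prototypes):
    """Hypothetical sketch of prototype-driven feature highlighting.

    features:   (C, H, W) fused feature map
    prototypes: (K, C) one learned prototype vector per tumor sub-region
    Returns activation maps (K, H, W) and highlighted features (K, C, H, W).
    """
    C, H, W = features.shape
    flat = features.reshape(C, -1)                                          # (C, H*W)
    # Cosine similarity between each prototype and every spatial position
    f_norm = flat / (np.linalg.norm(flat, axis=0, keepdims=True) + 1e-8)
    p_norm = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    act = (p_norm @ f_norm).reshape(-1, H, W)                               # (K, H, W)
    act = 1.0 / (1.0 + np.exp(-act))                                        # squash to (0, 1)
    # Re-weight the shared features with each sub-region's activation map,
    # so positions resembling prototype k are emphasized in branch k
    highlighted = act[:, None] * features[None]                             # (K, C, H, W)
    return act, highlighted
```

In the paper's pipeline, each of the K highlighted feature tensors would feed a dedicated expert branch before the multi-expert fusion stage; the sketch above only shows the activation-map step.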
Related papers
- multiPI-TransBTS: A Multi-Path Learning Framework for Brain Tumor Image Segmentation Based on Multi-Physical Information [1.7359724605901228]
Brain tumor segmentation (BraTS) plays a critical role in clinical diagnosis, treatment planning, and monitoring the progression of brain tumors.
Due to the variability in tumor appearance, size, and intensity across different MRI modalities, automated segmentation remains a challenging task.
We propose a novel Transformer-based framework, multiPI-TransBTS, which integrates multi-physical information to enhance segmentation accuracy.
arXiv Detail & Related papers (2024-09-18T17:35:19Z) - Modality-Aware and Shift Mixer for Multi-modal Brain Tumor Segmentation [12.094890186803958]
We present a novel Modality Aware and Shift Mixer that integrates intra-modality and inter-modality dependencies of multi-modal images for effective and robust brain tumor segmentation.
Specifically, we introduce a Modality-Aware module, informed by neuroimaging studies, to model specific modality-pair relationships at low levels, and develop a Modality-Shift module with specific mosaic patterns to explore complex cross-modality relationships at high levels via self-attention.
arXiv Detail & Related papers (2024-03-04T14:21:51Z) - Cross-modality Guidance-aided Multi-modal Learning with Dual Attention
for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are common in both children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z) - Prediction of brain tumor recurrence location based on multi-modal
fusion and nonlinear correlation learning [55.789874096142285]
We present a deep learning-based brain tumor recurrence location prediction network.
We first train a multi-modal brain tumor segmentation network on the public dataset BraTS 2021.
Then, the pre-trained encoder is transferred to our private dataset for extracting the rich semantic features.
Two decoders are constructed to jointly segment the present brain tumor and predict its future tumor recurrence location.
arXiv Detail & Related papers (2023-04-11T02:45:38Z) - Exploiting Partial Common Information Microstructure for Multi-Modal
Brain Tumor Segmentation [11.583406152227637]
Learning with multiple modalities is crucial for automated brain tumor segmentation from magnetic resonance imaging data.
Existing approaches are oblivious to partial common information shared by subsets of the modalities.
In this paper, we show that identifying such partial common information can significantly boost the discriminative power of image segmentation models.
arXiv Detail & Related papers (2023-02-06T01:28:52Z) - mmFormer: Multimodal Medical Transformer for Incomplete Multimodal
Learning of Brain Tumor Segmentation [38.22852533584288]
We propose a novel Medical Transformer (mmFormer) for incomplete multimodal learning with three main components.
The proposed mmFormer outperforms the state-of-the-art methods for incomplete multimodal brain tumor segmentation on almost all subsets of incomplete modalities.
arXiv Detail & Related papers (2022-06-06T08:41:56Z) - Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z) - Feature-enhanced Generation and Multi-modality Fusion based Deep Neural
Network for Brain Tumor Segmentation with Missing MR Modalities [2.867517731896504]
The main problem is that not all types of MRIs are always available in clinical exams.
We propose a novel brain tumor segmentation network in the case of missing one or more modalities.
The proposed network consists of three sub-networks: a feature-enhanced generator, a correlation constraint block and a segmentation network.
arXiv Detail & Related papers (2021-11-08T10:59:40Z) - Modality Completion via Gaussian Process Prior Variational Autoencoders
for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE can leverage the Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to utilize the subjects/patients and sub-modalities correlations.
We show the applicability of MGP-VAE on brain tumor segmentation where either, two, or three of four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z) - Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement
and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into the modality-specific appearance code.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z) - Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.