DIGEST: Deeply supervIsed knowledGE tranSfer neTwork learning for brain
tumor segmentation with incomplete multi-modal MRI scans
- URL: http://arxiv.org/abs/2211.07993v1
- Date: Tue, 15 Nov 2022 09:01:14 GMT
- Title: DIGEST: Deeply supervIsed knowledGE tranSfer neTwork learning for brain
tumor segmentation with incomplete multi-modal MRI scans
- Authors: Haoran Li, Cheng Li, Weijian Huang, Xiawu Zheng, Yan Xi, Shanshan Wang
- Abstract summary: Brain tumor segmentation based on multi-modal magnetic resonance imaging (MRI) plays a pivotal role in assisting brain cancer diagnosis, treatment, and postoperative evaluations.
Despite the inspiring performance achieved by existing automatic segmentation methods, complete multi-modal MRI data are often unavailable in real-world clinical applications.
We propose a Deeply supervIsed knowledGE tranSfer neTwork (DIGEST), which achieves accurate brain tumor segmentation under different modality-missing scenarios.
- Score: 16.93394669748461
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Brain tumor segmentation based on multi-modal magnetic resonance imaging
(MRI) plays a pivotal role in assisting brain cancer diagnosis, treatment, and
postoperative evaluations. Despite the inspiring performance achieved by
existing automatic segmentation methods, complete multi-modal MRI data are
often unavailable in real-world clinical applications due to a number of
uncontrollable factors (e.g., different imaging protocols, data corruption, and
patient condition limitations), which leads to a large performance drop in
practice. In this work, we propose a Deeply supervIsed knowledGE tranSfer
neTwork (DIGEST), which achieves accurate brain tumor segmentation under
different modality-missing scenarios. Specifically, a knowledge transfer
learning framework is constructed, enabling a student model to learn
modality-shared semantic information from a teacher model pretrained with the
complete multi-modal MRI data. To simulate all possible modality-missing
conditions for the given multi-modal data, we generate incomplete multi-modal
MRI samples via Bernoulli sampling. Finally, a deeply supervised knowledge
transfer loss is designed to enforce consistency between the teacher and
student networks at different decoding stages, which facilitates the extraction
of inherent and effective modality representations. Experiments on the BraTS
2020 dataset
demonstrate that our method achieves promising results for the incomplete
multi-modal MR image segmentation task.
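To make the paper's two key mechanisms concrete, below is a minimal, hypothetical PyTorch sketch of Bernoulli modality dropout and a deeply supervised teacher-student transfer loss. The function names, keep-probability, and the use of an L2 feature distance are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def bernoulli_modality_dropout(x: torch.Tensor, p_keep: float = 0.5) -> torch.Tensor:
    """Simulate incomplete multi-modal MRI by zeroing out whole modalities.

    x has shape (B, M, H, W) or (B, M, D, H, W), with M modality channels
    (e.g., T1, T1ce, T2, FLAIR). Each modality is kept with probability
    p_keep, and at least one modality is always kept per sample.
    """
    b, m = x.shape[:2]
    mask = torch.bernoulli(torch.full((b, m), p_keep, device=x.device))
    empty = mask.sum(dim=1) == 0
    if empty.any():
        # Re-enable one random modality for samples that lost all of them.
        mask[empty, torch.randint(m, (int(empty.sum()),), device=x.device)] = 1.0
    return x * mask.view(b, m, *([1] * (x.dim() - 2)))

def deeply_supervised_kt_loss(student_feats, teacher_feats):
    """Match student decoder features to frozen teacher features at each decoding stage."""
    return sum(F.mse_loss(s, t.detach()) for s, t in zip(student_feats, teacher_feats))
```

In training, the teacher would be pretrained on complete multi-modal data and frozen, while the student receives the Bernoulli-masked inputs and is optimized with the segmentation loss plus this transfer loss summed over decoder stages.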
Related papers
- Disentangled Multimodal Brain MR Image Translation via Transformer-based Modality Infuser [12.402947207350394]
We propose a transformer-based modality infuser designed to synthesize multimodal brain MR images.
In our method, we extract modality-agnostic features from the encoder and then transform them into modality-specific features.
We carried out experiments on the BraTS 2018 dataset, translating between four MR modalities.
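As a purely schematic analogue of such a modality infuser, the FiLM-style conditioning below turns modality-agnostic features into modality-specific ones via a learned modality embedding; the paper's actual module is transformer-based, so treat this as an assumption-laden sketch.

```python
import torch
import torch.nn as nn

class ModalityInfuser(nn.Module):
    """Condition modality-agnostic features on a target-modality embedding
    (FiLM-style scale and shift; a hypothetical stand-in for the paper's module)."""
    def __init__(self, channels: int, num_modalities: int):
        super().__init__()
        self.embed = nn.Embedding(num_modalities, 2 * channels)

    def forward(self, feat: torch.Tensor, modality: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W); modality: (B,) integer ids of the target modality
        scale, shift = self.embed(modality).chunk(2, dim=-1)
        return feat * scale[..., None, None] + shift[..., None, None]
```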
arXiv Detail & Related papers (2024-02-01T06:34:35Z)
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are common in both children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning framework with dual attention to address the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z)
- SMU-Net: Style matching U-Net for brain tumor segmentation with missing modalities [4.855689194518905]
We propose a style matching U-Net (SMU-Net) for brain tumor segmentation on MRI images.
Our co-training approach utilizes a content and style-matching mechanism to distill the informative features from the full-modality network into a missing modality network.
Our style matching module adaptively recalibrates the representation space by learning a matching function to transfer the informative and textural features from a full-modality path into a missing-modality path.
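A rough sketch of one way such a content- and style-matching distillation loss could look, assuming Gram-matrix style statistics (the paper's learned matching function may differ):

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    # feat: (B, C, H, W) -> (B, C, C) channel Gram matrix capturing "style" statistics
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_content_matching_loss(full_feat, missing_feat, content_w=1.0, style_w=1.0):
    # Content matching: pull missing-modality features toward full-modality features.
    content_loss = F.mse_loss(missing_feat, full_feat.detach())
    # Style matching: align second-order feature statistics between the two paths.
    style_loss = F.mse_loss(gram_matrix(missing_feat), gram_matrix(full_feat).detach())
    return content_w * content_loss + style_w * style_loss
```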
arXiv Detail & Related papers (2022-04-06T17:55:19Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Deep Transfer Learning for Brain Magnetic Resonance Image Multi-class Classification [0.6117371161379209]
We have developed a framework that uses Deep Transfer Learning to perform multi-class classification of tumors in brain MRI images.
Using a novel dataset and two publicly available brain MRI datasets, the proposed approach attained a classification accuracy of 86.40%.
Our experimental results demonstrate that the proposed transfer learning framework is an effective method for brain tumor multi-class classification tasks.
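An illustrative sketch of this kind of transfer learning pipeline is given below; the backbone, class count, and hyperparameters are assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # hypothetical, e.g. glioma, meningioma, pituitary, no tumor

# Load an ImageNet-pretrained backbone and freeze its feature extractor.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():
    param.requires_grad = False
# Replace the classifier head with a new, trainable layer for our classes.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    # images: (B, 3, H, W) MRI slices replicated to 3 channels; labels: (B,)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```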
arXiv Detail & Related papers (2021-06-14T12:19:27Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into a modality-specific appearance code and a modality-invariant content code.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
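A minimal sketch of a gated feature-fusion step under one generic reading of the idea (the paper's exact gating design may differ):

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse per-modality feature maps with learned gates so that missing or
    uninformative modalities can be down-weighted (hypothetical sketch)."""
    def __init__(self, channels: int, num_modalities: int):
        super().__init__()
        self.gates = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels, channels, kernel_size=1), nn.Sigmoid())
            for _ in range(num_modalities)
        )

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # feats: one (B, C, H, W) feature map per available modality
        gated = [gate(f) * f for gate, f in zip(self.gates, feats)]
        return torch.stack(gated, dim=0).sum(dim=0)
```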
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
- MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z)