M2Net: Multi-modal Multi-channel Network for Overall Survival Time
Prediction of Brain Tumor Patients
- URL: http://arxiv.org/abs/2006.10135v2
- Date: Tue, 14 Jul 2020 18:47:11 GMT
- Title: M2Net: Multi-modal Multi-channel Network for Overall Survival Time
Prediction of Brain Tumor Patients
- Authors: Tao Zhou, Huazhu Fu, Yu Zhang, Changqing Zhang, Xiankai Lu, Jianbing
Shen, and Ling Shao
- Abstract summary: Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely, the Multi-modal Multi-channel Network (M2Net).
- Score: 151.4352001822956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Early and accurate prediction of overall survival (OS) time can help to
obtain better treatment planning for brain tumor patients. Although many OS
time prediction methods have been developed and achieve promising results,
several issues remain. First, conventional prediction methods rely on
radiomic features at the local lesion area of a magnetic resonance (MR) volume,
which may not represent the full image or model complex tumor patterns. Second,
different types of scanners (i.e., multi-modal data) are sensitive to different
brain regions, which makes it challenging to effectively exploit the
complementary information across multiple modalities and also preserve the
modality-specific properties. Third, existing methods focus on prediction
models, ignoring complex data-to-label relationships. To address the above
issues, we propose an end-to-end OS time prediction model, namely, the Multi-modal
Multi-channel Network (M2Net). Specifically, we first project the 3D MR volume
onto 2D images in different directions, which reduces computational costs,
while preserving important information and enabling pre-trained models to be
transferred from other tasks. Then, we use a modality-specific network to
extract implicit and high-level features from different MR scans. A multi-modal
shared network is built to fuse these features using a bilinear pooling model,
exploiting their correlations to provide complementary information. Finally, we
integrate the outputs from each modality-specific network and the multi-modal
shared network to generate the final prediction result. Experimental results
demonstrate the superiority of our M2Net model over other methods.
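The sketch below illustrates, in PyTorch, the pipeline the abstract describes: projecting the 3D MR volume onto 2D views in different directions, extracting features with a modality-specific network, fusing the modality features with bilinear pooling in a shared network, and integrating the modality-specific and shared outputs. It is a minimal sketch only; the backbone (ResNet-18), the mean-intensity projection, the feature sizes, the number of modalities, and the averaging used to combine outputs are assumptions for illustration, not the configuration reported in the paper.

```python
# Illustrative sketch only: backbone, projection operator, feature sizes, and
# the way branch/shared outputs are combined are assumptions, not the paper's
# exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


def project_3d_to_2d(volume: torch.Tensor) -> torch.Tensor:
    """Project a 3D MR volume (D, H, W) onto 2D views along three directions.
    Mean-intensity projection is an assumption; the abstract only says the
    volume is projected onto 2D images in different directions."""
    axial, coronal, sagittal = volume.mean(0), volume.mean(1), volume.mean(2)
    target = axial.shape
    views = [axial] + [
        F.interpolate(v[None, None], size=target, mode="bilinear",
                      align_corners=False)[0, 0]
        for v in (coronal, sagittal)
    ]
    return torch.stack(views, dim=0)  # (3, H, W): one projection direction per channel


class ModalityBranch(nn.Module):
    """Modality-specific network: a 2D CNN (pre-trained weights could be
    transferred here) applied to the projected views of one MR modality."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, feat_dim)
        self.head = nn.Linear(feat_dim, 1)  # modality-specific OS-time output

    def forward(self, x):                   # x: (B, 3, H, W)
        feat = self.backbone(x)
        return feat, self.head(feat)


class M2NetSketch(nn.Module):
    def __init__(self, num_modalities: int = 3, feat_dim: int = 256):
        super().__init__()
        self.branches = nn.ModuleList(
            [ModalityBranch(feat_dim) for _ in range(num_modalities)]
        )
        # Multi-modal shared network: bilinear pooling over pairs of modality
        # features, followed by a shared prediction head.
        self.bilinear = nn.Bilinear(feat_dim, feat_dim, feat_dim)
        self.shared_head = nn.Linear(feat_dim, 1)

    def forward(self, inputs):              # inputs: list of (B, 3, H, W) tensors
        feats, branch_preds = [], []
        for x, branch in zip(inputs, self.branches):
            f, p = branch(x)
            feats.append(f)
            branch_preds.append(p)
        pairs = [self.bilinear(feats[i], feats[j])
                 for i in range(len(feats)) for j in range(i + 1, len(feats))]
        shared_pred = self.shared_head(torch.stack(pairs).mean(dim=0))
        # Integrate modality-specific and shared outputs into the final prediction.
        return torch.stack(branch_preds + [shared_pred]).mean(dim=0)
```

A forward pass would take one projected tensor per MR modality, e.g. `model([t1, t1ce, flair])` with each tensor of shape `(batch, 3, H, W)`; the modality names here are only an example and not taken from the paper.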
Related papers
- M2EF-NNs: Multimodal Multi-instance Evidence Fusion Neural Networks for Cancer Survival Prediction [24.323961146023358]
We propose a neural network model called M2EF-NNs for accurate cancer survival prediction.
To capture global information in the images, we use a pre-trained Vision Transformer (ViT) model.
We are the first to apply the Dempster-Shafer evidence theory (DST) to cancer survival prediction.
arXiv Detail & Related papers (2024-08-08T02:31:04Z) - HyperMM : Robust Multimodal Learning with Varying-sized Inputs [4.377889826841039]
HyperMM is an end-to-end framework designed for learning with varying-sized inputs.
We introduce a novel strategy for training a universal feature extractor using a conditional hypernetwork.
We experimentally demonstrate the advantages of our method in two tasks: Alzheimer's disease detection and breast cancer classification.
arXiv Detail & Related papers (2024-07-30T12:13:18Z) - Predicting Infant Brain Connectivity with Federated Multi-Trajectory
GNNs using Scarce Data [54.55126643084341]
Existing deep learning solutions suffer from three major limitations.
We introduce FedGmTE-Net++, a federated graph-based multi-trajectory evolution network.
Using the power of federation, we aggregate local learning across diverse hospitals with limited datasets.
arXiv Detail & Related papers (2024-01-01T10:20:01Z) - BTDNet: a Multi-Modal Approach for Brain Tumor Radiogenomic
Classification [14.547418131610188]
This paper proposes a novel multi-modal approach, BTDNet, to predict MGMT promoter methylation status.
The proposed method outperforms state-of-the-art methods by large margins in the RSNA-ASNR-MICCAI BraTS 2021 Challenge.
arXiv Detail & Related papers (2023-10-05T11:56:06Z) - Dual Multi-scale Mean Teacher Network for Semi-supervised Infection
Segmentation in Chest CT Volume for COVID-19 [76.51091445670596]
Automated detection of lung infections from computed tomography (CT) data plays an important role in combating COVID-19.
Most current COVID-19 infection segmentation methods rely mainly on 2D CT images, which lack 3D sequential constraints.
Existing 3D CT segmentation methods focus on single-scale representations, which do not capture multiple receptive-field sizes over the 3D volume.
arXiv Detail & Related papers (2022-11-10T13:11:21Z) - TMSS: An End-to-End Transformer-based Multimodal Network for
Segmentation and Survival Prediction [0.0]
Oncologists do not analyze data sources in isolation, but rather fuse information from multiple sources, such as medical images and patient history.
This work proposes a deep learning method that mimics oncologists' analytical behavior when quantifying cancer and estimating patient survival.
arXiv Detail & Related papers (2022-09-12T06:22:05Z) - Uncertainty-aware Multi-modal Learning via Cross-modal Random Network
Prediction [22.786774541083652]
We propose a new Uncertainty-aware Multi-modal Learner that estimates uncertainty by measuring feature density via Cross-modal Random Network Prediction (CRNP).
CRNP is designed to require little adaptation to translate between different prediction tasks, while having a stable training process.
arXiv Detail & Related papers (2022-07-22T03:00:10Z) - Modality Completion via Gaussian Process Prior Variational Autoencoders
for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and across sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z) - MS-Net: Multi-Site Network for Improving Prostate Segmentation with
Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z) - Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.