A Novel Autoencoders-LSTM Model for Stroke Outcome Prediction using
Multimodal MRI Data
- URL: http://arxiv.org/abs/2303.09484v1
- Date: Thu, 16 Mar 2023 17:00:45 GMT
- Title: A Novel Autoencoders-LSTM Model for Stroke Outcome Prediction using
Multimodal MRI Data
- Authors: Nima Hatami and Laura Mechtouff and David Rousseau and Tae-Hee Cho and
Omer Eker and Yves Berthezene and Carole Frindel
- Abstract summary: A novel machine learning model is proposed for stroke outcome prediction using multimodal Magnetic Resonance Imaging (MRI).
The proposed model consists of two serial levels of Autoencoders (AEs), where different AEs at level 1 are used for learning unimodal features from different MRI modalities.
The sequences of multimodal features of a given patient are then used by an LSTM network for predicting outcome score.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Patient outcome prediction is critical in management of ischemic stroke. In
this paper, a novel machine learning model is proposed for stroke outcome
prediction using multimodal Magnetic Resonance Imaging (MRI). The proposed
model consists of two serial levels of Autoencoders (AEs), where different AEs
at level 1 are used for learning unimodal features from different MRI
modalities and an AE at level 2 is used to combine the unimodal features into
compressed multimodal features. The sequences of multimodal features of a given
patient are then used by an LSTM network for predicting outcome score. The
proposed AE2-LSTM model is shown to be an effective approach for addressing
the multimodal and volumetric nature of MRI data. Experimental results show
that the proposed AE2-LSTM outperforms existing state-of-the-art models,
achieving the highest AUC (0.71) and the lowest MAE (0.34).
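The two-level pipeline described in the abstract can be sketched at a shape level. Everything below is an illustrative assumption, not the authors' implementation: the autoencoders are stood in for by untrained random linear-tanh encoders (training omitted), a single hand-rolled LSTM cell replaces the full network, and all dimensions are toy sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_encoder(d_in, d_latent):
    """Stand-in for a trained AE encoder: a fixed random linear map + tanh."""
    W = rng.standard_normal((d_latent, d_in)) / np.sqrt(d_in)
    return lambda x: np.tanh(W @ x)

def lstm_outcome(seq, d_hidden=16):
    """Single LSTM cell unrolled over the slice sequence; final state -> score."""
    d_in = seq.shape[1]
    Wx = rng.standard_normal((4 * d_hidden, d_in)) * 0.1
    Wh = rng.standard_normal((4 * d_hidden, d_hidden)) * 0.1
    b = np.zeros(4 * d_hidden)
    h, c = np.zeros(d_hidden), np.zeros(d_hidden)
    for x in seq:
        z = Wx @ x + Wh @ h + b
        i, f, g, o = np.split(z, 4)          # input, forget, cell, output gates
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    w_out = rng.standard_normal(d_hidden) * 0.1
    return sigmoid(w_out @ h)                # scalar outcome score in (0, 1)

# Toy patient: 3 MRI modalities, 20 slices, 256 features per slice.
n_slices, d_slice = 20, 256
modalities = [rng.standard_normal((n_slices, d_slice)) for _ in range(3)]

# Level 1: one AE per modality -> unimodal features for each slice.
level1 = [make_encoder(d_slice, 32) for _ in range(3)]
unimodal = [np.array([enc(s) for s in mod]) for enc, mod in zip(level1, modalities)]

# Level 2: a single AE compresses the concatenated unimodal features.
concat = np.concatenate(unimodal, axis=1)    # shape (20, 96)
level2 = make_encoder(concat.shape[1], 24)
fused = np.array([level2(x) for x in concat])  # shape (20, 24)

# LSTM over the sequence of fused slice features -> outcome score.
score = lstm_outcome(fused)
```

The point of the sketch is the data flow: per-modality compression, cross-modality fusion, then sequential aggregation across the volume, which is how the model handles both the multimodal and volumetric aspects of the MRI data.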
Related papers
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
- A Multi-Grained Symmetric Differential Equation Model for Learning Protein-Ligand Binding Dynamics [74.93549765488103]
In drug discovery, molecular dynamics simulation provides a powerful tool for predicting binding affinities, estimating transport properties, and exploring pocket sites.
We propose NeuralMD, the first machine learning surrogate that can facilitate numerical MD and provide accurate simulations in protein-ligand binding.
We show the efficiency and effectiveness of NeuralMD, with a 2000$\times$ speedup over standard numerical MD simulation, outperforming all other ML approaches by up to 80% under the stability metric.
arXiv Detail & Related papers (2024-01-26T09:35:17Z)
- Multi-Dimension-Embedding-Aware Modality Fusion Transformer for Psychiatric Disorder Classification [13.529183496842819]
We construct a deep learning architecture that takes as input 2D time series of rs-fMRI and 3D T1w volumes.
We show that the proposed MFFormer performs better than models using a single modality or multi-modality MRI alone on schizophrenia and bipolar disorder diagnosis.
arXiv Detail & Related papers (2023-10-04T10:02:04Z)
- CoLa-Diff: Conditional Latent Diffusion Model for Multi-Modal MRI Synthesis [11.803971719704721]
Most diffusion-based MRI synthesis models use a single modality.
We propose the first diffusion-based multi-modality MRI synthesis model, namely the Conditioned Latent Diffusion Model (CoLa-Diff).
Our experiments demonstrate that CoLa-Diff outperforms other state-of-the-art MRI synthesis methods.
arXiv Detail & Related papers (2023-03-24T15:46:10Z)
- CNN-LSTM Based Multimodal MRI and Clinical Data Fusion for Predicting Functional Outcome in Stroke Patients [1.5250925845050138]
Clinical outcome prediction plays an important role in stroke patient management.
From a machine learning point-of-view, one of the main challenges is dealing with heterogeneous data.
In this paper, a multimodal convolutional neural network - long short-term memory (CNN-LSTM) based ensemble model is proposed.
arXiv Detail & Related papers (2022-05-11T14:46:01Z)
- A Learnable Variational Model for Joint Multimodal MRI Reconstruction and Synthesis [4.056490719080639]
We propose a novel deep-learning model for joint reconstruction and synthesis of multi-modal MRI.
The output of our model includes reconstructed images of the source modalities and a high-quality image synthesized in the target modality.
arXiv Detail & Related papers (2022-04-08T01:35:19Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Deep Learning based Multi-modal Computing with Feature Disentanglement for MRI Image Synthesis [8.363448006582065]
We propose a deep learning based multi-modal computing model for MRI synthesis with feature disentanglement strategy.
The proposed approach decomposes each input modality into modality-invariant space with shared information and modality-specific space with specific information.
To address the lack of specific information of the target modality in the test phase, a local adaptive fusion (LAF) module is adopted to generate a modality-like pseudo-target.
arXiv Detail & Related papers (2021-05-06T17:22:22Z)
- Confidence-guided Lesion Mask-based Simultaneous Synthesis of Anatomic and Molecular MR Images in Patients with Post-treatment Malignant Gliomas [65.64363834322333]
Confidence Guided SAMR (CG-SAMR) synthesizes data from lesion information to multi-modal anatomic sequences.
A confidence-guided module steers the synthesis based on a confidence measure of the intermediate results.
Experiments on real clinical data demonstrate that the proposed model performs better than state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-08-06T20:20:22Z)
- Lesion Mask-based Simultaneous Synthesis of Anatomic and Molecular MR Images using a GAN [59.60954255038335]
The proposed framework consists of a stretch-out up-sampling module, a brain atlas encoder, a segmentation consistency module, and multi-scale label-wise discriminators.
Experiments on real clinical data demonstrate that the proposed model can perform significantly better than the state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-06-26T02:50:09Z)
- M2Net: Multi-modal Multi-channel Network for Overall Survival Time Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net).
arXiv Detail & Related papers (2020-06-01T05:21:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.