Predicting recovery following stroke: deep learning, multimodal data and
feature selection using explainable AI
- URL: http://arxiv.org/abs/2310.19174v1
- Date: Sun, 29 Oct 2023 22:31:20 GMT
- Authors: Adam White, Margarita Saranti, Artur d'Avila Garcez, Thomas M. H.
Hope, Cathy J. Price, Howard Bowman
- Abstract summary: Major challenges include the very high dimensionality of neuroimaging data and the relatively small size of the datasets available for learning.
We introduce a novel approach of training a convolutional neural network (CNN) on images that combine regions-of-interest extracted from MRIs.
We conclude by proposing how the current models could be improved to achieve even higher levels of accuracy using images from hospital scanners.
- Score: 3.797471910783104
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning offers great potential for automated prediction of
post-stroke symptoms and their response to rehabilitation. Major challenges for
this endeavour include the very high dimensionality of neuroimaging data, the
relatively small size of the datasets available for learning, and how to
effectively combine neuroimaging and tabular data (e.g. demographic information
and clinical characteristics). This paper evaluates several solutions based on
two strategies. The first is to use 2D images that summarise MRI scans. The
second is to select key features that improve classification accuracy.
Additionally, we introduce the novel approach of training a convolutional
neural network (CNN) on images that combine regions-of-interest extracted from
MRIs, with symbolic representations of tabular data. We evaluate a series of
CNN architectures (both 2D and 3D) that are trained on different
representations of MRI and tabular data, to predict whether a composite measure
of post-stroke spoken picture description ability is in the aphasic or
non-aphasic range. MRI and tabular data were acquired from 758 English speaking
stroke survivors who participated in the PLORAS study. The classification
accuracy for a baseline logistic regression was 0.678 for lesion size alone,
rising to 0.757 and 0.813 when initial symptom severity and recovery time were
successively added. The highest classification accuracy 0.854 was observed when
8 regions-of-interest was extracted from each MRI scan and combined with lesion
size, initial severity and recovery time in a 2D Residual Neural Network.Our
findings demonstrate how imaging and tabular data can be combined for high
post-stroke classification accuracy, even when the dataset is small in machine
learning terms. We conclude by proposing how the current models could be
improved to achieve even higher levels of accuracy using images from hospital
scanners.
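The combined-input strategy described in the abstract can be sketched as follows. This is a hypothetical illustration only, not the authors' implementation: the patch sizes, the stripe encoding of tabular values, and all names (`combine_roi_and_tabular`, `stripe_height`) are assumptions. The idea shown is tiling regions-of-interest extracted from an MRI into one 2D image and appending tabular features (e.g. lesion size, initial severity, recovery time) as symbolic intensity stripes, so a single-channel 2D CNN can consume both modalities.

```python
import numpy as np

def combine_roi_and_tabular(rois, tabular, stripe_height=8):
    """Tile 8 equally sized 2D ROI patches into a grid and append each
    tabular value (scaled to [0, 1]) as a uniform intensity stripe.
    Illustrative sketch only."""
    top = np.hstack(rois[:4])          # first row of 4 ROI patches
    bottom = np.hstack(rois[4:])       # second row of 4 ROI patches
    image = np.vstack([top, bottom])   # e.g. 128 x 256 grid of ROIs

    # One horizontal stripe per tabular feature, spanning the full width
    width = image.shape[1]
    stripes = [np.full((stripe_height, width), v, dtype=image.dtype)
               for v in tabular]
    return np.vstack([image] + stripes)

# Example: 8 random 64x64 ROI patches plus 3 scaled tabular features
rois = [np.random.rand(64, 64).astype(np.float32) for _ in range(8)]
tabular = [0.42, 0.70, 0.15]  # lesion size, severity, recovery time (scaled)
combined = combine_roi_and_tabular(rois, tabular)
print(combined.shape)  # (152, 256): 128 ROI rows + 3 stripes of 8 rows
```

The resulting array can then be fed to any 2D CNN; rendering tabular values into the image itself is one simple way to fuse modalities without a separate tabular input branch.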
Related papers
- Self-Supervised Pretext Tasks for Alzheimer's Disease Classification using 3D Convolutional Neural Networks on Large-Scale Synthetic Neuroimaging Dataset [11.173478552040441]
Alzheimer's Disease (AD) induces both localised and widespread neural degenerative changes throughout the brain.
In this work, we evaluated several unsupervised methods to train a feature extractor for downstream AD vs. CN classification.
arXiv Detail & Related papers (2024-06-20T11:26:32Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- DynDepNet: Learning Time-Varying Dependency Structures from fMRI Data via Dynamic Graph Structure Learning [58.94034282469377]
We propose DynDepNet, a novel method for learning the optimal time-varying dependency structure of fMRI data induced by downstream prediction tasks.
Experiments on real-world fMRI datasets, for the task of sex classification, demonstrate that DynDepNet achieves state-of-the-art results.
arXiv Detail & Related papers (2022-09-27T16:32:11Z)
- Automated SSIM Regression for Detection and Quantification of Motion Artefacts in Brain MR Images [54.739076152240024]
Motion artefacts in magnetic resonance brain images are a crucial issue.
The assessment of MR image quality is fundamental before proceeding with the clinical diagnosis.
An automated image quality assessment based on the structural similarity index (SSIM) regression has been proposed here.
arXiv Detail & Related papers (2022-06-14T10:16:54Z)
- Weakly-supervised Biomechanically-constrained CT/MRI Registration of the Spine [72.85011943179894]
We propose a weakly-supervised deep learning framework that preserves the rigidity and the volume of each vertebra while maximizing the accuracy of the registration.
We specifically design these losses to depend only on the CT label maps since automatic vertebra segmentation in CT gives more accurate results contrary to MRI.
Our results show that adding the anatomy-aware losses increases the plausibility of the inferred transformation while keeping the accuracy untouched.
arXiv Detail & Related papers (2022-05-16T10:59:55Z)
- Evaluating U-net Brain Extraction for Multi-site and Longitudinal Preclinical Stroke Imaging [0.4310985013483366]
Convolutional neural networks (CNNs) can improve accuracy and reduce operator time.
We developed a deep-learning mouse brain extraction tool by using a U-net CNN.
We trained, validated, and tested a typical U-net model on 240 multimodal MRI datasets.
arXiv Detail & Related papers (2022-03-11T02:00:27Z)
- A Novel Framework for Brain Tumor Detection Based on Convolutional Variational Generative Models [6.726255259929498]
This paper introduces a novel framework for brain tumor detection and classification.
The proposed framework acquires an overall detection accuracy of 96.88%.
It highlights the promise of the proposed framework as an accurate low-overhead brain tumor detection system.
arXiv Detail & Related papers (2022-02-20T16:14:01Z)
- Medulloblastoma Tumor Classification using Deep Transfer Learning with Multi-Scale EfficientNets [63.62764375279861]
We propose an end-to-end MB tumor classification and explore transfer learning with various input sizes and matching network dimensions.
Using a data set with 161 cases, we demonstrate that pre-trained EfficientNets with larger input resolutions lead to significant performance improvements.
arXiv Detail & Related papers (2021-09-10T13:07:11Z)
- DFENet: A Novel Dimension Fusion Edge Guided Network for Brain MRI Segmentation [0.0]
We propose a novel Dimension Fusion Edge-guided network (DFENet) that can meet both of these requirements by fusing the features of 2D and 3D CNNs.
The proposed model is robust, accurate, superior to the existing methods, and can be relied upon for biomedical applications.
arXiv Detail & Related papers (2021-05-17T15:43:59Z)
- Fader Networks for domain adaptation on fMRI: ABIDE-II study [68.5481471934606]
We use 3D convolutional autoencoders to build the domain irrelevant latent space image representation and demonstrate this method to outperform existing approaches on ABIDE data.
arXiv Detail & Related papers (2020-10-14T16:50:50Z)
- Comparison of Convolutional neural network training parameters for detecting Alzheimer's disease and effect on visualization [0.0]
Convolutional neural networks (CNN) have become a powerful tool for detecting patterns in image data.
Despite the high accuracy obtained from CNN models for MRI data so far, almost no papers provided information on the features or image regions driving this accuracy.
arXiv Detail & Related papers (2020-08-18T15:21:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.