Interpretable 3D Multi-Modal Residual Convolutional Neural Network for
Mild Traumatic Brain Injury Diagnosis
- URL: http://arxiv.org/abs/2309.12572v1
- Date: Fri, 22 Sep 2023 01:58:27 GMT
- Title: Interpretable 3D Multi-Modal Residual Convolutional Neural Network for
Mild Traumatic Brain Injury Diagnosis
- Authors: Hanem Ellethy, Viktor Vegh and Shekhar S. Chandra
- Abstract summary: We introduce an interpretable 3D Multi-Modal Residual Convolutional Neural Network (MRCNN) diagnostic model for mTBI, enhanced with Occlusion Sensitivity Maps (OSM).
Our MRCNN model exhibits promising performance in mTBI diagnosis, demonstrating an average accuracy of 82.4%, sensitivity of 82.6%, and specificity of 81.6%, as validated by a five-fold cross-validation process.
- Score: 1.0621519762024807
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Mild Traumatic Brain Injury (mTBI) is a significant public health challenge
due to its high prevalence and potential for long-term health effects. Although
Computed Tomography (CT) is the standard diagnostic tool for mTBI, it often
yields normal results in symptomatic mTBI patients, underscoring the difficulty
of accurate diagnosis. In this study, we introduce an interpretable 3D
Multi-Modal Residual Convolutional Neural Network (MRCNN) diagnostic model for
mTBI, enhanced with Occlusion Sensitivity Maps (OSM). Our
MRCNN model exhibits promising performance in mTBI diagnosis, demonstrating an
average accuracy of 82.4%, sensitivity of 82.6%, and specificity of 81.6%, as
validated by a five-fold cross-validation process. Notably, in comparison to
the CT-based Residual Convolutional Neural Network (RCNN) model, the MRCNN
shows an improvement of 4.4% in specificity and 9.0% in accuracy. We show that
the OSM offers superior data-driven insights into CT images compared to the
Grad-CAM approach. These results highlight the efficacy of the proposed
multi-modal model in enhancing diagnostic precision for mTBI.
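Below is a minimal sketch of how an occlusion sensitivity map of the kind described above can be computed for a 3D CT classifier. It is an illustration only: the patch size, stride, zero-fill value, and the `model` callable (any trained 3D CNN returning class logits) are assumptions, not the authors' configuration.

```python
# Hypothetical occlusion sensitivity map (OSM) for a 3D CT classifier.
# The patch size, stride, and fill value are illustrative assumptions.
import torch

@torch.no_grad()
def occlusion_sensitivity_3d(model, volume, target_class, patch=16, stride=8, fill=0.0):
    """volume: (1, 1, D, H, W) CT tensor; returns a (D, H, W) sensitivity map."""
    model.eval()
    base = torch.softmax(model(volume), dim=1)[0, target_class].item()
    _, _, D, H, W = volume.shape
    sens = torch.zeros(D, H, W)
    count = torch.zeros(D, H, W)
    for z in range(0, max(D - patch, 0) + 1, stride):
        for y in range(0, max(H - patch, 0) + 1, stride):
            for x in range(0, max(W - patch, 0) + 1, stride):
                occluded = volume.clone()
                occluded[..., z:z+patch, y:y+patch, x:x+patch] = fill
                prob = torch.softmax(model(occluded), dim=1)[0, target_class].item()
                # A larger probability drop means the occluded region mattered more.
                sens[z:z+patch, y:y+patch, x:x+patch] += base - prob
                count[z:z+patch, y:y+patch, x:x+patch] += 1
    return sens / count.clamp(min=1)
```

Each voxel's score is the drop in the target-class probability when a patch covering it is occluded, averaged over overlapping patch placements.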
Related papers
- AXIAL: Attention-based eXplainability for Interpretable Alzheimer's Localized Diagnosis using 2D CNNs on 3D MRI brain scans [43.06293430764841]
This study presents an innovative method for Alzheimer's disease diagnosis from 3D MRI, designed to enhance the explainability of model decisions.
Our approach adopts a soft attention mechanism, enabling 2D CNNs to extract volumetric representations.
With voxel-level precision, our method identifies which specific areas the model attends to, highlighting the predominant brain regions driving its predictions.
arXiv Detail & Related papers (2024-07-02T16:44:00Z)
- Enhancing mTBI Diagnosis with Residual Triplet Convolutional Neural Network Using 3D CT [1.0621519762024807]
We introduce an innovative approach to enhance mTBI diagnosis using 3D Computed Tomography (CT) images.
We propose a Residual Triplet Convolutional Neural Network (RTCNN) model to distinguish mTBI cases from healthy controls (a minimal triplet-loss sketch follows this list).
Our RTCNN model shows promising performance in mTBI diagnosis, achieving an average accuracy of 94.3%, a sensitivity of 94.1%, and a specificity of 95.2%.
arXiv Detail & Related papers (2023-11-23T20:41:46Z)
- Diagnosing Bipolar Disorder from 3-D Structural Magnetic Resonance Images Using a Hybrid GAN-CNN Method [0.0]
This study proposes a hybrid GAN-CNN model to diagnose Bipolar Disorder (BD) from 3-D structural MRI (sMRI) images.
The proposed model achieves an accuracy of 75.8%, a sensitivity of 60.3%, and a specificity of 82.5%, which are 3-5% higher than prior work.
arXiv Detail & Related papers (2023-10-11T10:17:41Z)
- An Optimized Ensemble Deep Learning Model For Brain Tumor Classification [3.072340427031969]
Inaccurate identification of brain tumors can significantly diminish life expectancy.
This study introduces an innovative optimization-based deep ensemble approach employing transfer learning (TL) to efficiently classify brain tumors.
Our approach achieves notable accuracy scores, with Xception, ResNet50V2, ResNet152V2, InceptionResNetV2, GAWO, and GSWO attaining 99.42%, 98.37%, 98.22%, 98.26%, 99.71%, and 99.76% accuracy, respectively.
arXiv Detail & Related papers (2023-05-22T09:08:59Z)
- Brain Imaging-to-Graph Generation using Adversarial Hierarchical Diffusion Models for MCI Causality Analysis [44.45598796591008]
A brain imaging-to-graph generation (BIGG) framework is proposed to map functional magnetic resonance imaging (fMRI) into effective connectivity for mild cognitive impairment analysis.
The hierarchical transformers in the generator are designed to estimate the noise at multiple scales.
Evaluations on the ADNI dataset demonstrate the feasibility and efficacy of the proposed model.
arXiv Detail & Related papers (2023-05-18T06:54:56Z)
- Diagnose Like a Radiologist: Hybrid Neuro-Probabilistic Reasoning for Attribute-Based Medical Image Diagnosis [42.624671531003166]
We introduce a hybrid neuro-probabilistic reasoning algorithm for verifiable attribute-based medical image diagnosis.
We have successfully applied our hybrid reasoning algorithm to two challenging medical image diagnosis tasks.
arXiv Detail & Related papers (2022-08-19T12:06:46Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Multi-Scale Convolutional Neural Network for Automated AMD Classification using Retinal OCT Images [1.299941371793082]
Age-related macular degeneration (AMD) is the most common cause of blindness in developed countries, especially in people over 60 years of age.
Recent developments in deep learning have provided a unique opportunity for the development of fully automated diagnosis frameworks.
We propose a multi-scale convolutional neural network (CNN) capable of distinguishing pathologies using receptive fields with various sizes.
arXiv Detail & Related papers (2021-10-06T18:20:58Z)
- Deep Implicit Statistical Shape Models for 3D Medical Image Delineation [47.78425002879612]
3D delineation of anatomical structures is a cardinal goal in medical imaging analysis.
Prior to deep learning, statistical shape models that imposed anatomical constraints and produced high quality surfaces were a core technology.
We present deep implicit statistical shape models (DISSMs), a new approach to delineation that marries the representation power of CNNs with the robustness of SSMs.
arXiv Detail & Related papers (2021-04-07T01:15:06Z)
- MRI brain tumor segmentation and uncertainty estimation using 3D-UNet architectures [0.0]
This work studies 3D encoder-decoder architectures trained with patch-based techniques to reduce memory consumption and decrease the effect of unbalanced data.
We also introduce voxel-wise uncertainty information, both epistemic and aleatoric, using test-time dropout (TTD) and test-time data augmentation (TTA), respectively.
The model and uncertainty estimation measures proposed in this work were used in the BraTS'20 Challenge for tasks 1 and 3, covering tumor segmentation and uncertainty estimation.
arXiv Detail & Related papers (2020-12-30T19:28:53Z)
- Scale-Space Autoencoders for Unsupervised Anomaly Segmentation in Brain MRI [47.26574993639482]
We show improved anomaly segmentation performance and the general capability to obtain much crisper reconstructions of input data at native resolution.
Modeling the Laplacian pyramid further enables the delineation and aggregation of lesions at multiple scales.
arXiv Detail & Related papers (2020-06-23T09:20:42Z)
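As referenced in the RTCNN entry above, the triplet idea trains an embedding so that scans with the same label sit close together and scans with different labels sit far apart. The following is a hypothetical PyTorch sketch of one training step; the encoder (any 3D residual CNN producing an embedding vector), the margin, and the triplet sampling are assumptions rather than the paper's implementation.

```python
# Illustrative triplet-loss training step for an mTBI-vs-control embedding model.
# `encoder` is assumed to be a 3D residual CNN mapping a CT volume to a vector.
import torch
import torch.nn as nn

triplet_loss = nn.TripletMarginLoss(margin=1.0)

def triplet_step(encoder, optimizer, anchor, positive, negative):
    """anchor and positive share a label (e.g. mTBI); negative has the other label."""
    optimizer.zero_grad()
    loss = triplet_loss(encoder(anchor), encoder(positive), encoder(negative))
    loss.backward()
    optimizer.step()
    return loss.item()
```

The margin loss only penalizes triplets where the negative is not at least the margin farther from the anchor than the positive, which concentrates training on hard-to-separate cases.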