Enhancing mTBI Diagnosis with Residual Triplet Convolutional Neural
Network Using 3D CT
- URL: http://arxiv.org/abs/2311.14197v1
- Date: Thu, 23 Nov 2023 20:41:46 GMT
- Title: Enhancing mTBI Diagnosis with Residual Triplet Convolutional Neural
Network Using 3D CT
- Authors: Hanem Ellethy, Shekhar S. Chandra and Viktor Vegh
- Abstract summary: We introduce an innovative approach to enhance mTBI diagnosis using 3D Computed Tomography (CT) images.
We propose a Residual Triplet Convolutional Neural Network (RTCNN) model to distinguish between mTBI cases and healthy ones.
Our RTCNN model shows promising performance in mTBI diagnosis, achieving an average accuracy of 94.3%, a sensitivity of 94.1%, and a specificity of 95.2%.
- Score: 1.0621519762024807
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Mild Traumatic Brain Injury (mTBI) is a common and challenging condition to
diagnose accurately. Timely and precise diagnosis is essential for effective
treatment and improved patient outcomes. Traditional diagnostic methods for
mTBI are often limited in accuracy and sensitivity. In this
study, we introduce an innovative approach to enhance mTBI diagnosis using 3D
Computed Tomography (CT) images and a metric learning technique trained with
triplet loss. To address these challenges, we propose a Residual Triplet
Convolutional Neural Network (RTCNN) model that distinguishes mTBI cases
from healthy controls by embedding 3D CT scans into a feature space. The
triplet loss function enforces a margin between dissimilar image pairs while
drawing similar pairs together, optimizing the learned feature
representations. This facilitates better contextual placement of individual
cases, aids informed decision-making, and has the potential to
improve patient outcomes. Our RTCNN model shows promising performance in mTBI
diagnosis, achieving an average accuracy of 94.3%, a sensitivity of 94.1%, and
a specificity of 95.2%, as confirmed through a five-fold cross-validation.
Importantly, when compared to the conventional Residual Convolutional Neural
Network (RCNN) model, the RTCNN achieves a 22.5% increase in specificity, a
16.2% gain in accuracy, and an 11.3% improvement in sensitivity. Moreover, the
RTCNN requires lower memory
resources, making it not only highly effective but also resource-efficient in
minimizing false positives while maximizing its diagnostic accuracy in
distinguishing normal CT scans from mTBI cases. The reported quantitative
performance metrics, together with occlusion sensitivity maps that visually
explain the model's decision-making process, further enhance the
interpretability and transparency of our approach.
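The triplet-loss objective described above can be sketched in a few lines (a generic illustration, not the authors' code; the embeddings, dimensions, and margin value are invented for the example):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: pull the anchor toward the positive
    embedding and push it at least `margin` away from the negative."""
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to same-class sample
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to other-class sample
    return max(d_pos - d_neg + margin, 0.0)

# Toy 4-dimensional embeddings (stand-ins for RTCNN outputs on 3D CT scans).
anchor   = np.array([1.0, 0.0, 0.0, 0.0])
positive = np.array([0.9, 0.1, 0.0, 0.0])  # another mTBI-case embedding
negative = np.array([0.0, 1.0, 0.0, 0.0])  # a healthy-control embedding

loss = triplet_loss(anchor, positive, negative)
# Well-separated triplet: the loss is zero and gradients vanish,
# so training focuses on triplets that still violate the margin.
```

During training, the loss is averaged over mined triplets and backpropagated through the shared embedding network.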
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study utilizes a specialized MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z)
- ICHPro: Intracerebral Hemorrhage Prognosis Classification Via Joint-attention Fusion-based 3D Cross-modal Network [19.77538127076489]
Intracerebral Hemorrhage (ICH) is the deadliest subtype of stroke, necessitating timely and accurate prognostic evaluation to reduce mortality and disability.
We propose a joint-attention fusion-based 3D cross-modal network termed ICHPro that simulates the ICH prognosis interpretation process utilized by neurosurgeons.
arXiv Detail & Related papers (2024-02-17T15:31:46Z)
- Interpretable 3D Multi-Modal Residual Convolutional Neural Network for Mild Traumatic Brain Injury Diagnosis [1.0621519762024807]
We introduce an interpretable 3D Multi-Modal Residual Convolutional Neural Network (MRCNN) diagnostic model for mTBI, enhanced with Occlusion Sensitivity Maps (OSM).
Our MRCNN model exhibits promising performance in mTBI diagnosis, demonstrating an average accuracy of 82.4%, sensitivity of 82.6%, and specificity of 81.6%, as validated by a five-fold cross-validation process.
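The occlusion sensitivity maps mentioned above follow a simple recipe: mask a patch of the input, re-score, and record the score drop. A toy 2D sketch (the scoring function and image are invented; real usage slides a 3D patch over CT volumes):

```python
import numpy as np

def occlusion_map(image, score_fn, patch=2, fill=0.0):
    """Slide an occluding patch over the image and record how much the
    model's score drops at each position; large drops mark salient regions."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h - patch + 1, w - patch + 1))
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill  # gray out one region
            heat[i, j] = base - score_fn(occluded)     # score drop = importance
    return heat

# Toy "model": the score is the mean intensity of the top-left quadrant,
# so only occlusions touching that region should register in the map.
def score_fn(img):
    return img[:2, :2].mean()

image = np.ones((4, 4))
heat = occlusion_map(image, score_fn)
```

Overlaying `heat` on the input is what yields the visual explanations these papers report.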
arXiv Detail & Related papers (2023-09-22T01:58:27Z)
- The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z)
- An Optimized Ensemble Deep Learning Model For Brain Tumor Classification [3.072340427031969]
Inaccurate identification of brain tumors can significantly diminish life expectancy.
This study introduces an innovative optimization-based deep ensemble approach employing transfer learning (TL) to efficiently classify brain tumors.
Our approach achieves notable accuracy scores, with Xception, ResNet50V2, ResNet152V2, InceptionResNetV2, GAWO, and GSWO attaining 99.42%, 98.37%, 98.22%, 98.26%, 99.71%, and 99.76% accuracy, respectively.
arXiv Detail & Related papers (2023-05-22T09:08:59Z)
- Acute ischemic stroke lesion segmentation in non-contrast CT images using 3D convolutional neural networks [0.0]
We propose an automatic algorithm aimed at volumetric segmentation of acute ischemic stroke lesion in non-contrast computed tomography brain 3D images.
Our deep-learning approach is based on the popular 3D U-Net convolutional neural network architecture.
arXiv Detail & Related papers (2023-01-17T10:39:39Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and light-weighted learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification is 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Deep Implicit Statistical Shape Models for 3D Medical Image Delineation [47.78425002879612]
3D delineation of anatomical structures is a cardinal goal in medical imaging analysis.
Prior to deep learning, statistical shape models that imposed anatomical constraints and produced high quality surfaces were a core technology.
We present deep implicit statistical shape models (DISSMs), a new approach to delineation that marries the representation power of CNNs with the robustness of SSMs.
arXiv Detail & Related papers (2021-04-07T01:15:06Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
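K-nearest neighbor smoothing of per-sample predictions can be sketched roughly as follows (a minimal illustration; the feature vectors, scores, and k are invented, and the paper's actual MODL/KNNS formulation differs in detail):

```python
import numpy as np

def knn_smooth(features, preds, k=2):
    """Replace each sample's prediction with the average over its k nearest
    neighbors (self included) in feature space, damping outlier scores."""
    n = len(features)
    # Pairwise Euclidean distances between all feature vectors.
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    smoothed = np.empty(n)
    for i in range(n):
        nn = np.argsort(d[i])[:k]          # indices of the k closest samples
        smoothed[i] = preds[nn].mean()     # average their raw predictions
    return smoothed

features = np.array([[0.0], [0.1], [5.0]])  # 1-D toy features
preds = np.array([0.9, 0.1, 0.8])           # noisy per-sample scores
out = knn_smooth(features, preds, k=2)
```

The two nearby samples end up with identical smoothed scores, while the distant one is only mildly adjusted.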
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- MRI brain tumor segmentation and uncertainty estimation using 3D-UNet architectures [0.0]
This work studies 3D encoder-decoder architectures trained with patch-based techniques to reduce memory consumption and decrease the effect of unbalanced data.
We also introduce voxel-wise uncertainty information, both epistemic and aleatoric using test-time dropout (TTD) and data-augmentation (TTA) respectively.
The model and uncertainty estimation measurements proposed in this work have been used in the BraTS'20 Challenge for task 1 and 3 regarding tumor segmentation and uncertainty estimation.
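Test-time dropout for epistemic uncertainty amounts to repeated stochastic forward passes with dropout left on. A sketch with an invented linear "model" (not the BraTS submission's network):

```python
import numpy as np

def mc_dropout_predict(weights, x, n_passes=200, p_drop=0.5, seed=0):
    """Monte Carlo dropout at test time: keep dropout active, run several
    stochastic forward passes, and use mean/std as prediction/uncertainty."""
    rng = np.random.default_rng(seed)
    outs = []
    for _ in range(n_passes):
        mask = rng.random(weights.shape) >= p_drop       # drop units at random
        outs.append(float((weights * mask / (1 - p_drop)) @ x))
    outs = np.array(outs)
    return outs.mean(), outs.std()  # prediction, epistemic uncertainty proxy

weights = np.array([0.5, -0.2, 0.3])  # toy model parameters
x = np.array([1.0, 2.0, 1.0])         # one input sample
mean, std = mc_dropout_predict(weights, x)
```

For voxel-wise segmentation the same loop runs over whole volumes, giving a per-voxel standard-deviation map; test-time augmentation (TTA) replaces the dropout mask with random input transforms to capture aleatoric uncertainty.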
arXiv Detail & Related papers (2020-12-30T19:28:53Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.