ICHPro: Intracerebral Hemorrhage Prognosis Classification via
Joint-Attention Fusion-Based 3D Cross-Modal Network
- URL: http://arxiv.org/abs/2402.11307v1
- Date: Sat, 17 Feb 2024 15:31:46 GMT
- Title: ICHPro: Intracerebral Hemorrhage Prognosis Classification via
Joint-Attention Fusion-Based 3D Cross-Modal Network
- Authors: Xinlei Yu, Xinyang Li, Ruiquan Ge, Shibin Wu, Ahmed Elazab, Jichao
Zhu, Lingyan Zhang, Gangyong Jia, Taosheng Xu, Xiang Wan, Changmiao Wang
- Abstract summary: Intracerebral Hemorrhage (ICH) is the deadliest subtype of stroke, necessitating timely and accurate prognostic evaluation to reduce mortality and disability.
We propose a joint-attention fusion-based 3D cross-modal network termed ICHPro that simulates the ICH prognosis interpretation process utilized by neurosurgeons.
- Score: 19.77538127076489
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intracerebral Hemorrhage (ICH) is the deadliest subtype of stroke,
necessitating timely and accurate prognostic evaluation to reduce mortality and
disability. However, the multi-factorial nature and complexity of ICH make
methods based solely on computed tomography (CT) image features inadequate.
Despite the capacity of cross-modal networks to fuse additional information,
the effective combination of different modal features remains a significant
challenge. In this study, we propose a joint-attention fusion-based 3D
cross-modal network termed ICHPro that simulates the ICH prognosis
interpretation process utilized by neurosurgeons. ICHPro includes a
joint-attention fusion module to fuse features from CT images with demographic
and clinical textual data. To enhance the representation of cross-modal
features, we introduce a joint loss function. ICHPro facilitates the extraction
of richer cross-modal features, thereby improving classification performance.
Upon testing our method using five-fold cross-validation, we achieved an
accuracy of 89.11%, an F1 score of 0.8767, and an AUC value of 0.9429. These
results outperform those obtained by other advanced methods on the test
dataset, demonstrating the superior efficacy of ICHPro. The code is
available on GitHub: https://github.com/YU-deep/ICH.
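The joint-attention fusion idea described in the abstract can be illustrated, in spirit, with a plain cross-attention step in which each image feature token attends over the clinical-text feature tokens. This is a minimal, hypothetical sketch, not the authors' implementation: the function names, the residual-style combination, and the toy dimensions are all assumptions.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(img_tokens, txt_tokens):
    """Fuse image and text features via cross-attention (illustrative only).

    img_tokens: list of d-dim image feature vectors (queries)
    txt_tokens: list of d-dim text feature vectors (keys and values)
    Returns one fused vector per image token: the image feature plus an
    attention-weighted mix of the text features.
    """
    d = len(txt_tokens[0])
    fused = []
    for q in img_tokens:
        # Scaled dot-product scores of this image token against each text token.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in txt_tokens]
        weights = softmax(scores)
        # Attention-weighted sum of text features.
        attended = [sum(w * v[j] for w, v in zip(weights, txt_tokens))
                    for j in range(d)]
        # Residual-style combination of image and attended text features.
        fused.append([qi + aj for qi, aj in zip(q, attended)])
    return fused
```

In a full model this step would sit between the 3D CT feature extractor and the classifier head, with the joint loss shaping both modalities' representations.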
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specialized MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z)
- Enhancing Weakly Supervised 3D Medical Image Segmentation through Probabilistic-aware Learning [52.249748801637196]
3D medical image segmentation is a challenging task with crucial implications for disease diagnosis and treatment planning.
Recent advances in deep learning have significantly enhanced fully supervised medical image segmentation.
We propose a novel probabilistic-aware weakly supervised learning pipeline, specifically designed for 3D medical imaging.
arXiv Detail & Related papers (2024-03-05T00:46:53Z)
- MM-SurvNet: Deep Learning-Based Survival Risk Stratification in Breast Cancer Through Multimodal Data Fusion [18.395418853966266]
We propose a novel deep learning approach for breast cancer survival risk stratification.
We employ vision transformers, specifically the MaxViT model, for image feature extraction, and self-attention to capture intricate image relationships at the patient level.
A dual cross-attention mechanism fuses these features with genetic data, while clinical data is incorporated at the final layer to enhance predictive accuracy.
arXiv Detail & Related papers (2024-02-19T02:31:36Z)
- Parkinson's Disease Classification Using Contrastive Graph Cross-View Learning with Multimodal Fusion of SPECT Images and Clinical Features [5.660131312162423]
Parkinson's Disease (PD) affects millions globally, impacting movement.
Prior research utilized deep learning for PD prediction, primarily focusing on medical images, neglecting the data's underlying manifold structure.
This work proposes a multimodal approach encompassing both image and non-image features, leveraging contrastive cross-view graph fusion for PD classification.
arXiv Detail & Related papers (2023-11-25T02:32:46Z)
- Enhancing mTBI Diagnosis with Residual Triplet Convolutional Neural Network Using 3D CT [1.0621519762024807]
We introduce an innovative approach to enhance mTBI diagnosis using 3D Computed Tomography (CT) images.
We propose a Residual Triplet Convolutional Neural Network (RTCNN) model to distinguish between mTBI cases and healthy ones.
Our RTCNN model shows promising performance in mTBI diagnosis, achieving an average accuracy of 94.3%, a sensitivity of 94.1%, and a specificity of 95.2%.
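The triplet objective behind a residual triplet network like RTCNN can be illustrated with the standard triplet margin loss; this is a generic textbook sketch, not code from the paper, and the margin value is an assumption.

```python
import math

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss on embedding vectors (Euclidean distance).

    Pulls the anchor toward the positive (same class, e.g. another mTBI scan)
    and pushes it away from the negative (other class) by at least `margin`.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return max(dist(anchor, positive) - dist(anchor, negative) + margin, 0.0)
```

The loss is zero once the negative is farther from the anchor than the positive by the margin, which is what drives the embedding space to separate the two classes.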
arXiv Detail & Related papers (2023-11-23T20:41:46Z)
- The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z)
- Brain Imaging-to-Graph Generation using Adversarial Hierarchical Diffusion Models for MCI Causality Analysis [44.45598796591008]
Brain imaging-to-graph generation (BIGG) framework is proposed to map functional magnetic resonance imaging (fMRI) into effective connectivity for mild cognitive impairment analysis.
The hierarchical transformers in the generator are designed to estimate the noise at multiple scales.
Evaluations of the ADNI dataset demonstrate the feasibility and efficacy of the proposed model.
arXiv Detail & Related papers (2023-05-18T06:54:56Z)
- MAF-Net: Multiple attention-guided fusion network for fundus vascular image segmentation [1.3295074739915493]
We propose a multiple attention-guided fusion network (MAF-Net) to accurately detect blood vessels in retinal fundus images.
Traditional UNet-based models may lose partial information because they cannot explicitly model long-distance dependencies.
We show that our method produces satisfactory results compared to some state-of-the-art methods.
arXiv Detail & Related papers (2023-05-05T15:22:20Z)
- TranSOP: Transformer-based Multimodal Classification for Stroke Treatment Outcome Prediction [2.358784542343728]
We propose a transformer-based multimodal network (TranSOP) for a classification approach that employs clinical metadata and imaging information.
This includes a fusion module to efficiently combine 3D non-contrast computed tomography (NCCT) features and clinical information.
In comparative experiments using unimodal and multimodal data, we achieve a state-of-the-art AUC score of 0.85.
arXiv Detail & Related papers (2023-01-25T21:05:10Z)
- Affinity Feature Strengthening for Accurate, Complete and Robust Vessel Segmentation [48.638327652506284]
Vessel segmentation is crucial in many medical image applications, such as detecting coronary stenoses, retinal vessel diseases and brain aneurysms.
We present a novel approach, the affinity feature strengthening network (AFN), which jointly models geometry and refines pixel-wise segmentation features using a contrast-insensitive, multiscale affinity approach.
arXiv Detail & Related papers (2022-11-12T05:39:17Z)
- Diagnose Like a Radiologist: Hybrid Neuro-Probabilistic Reasoning for Attribute-Based Medical Image Diagnosis [42.624671531003166]
We introduce a hybrid neuro-probabilistic reasoning algorithm for verifiable attribute-based medical image diagnosis.
We have successfully applied our hybrid reasoning algorithm to two challenging medical image diagnosis tasks.
arXiv Detail & Related papers (2022-08-19T12:06:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.