Detection and Localization of Subdural Hematoma Using Deep Learning on Computed Tomography
- URL: http://arxiv.org/abs/2512.09393v1
- Date: Wed, 10 Dec 2025 07:37:42 GMT
- Title: Detection and Localization of Subdural Hematoma Using Deep Learning on Computed Tomography
- Authors: Vasiliki Stoumpou, Rohan Kumar, Bernard Burman, Diego Ojeda, Tapan Mehta, Dimitris Bertsimas
- Abstract summary: Subdural hematoma (SDH) is a common neurosurgical emergency, with increasing incidence in aging populations. There remains a need for transparent, high-performing systems that integrate multimodal clinical and imaging information. We developed a multimodal deep-learning framework that integrates structured clinical variables, a 3D convolutional neural network trained on CT volumes, and a 2D segmentation model for SDH detection and localization.
- Score: 3.5333115085185614
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background. Subdural hematoma (SDH) is a common neurosurgical emergency, with increasing incidence in aging populations. Rapid and accurate identification is essential to guide timely intervention, yet existing automated tools focus primarily on detection and provide limited interpretability or spatial localization. There remains a need for transparent, high-performing systems that integrate multimodal clinical and imaging information to support real-time decision-making. Methods. We developed a multimodal deep-learning framework that integrates structured clinical variables, a 3D convolutional neural network trained on CT volumes, and a transformer-enhanced 2D segmentation model for SDH detection and localization. Using 25,315 head CT studies from Hartford HealthCare (2015-2024), of which 3,774 (14.9%) contained clinician-confirmed SDH, tabular models were trained on demographics, comorbidities, medications, and laboratory results. Imaging models were trained to detect SDH and generate voxel-level probability maps. A greedy ensemble strategy combined complementary predictors. Findings. Clinical variables alone provided modest discriminatory power (AUC 0.75). Convolutional models trained on CT volumes and segmentation-derived maps achieved substantially higher accuracy (AUCs 0.922 and 0.926). The multimodal ensemble integrating all components achieved the best overall performance (AUC 0.9407; 95% CI, 0.930-0.951) and produced anatomically meaningful localization maps consistent with known SDH patterns. Interpretation. This multimodal, interpretable framework provides rapid and accurate SDH detection and localization, achieving high detection performance and offering transparent, anatomically grounded outputs. Integration into radiology workflows could streamline triage, reduce time to intervention, and improve consistency in SDH management.
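The abstract names a greedy ensemble strategy but gives no detail. One common formulation (ensemble selection in the style of Caruana et al.) repeatedly adds, with replacement, whichever model most improves the validation AUC of the averaged probabilities. The Python sketch below is an illustrative assumption, not the authors' implementation; the model names (`clinical_tabular`, `cnn_3d`, `seg_2d`) and the toy data are hypothetical.

```python
# Illustrative sketch of greedy ensemble selection over validation-set
# probabilities. NOT the paper's implementation; names and data are toy.

def roc_auc(y_true, scores):
    """Rank-based AUC: probability a positive case outscores a negative one."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def greedy_ensemble(preds, y_true, max_members=10):
    """Repeatedly add (with replacement) the model whose inclusion most
    improves the AUC of the averaged probabilities; stop when none helps.

    preds: dict mapping model name -> list of predicted probabilities.
    """
    members, best_auc = [], 0.0
    for _ in range(max_members):
        best_name = None
        for name in preds:
            trial = members + [name]
            avg = [sum(preds[m][i] for m in trial) / len(trial)
                   for i in range(len(y_true))]
            score = roc_auc(y_true, avg)
            if score > best_auc:
                best_auc, best_name = score, name
        if best_name is None:  # no candidate improves the ensemble
            break
        members.append(best_name)
    return members, best_auc

# Toy validation labels and per-model probabilities (hypothetical).
y_val = [1, 0, 1, 0, 1, 0]
val_preds = {
    "clinical_tabular": [0.55, 0.60, 0.50, 0.40, 0.52, 0.48],
    "cnn_3d":           [0.90, 0.60, 0.40, 0.20, 0.70, 0.30],
    "seg_2d":           [0.50, 0.70, 0.80, 0.30, 0.60, 0.40],
}
members, ensemble_auc = greedy_ensemble(val_preds, y_val)
```

Because the first pick is always the single best validation model, the final ensemble AUC is never worse than the best individual predictor; selection with replacement lets repeated picks act as an implicit weighting of the members.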
Related papers
- Cancer-Net PCa-MultiSeg: Multimodal Enhancement of Prostate Cancer Lesion Segmentation Using Synthetic Correlated Diffusion Imaging [55.62977326180104]
Current deep learning approaches for prostate cancer lesion segmentation achieve limited performance. We investigate synthetic correlated diffusion imaging (CDI$s$) as an enhancement to standard diffusion-based protocols. Our results establish validated integration pathways for CDI$s$ as a practical drop-in enhancement for PCa lesion segmentation tasks.
arXiv Detail & Related papers (2025-11-11T04:16:12Z)
- REN: Anatomically-Informed Mixture-of-Experts for Interstitial Lung Disease Diagnosis [32.83724094606554]
We introduce Regional Expert Networks (REN), the first anatomically-informed MoE framework tailored specifically for medical image classification. REN leverages anatomical priors to train seven specialized experts, each dedicated to distinct lung lobes and bilateral lung combinations. Through rigorous patient-level cross-validation, REN demonstrates strong generalizability and clinical interpretability.
arXiv Detail & Related papers (2025-10-06T15:35:08Z) - 3DViT-GAT: A Unified Atlas-Based 3D Vision Transformer and Graph Learning Framework for Major Depressive Disorder Detection Using Structural MRI Data [0.0]
Major depressive disorder (MDD) is a prevalent mental health condition that negatively impacts both individual well-being and global public health. This paper develops a unified pipeline that uses Vision Transformers (ViTs) to extract 3D region embeddings from sMRI data and a Graph Neural Network (GNN) for classification.
arXiv Detail & Related papers (2025-09-15T17:10:39Z) - Learning from Heterogeneous Structural MRI via Collaborative Domain Adaptation for Late-Life Depression Assessment [24.340328016766183]
We propose a Collaborative Domain Adaptation framework for LLD detection using T1-weighted MRIs. The framework consists of three stages: supervised training on labeled source data, self-supervised target feature adaptation, and collaborative training on unlabeled target data. Experiments conducted on multi-site T1-weighted MRI data demonstrate that the framework consistently outperforms state-of-the-art unsupervised domain adaptation methods.
arXiv Detail & Related papers (2025-07-30T01:38:32Z) - Explainable Parallel CNN-LSTM Model for Differentiating Ventricular Tachycardia from Supraventricular Tachycardia with Aberrancy in 12-Lead ECGs [4.263117296632119]
We propose a computationally efficient deep learning solution to improve diagnostic accuracy and provide model interpretability for clinical deployment. A novel lightweight parallel deep architecture is introduced. Each pipeline processes individual ECG leads using two 1D-CNN blocks to extract local features. The model achieved 95.63% accuracy (95% CI: 93.07-98.19%), with sensitivity = 95.10%, specificity = 96.06%, and F1-score = 95.12%.
arXiv Detail & Related papers (2025-07-14T12:12:34Z) - A weakly-supervised deep learning model for fast localisation and delineation of the skeleton, internal organs, and spinal canal on Whole-Body Diffusion-Weighted MRI (WB-DWI) [0.0]
We developed an automated deep-learning pipeline based on a 3D patch-based Residual U-Net architecture. We employed a multi-centre WB-DWI dataset comprising 532 scans from patients with Advanced Prostate Cancer (APC) or Multiple Myeloma (MM). Relative median ADC differences between automated and manual full-body delineations were below 10%. The model was 12x faster than the atlas-based registration algorithm.
arXiv Detail & Related papers (2025-03-26T17:03:46Z) - A Continual Learning-driven Model for Accurate and Generalizable Segmentation of Clinically Comprehensive and Fine-grained Whole-body Anatomies in CT [67.34586036959793]
There is no fully annotated CT dataset with all anatomies delineated for training. We propose a novel continual learning-driven CT model that can segment complete anatomies. Our single unified CT segmentation model, CL-Net, accurately segments a clinically comprehensive set of 235 fine-grained whole-body anatomies.
arXiv Detail & Related papers (2025-03-16T23:55:02Z) - Towards Synergistic Deep Learning Models for Volumetric Cirrhotic Liver Segmentation in MRIs [1.5228650878164722]
Liver cirrhosis, a leading cause of global mortality, requires precise segmentation of ROIs for effective disease monitoring and treatment planning.
Existing segmentation models often fail to capture complex feature interactions and generalize across diverse datasets.
We propose a novel synergistic theory that leverages complementary latent spaces for enhanced feature interaction modeling.
arXiv Detail & Related papers (2024-08-08T14:41:32Z) - Classification of Heart Sounds Using Multi-Branch Deep Convolutional Network and LSTM-CNN [7.136933021609078]
This study develops and evaluates novel deep learning architectures that offer fast, accurate, and cost-effective methods for automatic diagnosis of cardiac diseases. We propose two innovative methodologies: first, a Multi-Branch Deep Convolutional Neural Network (MBDCN) that emulates human auditory processing by utilizing diverse convolutional filter sizes and power spectrum input for enhanced feature extraction; second, a Long Short-Term Memory-Convolutional Network (LSCN) model that integrates LSTM blocks with MBDCN to improve time-domain feature extraction.
arXiv Detail & Related papers (2024-07-15T13:02:54Z) - MEDPSeg: Hierarchical polymorphic multitask learning for the segmentation of ground-glass opacities, consolidation, and pulmonary structures on computed tomography [37.119000111386924]
MEDPSeg learns from heterogeneous chest CT targets through hierarchical polymorphic multitask learning (HPML).
We show that HPML enables new state-of-the-art performance for GGO and consolidation segmentation tasks.
In addition, MEDPSeg simultaneously performs segmentation of the lung parenchyma, airways, pulmonary artery, and lung lesions, all in a single forward prediction.
arXiv Detail & Related papers (2023-12-04T21:46:39Z)
- UNesT: Local Spatial Representation Learning with Hierarchical Transformer for Efficient Medical Segmentation [29.287521185541298]
We show that UNesT consistently achieves state-of-the-art performance and evaluate its generalizability and data efficiency.
arXiv Detail & Related papers (2022-09-28T19:14:38Z)
- Hepatic vessel segmentation based on 3Dswin-transformer with inductive biased multi-head self-attention [46.46365941681487]
We propose a robust end-to-end vessel segmentation network called Inductive BIased Multi-Head Attention Vessel Net.
We introduce the voxel-wise embedding rather than patch-wise embedding to locate precise liver vessel voxels.
In addition, we propose an inductive biased multi-head self-attention that learns inductive biased relative positional embeddings from absolute position embeddings.
arXiv Detail & Related papers (2021-11-05T10:17:08Z)
- 3D Graph Anatomy Geometry-Integrated Network for Pancreatic Mass Segmentation, Diagnosis, and Quantitative Patient Management [21.788423806147378]
We investigate the feasibility of distinguishing pancreatic ductal adenocarcinoma (PDAC) from the nine other non-PDAC masses using multi-phase CT imaging.
We propose a holistic segmentation-mesh-classification network (SMCN) to provide patient-level diagnosis.
arXiv Detail & Related papers (2020-12-08T19:38:01Z)
- Harvesting, Detecting, and Characterizing Liver Lesions from Large-scale Multi-phase CT Data via Deep Dynamic Texture Learning [24.633802585888812]
We propose a fully-automated and multi-stage liver tumor characterization framework for dynamic contrast computed tomography (CT).
Our system comprises four sequential processes of tumor proposal detection, tumor harvesting, primary tumor site selection, and deep texture-based tumor characterization.
arXiv Detail & Related papers (2020-06-28T19:55:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.