Explainable Deep Learning for Pediatric Pneumonia Detection in Chest X-Ray Images
- URL: http://arxiv.org/abs/2601.09814v1
- Date: Wed, 14 Jan 2026 19:21:32 GMT
- Title: Explainable Deep Learning for Pediatric Pneumonia Detection in Chest X-Ray Images
- Authors: Adil O. Khadidos, Aziida Nanyonga, Alaa O. Khadidos, Olfat M. Mirza, Mustafa Tahsin Yilmaz
- Abstract summary: Pneumonia remains a leading cause of morbidity and mortality among children worldwide. This study compares two state-of-the-art convolutional neural network (CNN) architectures for automated pediatric pneumonia detection.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract:
  Background: Pneumonia remains a leading cause of morbidity and mortality among children worldwide, emphasizing the need for accurate and efficient diagnostic support tools. Deep learning has shown strong potential in medical image analysis, particularly for chest X-ray interpretation. This study compares two state-of-the-art convolutional neural network (CNN) architectures for automated pediatric pneumonia detection.
  Methods: A publicly available dataset of 5,863 pediatric chest X-ray images was used. Images were preprocessed through normalization, resizing, and data augmentation to enhance generalization. DenseNet121 and EfficientNet-B0 were fine-tuned using pretrained ImageNet weights under identical training settings. Performance was evaluated using accuracy, F1-score, Matthews Correlation Coefficient (MCC), and recall. Model explainability was incorporated using Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-agnostic Explanations (LIME) to visualize the image regions influencing predictions.
  Results: EfficientNet-B0 outperformed DenseNet121, achieving an accuracy of 84.6%, an F1-score of 0.8899, and an MCC of 0.6849. DenseNet121 achieved 79.7% accuracy, an F1-score of 0.8597, and an MCC of 0.5852. Both models demonstrated recall values above 0.99, indicating strong sensitivity to pneumonia detection. Grad-CAM and LIME visualizations showed consistent focus on clinically relevant lung regions, supporting the reliability of model decisions.
  Conclusions: EfficientNet-B0 provided a more balanced and computationally efficient performance compared to DenseNet121, making it a strong candidate for clinical deployment. The integration of explainability techniques enhances transparency and trustworthiness in AI-assisted pediatric pneumonia diagnosis.
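The reported metrics (accuracy, recall, F1-score, MCC) all follow directly from the four cells of a binary confusion matrix. The sketch below is illustrative only: `binary_metrics` is a hypothetical helper, and the counts are invented, not the paper's test-set results.

```python
import math

def binary_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Accuracy, recall, F1 and MCC from binary confusion-matrix counts."""
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total
    recall = tp / (tp + fn)          # sensitivity to the positive (pneumonia) class
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    # MCC uses all four cells, so it stays informative on imbalanced test sets;
    # return 0.0 when the denominator vanishes (a degenerate confusion matrix).
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = ((tp * tn) - (fp * fn)) / denom if denom else 0.0
    return {"accuracy": accuracy, "recall": recall, "f1": f1, "mcc": mcc}

# Hypothetical counts for a 624-image test set (for illustration only)
m = binary_metrics(tp=390, tn=140, fp=90, fn=4)
print({k: round(v, 4) for k, v in m.items()})
```

Note how a model can reach very high recall (few false negatives) while the MCC stays moderate because of false positives; this is exactly the trade-off the abstract's recall-above-0.99 figures versus MCC values of 0.58 to 0.68 reflect.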
Related papers
- Deep Learning Approach for the Diagnosis of Pediatric Pneumonia Using Chest X-ray Imaging
This study investigates the performance of state-of-the-art convolutional neural network (CNN) architectures ResNetRS, RegNet, and EfficientNetV2. A curated subset of 1,000 chest X-ray images was extracted from a publicly available dataset originally comprising 5,856 pediatric images. RegNet achieved the highest classification performance with an accuracy of 92.4% and a sensitivity of 90.1%, followed by ResNetRS (accuracy: 91.9%, sensitivity: 89.3%) and EfficientNetV2 (accuracy: 88.5%, sensitivity: 88.1%).
arXiv Detail & Related papers (2025-12-31T00:07:06Z)
- Weakly Supervised Pneumonia Localization from Chest X-Rays Using Deep Neural Network and Grad-CAM Explanations
This study proposes a weakly supervised deep learning framework for pneumonia classification and localization from chest X-rays. Instead of costly pixel-level annotations, our approach utilizes image-level labels to generate clinically meaningful heatmaps.
arXiv Detail & Related papers (2025-11-01T08:44:24Z)
- Explainable Deep Learning in Medical Imaging: Brain Tumor and Pneumonia Detection
This paper presents an explainable deep learning framework for detecting brain tumors in MRI scans and pneumonia in chest X-ray images. DenseNet121 consistently outperformed ResNet50, with 94.3% vs. 92.5% accuracy for brain tumors and 89.1% vs. 84.4% accuracy for pneumonia.
arXiv Detail & Related papers (2025-10-21T22:44:40Z)
- LightPneumoNet: Lightweight Pneumonia Classifier
This study introduces LightPneumoNet, an efficient, lightweight convolutional neural network (CNN) built from scratch. Our model was trained on a public dataset of 5,856 chest X-ray images. On an independent test set, our model delivered exceptional performance, achieving an overall accuracy of 0.942, precision of 0.92, and an F1-score of 0.96.
arXiv Detail & Related papers (2025-10-13T10:14:17Z)
- Development and validation of an AI foundation model for endoscopic diagnosis of esophagogastric junction adenocarcinoma: a cohort and deep learning study
The early detection of esophagogastric junction adenocarcinoma (EGJA) is crucial for improving patient prognosis, yet its current diagnosis is highly operator-dependent. This paper makes the first attempt to develop an artificial intelligence foundation model-based method for both screening and staging diagnosis of EGJA using endoscopic images.
arXiv Detail & Related papers (2025-09-22T12:03:40Z)
- From Embeddings to Accuracy: Comparing Foundation Models for Radiographic Classification
We evaluate embeddings from seven foundation models for training lightweight adapters in radiography classification. The combination of MedImageInsight embeddings with an SVM adapter achieved the highest mean area under the curve (mAUC) of 93.1%. These lightweight adapters are computationally efficient, training in minutes and performing inference in seconds on a CPU, making them practical for clinical use.
arXiv Detail & Related papers (2025-05-16T03:39:46Z)
- InfLocNet: Enhanced Lung Infection Localization and Disease Detection from Chest X-Ray Images Using Lightweight Deep Learning
This paper presents a novel, lightweight deep-learning-based segmentation-classification network designed to enhance the detection and localization of lung infections using chest X-ray images.
Our model achieves remarkable results with an Intersection over Union (IoU) of 93.59% and a Dice Similarity Coefficient (DSC) of 97.61% in lung area segmentation.
arXiv Detail & Related papers (2024-08-12T19:19:23Z)
- Attention-based Saliency Maps Improve Interpretability of Pneumothorax Classification
This work investigates the chest radiograph (CXR) classification performance of vision transformers (ViTs) and the interpretability of attention-based saliency maps.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs had comparable CXR classification AUCs compared with state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z)
- COVID-19 Detection Based on Self-Supervised Transfer Learning Using Chest X-Ray Images
We propose a new learning scheme called self-supervised transfer learning for detecting COVID-19 from chest X-ray (CXR) images.
We provide quantitative evaluation on the largest open COVID-19 CXR dataset and qualitative results for visual inspection.
arXiv Detail & Related papers (2022-12-19T07:10:51Z)
- CIRCA: comprehensible online system in support of chest X-rays-based COVID-19 diagnosis
Deep learning techniques can help in the faster detection of COVID-19 cases and monitoring of disease progression.
Five different datasets were used to construct a representative dataset of 23,799 CXRs for model training.
A U-Net-based model was developed to identify a clinically relevant region of the CXR.
arXiv Detail & Related papers (2022-10-11T13:30:34Z)
- Vision Transformers for femur fracture classification
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classes, using what the authors describe as the largest and richest dataset of its kind.
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
- An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization
We propose a framework to address the unique properties of medical images.
This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions.
It then applies another higher-capacity network to collect details from chosen regions.
Finally, it employs a fusion module that aggregates global and local information to make a final prediction.
arXiv Detail & Related papers (2020-02-13T15:28:42Z)
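Several of the papers above rely on Grad-CAM for explainability. Its core weighting step is simple: average each channel's gradients over the spatial dimensions to get a per-channel weight, then take the ReLU of the weighted sum of activation maps. The sketch below shows only that combination step on toy nested-list tensors; in real use, the activations and gradients come from a CNN's last convolutional layer via backpropagation, which is omitted here.

```python
def grad_cam(activations, gradients):
    """Core Grad-CAM step on [C][H][W] nested lists:
    channel weights = spatial mean of gradients,
    heatmap = ReLU of the weighted sum of activation maps."""
    C, H, W = len(activations), len(activations[0]), len(activations[0][0])
    # alpha_c: global-average-pooled gradient of the class score per channel
    alphas = [sum(sum(row) for row in gradients[c]) / (H * W) for c in range(C)]
    # ReLU keeps only regions that positively support the predicted class
    return [[max(0.0, sum(alphas[c] * activations[c][i][j] for c in range(C)))
             for j in range(W)] for i in range(H)]

# Toy example: 2 channels of 2x2 activations and matching gradients
acts = [[[1.0, 0.0], [0.0, 1.0]],
        [[0.0, 2.0], [2.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]],      # alpha_0 = 1.0
         [[-1.0, -1.0], [-1.0, -1.0]]]  # alpha_1 = -1.0
print(grad_cam(acts, grads))  # → [[1.0, 0.0], [0.0, 1.0]]
```

The negative weight on channel 1 suppresses its activations, so only the diagonal of channel 0 survives the ReLU; overlaying such a heatmap (upsampled) on the input X-ray is what produces the lung-region visualizations described above.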
This list is automatically generated from the titles and abstracts of the papers in this site.