GastroViT: A Vision Transformer Based Ensemble Learning Approach for Gastrointestinal Disease Classification with Grad CAM & SHAP Visualization
- URL: http://arxiv.org/abs/2509.26502v1
- Date: Tue, 30 Sep 2025 16:44:41 GMT
- Title: GastroViT: A Vision Transformer Based Ensemble Learning Approach for Gastrointestinal Disease Classification with Grad CAM & SHAP Visualization
- Authors: Sumaiya Tabassum, Md. Faysal Ahamed, Hafsa Binte Kibria, Md. Nahiduzzaman, Julfikar Haider, Muhammad E. H. Chowdhury, Mohammad Tariqul Islam
- Abstract summary: This paper presents an ensemble of pre-trained vision transformers (ViTs) for accurately classifying endoscopic images of the GI tract. ViTs, attention-based neural networks, have revolutionized image recognition by leveraging the transformer architecture. The proposed model was evaluated on the publicly available HyperKvasir dataset with 10,662 images of 23 different GI diseases.
- Score: 6.752543644823974
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The human gastrointestinal (GI) tract can exhibit a wide variety of mucosal abnormalities, ranging from mild irritations to life-threatening illnesses. Prompt identification of gastrointestinal disorders helps arrest disease progression and improves therapeutic outcomes. This paper presents an ensemble of pre-trained vision transformers (ViTs) for accurately classifying endoscopic images of the GI tract to categorize gastrointestinal problems and illnesses. ViTs, attention-based neural networks, have revolutionized image recognition by leveraging the transformer architecture, achieving state-of-the-art (SOTA) performance across various visual tasks. The proposed model was evaluated on the publicly available HyperKvasir dataset, which contains 10,662 images of 23 different GI diseases. An ensemble method is proposed that combines the predictions of two pre-trained models, MobileViT_XS and MobileViT_V2_200, which individually achieved accuracies of 90.57% and 90.48%, respectively. The ensemble model, GastroViT, outperforms all individual models, with average precision, recall, F1 score, and accuracy of 69%, 63%, 64%, and 91.98%, respectively, in the first test involving all 23 classes. The model comprises only 20 million (M) parameters, and these results were obtained without data augmentation despite the highly imbalanced dataset. In the second test with 16 classes, the scores are higher still, with average precision, recall, F1 score, and accuracy of 87%, 86%, 87%, and 92.70%, respectively. Additionally, the incorporation of explainable AI (XAI) methods such as Grad-CAM (Gradient-weighted Class Activation Mapping) and SHAP (SHapley Additive exPlanations) enhances model interpretability, providing valuable insights for reliable GI diagnosis in real-world settings.
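The abstract states that GastroViT combines the predictions of MobileViT_XS and MobileViT_V2_200 but does not specify the fusion rule. A common choice for such ensembles is soft voting, i.e. averaging the two models' softmax probabilities; the sketch below illustrates that assumed scheme on toy logits (the arrays are placeholders, not real model outputs):

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_predict(logits_a, logits_b):
    """Soft voting: average the two models' class probabilities,
    then take the argmax as the ensemble prediction."""
    probs = (softmax(logits_a) + softmax(logits_b)) / 2.0
    return probs.argmax(axis=-1), probs

# Toy logits for 2 images over 4 classes (hypothetical values):
# logits_a stands in for MobileViT_XS, logits_b for MobileViT_V2_200.
logits_a = np.array([[2.0, 0.1, 0.1, 0.1],
                     [0.2, 0.2, 3.0, 0.1]])
logits_b = np.array([[1.5, 0.3, 0.2, 0.1],
                     [0.1, 0.4, 2.5, 0.2]])
labels, probs = ensemble_predict(logits_a, logits_b)
```

Soft voting lets a model that is confidently correct outweigh one that is weakly wrong, which is one plausible reason the ensemble could exceed both individual accuracies; the paper itself should be consulted for the exact fusion method used.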
Related papers
- Deep Unsupervised Anomaly Detection in Brain Imaging: Large-Scale Benchmarking and Bias Analysis [42.60508892284938]
We present a large-scale, multi-center benchmark of deep unsupervised anomaly detection for brain imaging. We tested 2,221 T1w and 1,262 T2w scans spanning healthy datasets and diverse clinical cohorts. Our benchmark establishes a transparent foundation for future research and highlights priorities for clinical translation.
arXiv Detail & Related papers (2025-12-01T11:03:27Z) - DeepGI: Explainable Deep Learning for Gastrointestinal Image Classification [0.0]
The study confronts common endoscopic challenges such as variable lighting, fluctuating camera angles, and frequent imaging artifacts. The best performing models, VGG16 and MobileNetV2, each achieved a test accuracy of 96.5%. The approach includes explainable AI via Grad-CAM visualization, enabling identification of image regions most influential to model predictions.
arXiv Detail & Related papers (2025-11-26T22:35:57Z) - Validating Vision Transformers for Otoscopy: Performance and Data-Leakage Effects [42.465094107111646]
This study evaluates the efficacy of vision transformer models, specifically Swin transformers, in enhancing the diagnostic accuracy of ear diseases. The research utilised a real-world dataset from the Department of Otolaryngology at the Clinical Hospital of the Universidad de Chile.
arXiv Detail & Related papers (2025-11-06T23:20:37Z) - An Explainable Hybrid AI Framework for Enhanced Tuberculosis and Symptom Detection [55.35661671061754]
Tuberculosis remains a critical global health issue, particularly in resource-limited and remote areas. We propose a framework which enhances disease and symptom detection on chest X-rays by integrating two supervised heads and a self-supervised head. Our model achieves an accuracy of 98.85% for distinguishing between COVID-19, tuberculosis, and normal cases, and a macro-F1 score of 90.09% for multilabel symptom detection.
arXiv Detail & Related papers (2025-10-21T17:18:55Z) - PhenoKG: Knowledge Graph-Driven Gene Discovery and Patient Insights from Phenotypes Alone [40.61937241424789]
We propose a graph-based approach for predicting causative genes from patient phenotypes, with or without an available list of candidate genes. Our model, combining graph neural networks and transformers, achieves substantial improvements over the current state-of-the-art.
arXiv Detail & Related papers (2025-06-16T05:54:12Z) - Subspecialty-Specific Foundation Model for Intelligent Gastrointestinal Pathology [38.30990764764014]
Digepath is a specialized foundation model for GI pathology. It is pretrained on over 353 million multi-scale images from 210,043 H&E-stained slides of GI diseases. It attains state-of-the-art performance on 33 out of 34 tasks related to GI pathology.
arXiv Detail & Related papers (2025-05-28T03:22:08Z) - Explainable AI-Driven Detection of Human Monkeypox Using Deep Learning and Vision Transformers: A Comprehensive Analysis [0.20482269513546453]
mpox is a zoonotic viral illness that poses a significant public health concern. Early clinical diagnosis is difficult because its symptoms closely match those of measles and chickenpox. Medical imaging combined with deep learning (DL) techniques has shown promise in improving disease detection by analyzing affected skin areas. Our study explores the feasibility of training deep learning and vision transformer-based models from scratch with a publicly available skin lesion image dataset.
arXiv Detail & Related papers (2025-04-03T19:45:22Z) - Enhanced Multi-Class Classification of Gastrointestinal Endoscopic Images with Interpretable Deep Learning Model [0.7349657385817541]
This research introduces a novel approach to enhance classification accuracy using 8,000 labeled endoscopic images from the Kvasir dataset. The proposed architecture eliminates reliance on data augmentation while preserving moderate model complexity. The model achieves a test accuracy of 94.25%, alongside precision and recall of 94.29% and 94.24%, respectively.
arXiv Detail & Related papers (2025-03-02T08:07:50Z) - Capsule Endoscopy Multi-classification via Gated Attention and Wavelet Transformations [1.5146068448101746]
Abnormalities in the gastrointestinal tract significantly influence the patient's health and require a timely diagnosis. The work presents the process of developing and evaluating a novel model designed to classify gastrointestinal anomalies from a video frame. The integration of an Omni Dimensional Gated Attention (OGA) mechanism and wavelet transformation techniques into the model's architecture allowed the model to focus on the most critical areas. The model's performance is benchmarked against two base models, VGG16 and ResNet50, demonstrating its enhanced ability to identify and classify a range of gastrointestinal abnormalities accurately.
arXiv Detail & Related papers (2024-10-25T08:01:35Z) - Domain-Adaptive Pre-training of Self-Supervised Foundation Models for Medical Image Classification in Gastrointestinal Endoscopy [0.024999074238880488]
Video capsule endoscopy has transformed gastrointestinal endoscopy (GIE) diagnostics by offering a non-invasive method for capturing detailed images of the gastrointestinal tract. However, its potential is limited by the sheer volume of images generated during the imaging procedure, which can take anywhere from 6-8 hours and often produces up to 1 million images.
arXiv Detail & Related papers (2024-10-21T22:52:25Z) - Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge [44.76736949127792]
We describe the design and results from the BraTS 2023 Intracranial Meningioma Challenge. The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas. The top-ranked team had a lesion-wise median dice similarity coefficient (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor.
arXiv Detail & Related papers (2024-05-16T03:23:57Z) - Liver Tumor Screening and Diagnosis in CT with Pixel-Lesion-Patient Network [37.931408083443074]
Pixel-Lesion-pAtient Network (PLAN) is proposed to jointly segment and classify each lesion with improved anchor queries and a foreground-enhanced sampling loss.
PLAN achieves 95% and 96% in patient-level sensitivity and specificity.
On contrast-enhanced CT, our lesion-level detection precision, recall, and classification accuracy are 92%, 89%, and 86%, outperforming widely used CNNs and transformers for lesion segmentation.
arXiv Detail & Related papers (2023-07-17T06:21:45Z) - GasHis-Transformer: A Multi-scale Visual Transformer Approach for Gastric Histopathology Image Classification [30.497184157710873]
This paper proposes a multi-scale visual transformer model (GasHis-Transformer) for a gastric histopathology image classification (GHIC) task.
The GasHis-Transformer model is built on two fundamental modules: a global information module (GIM) and a local information module (LIM).
arXiv Detail & Related papers (2021-04-29T17:46:00Z) - Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.